ETIMEDOUT connecting to AWS Elasticache (non-cluster) #1042
+1. Seeing this error some of the time in AWS Lambdas connecting to ElastiCache.
I have encountered the same problem. Do you have a solution?
@simondutertre @wjt382063576 I'm still trying to find a solution - if I find anything I will post it here - hopefully the maintainers of ioredis will be able to help soon.
I managed to find a working stopgap solution as follows. In AWS ElastiCache Redis: create the instance with in-transit encryption disabled. In code, connect like this:

const connection = new Redis({
  host: "my_aws_redis_instance_url"
});

This is only a temporary solution so I can continue development; encryption will be required for production deployments of my code, so a solution for this timeout problem will still need to be found.
@chrisfinch

this._client = new IORedis({
  tls: {},
  port: 6379,
  host: process.env.REDIS_HOST,
  password: process.env.REDIS_AUTH,
  keyPrefix: `${process.env.LAMBDA_ENV}-`,
  connectTimeout: 17000,
  maxRetriesPerRequest: 4,
  retryStrategy: (times) => Math.min(times * 30, 1000),
  reconnectOnError: (error) => {
    // Reconnect when the error message matches one of these patterns.
    const targetErrors = [/READONLY/, /ETIMEDOUT/];
    return targetErrors.some((targetError) => targetError.test(error.message));
  },
});

You can try it out and check if it works ;)
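As a side note (my addition, not from the comment above), wiring up ioredis connection events makes it easier to tell whether an ETIMEDOUT is a one-off blip followed by a successful reconnect or a connection that never recovers. A minimal sketch, assuming a TLS-enabled endpoint in a REDIS_HOST placeholder:

const Redis = require('ioredis');

const client = new Redis({
  host: process.env.REDIS_HOST,
  port: 6379,
  tls: {},
});

// Log the full connection lifecycle; 'reconnecting' receives the delay in ms.
client.on('error', (err) => console.error('redis error:', err.message));
client.on('reconnecting', (delayMs) => console.log(`redis reconnecting in ${delayMs}ms`));
client.on('connect', () => console.log('redis socket connected'));
client.on('ready', () => console.log('redis ready to accept commands'));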
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 7 days if no further activity occurs, but feel free to re-open a closed issue if needed.
FYI, you have to run your Lambdas in your own VPC to be able to connect to ElastiCache. Double check subnets and security groups.
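To make that check concrete, here is a small sketch (not from the thread) that tests raw TCP reachability of the ElastiCache endpoint from inside the Lambda; if this times out, the problem is the VPC, subnet, or security-group wiring rather than ioredis. REDIS_HOST is a placeholder for the cluster's primary endpoint:

const net = require('net');

function checkReachable(host, port = 6379, timeoutMs = 3000) {
  return new Promise((resolve) => {
    const socket = net.createConnection({ host, port });
    socket.setTimeout(timeoutMs);
    socket.once('connect', () => { socket.destroy(); resolve(true); });
    socket.once('timeout', () => { socket.destroy(); resolve(false); });
    socket.once('error', () => resolve(false));
  });
}

// Example usage inside a handler:
// console.log('TCP reachable:', await checkReachable(process.env.REDIS_HOST));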
I've already double-checked everything you mentioned in this thread but I keep getting timeout errors and I don't know where else to look!
@demian85 Is your Lambda in your own VPC with the right subnet group?
Which subnet group would be the right one?
Did anyone find a solution for this? I am seeing the same issue. @luin
Same issue as well.
I'm still running into the same issue and I don't know what the proper solution is to Lambda aggressively killing the socket and making the connection non-reusable. That is the problem, right?
Thanks.
Hi guys, I'm still having this problem on AWS ElastiCache with encryption in transit and at rest, with TLS. Did you find any solution? If I turn encryption off on AWS, it works, but I need encryption for my production environment.
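One thing worth stating explicitly (my addition, not from this comment): when in-transit encryption is enabled on the cluster, the client must also speak TLS, otherwise the connection attempt typically hangs until it times out instead of failing with a clear error. A minimal sketch with placeholder REDIS_HOST and REDIS_AUTH values; either form enables TLS in ioredis:

const Redis = require('ioredis');

// rediss:// (note the double "s") turns TLS on.
const client = new Redis(`rediss://:${process.env.REDIS_AUTH}@${process.env.REDIS_HOST}:6379`);

// Equivalent option form:
// const client = new Redis({
//   host: process.env.REDIS_HOST,
//   port: 6379,
//   password: process.env.REDIS_AUTH,
//   tls: {},
// });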
Hello, I am on ioredis 4.19.4 and still having this issue. I tried @vcalmic's configs but it still keeps trying to reconnect. I tried to call
I'm also having this issue. I've tried using a very similar config to @vcalmic's but I still get ETIMEDOUT events on my client. I'm wondering if this is just a logging issue and whether these errors can be ignored when you have
I'm also facing this problem.
+1, facing performance issues due to this problem.
+1, facing the same issue.
Encountered the identical problem and successfully resolved it by
@vcalmic @vaibhavphutane Question for you: what does your error handling look like for this issue? I create the Redis client outside of my handler, so that it may be re-used. Right now, my Lambda is timing out at 45 seconds (because the connection is never made). Do you handle the connection error itself, or just let your Lambda time out? What is your Lambda timeout?
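Not an answer from the thread, but one common pattern is to create the client outside the handler with lazyConnect and a short connectTimeout, then connect explicitly inside the handler so an unreachable endpoint surfaces as a fast error instead of the Lambda hitting its own timeout. A sketch with placeholder env names and an illustrative key:

const Redis = require('ioredis');

const client = new Redis({
  host: process.env.REDIS_HOST,
  port: 6379,
  tls: {},
  lazyConnect: true,        // don't dial until the handler asks
  connectTimeout: 5000,     // give up on each TCP/TLS attempt after 5s
  maxRetriesPerRequest: 1,  // surface command failures quickly
  retryStrategy: (times) => (times > 2 ? null : 200), // stop retrying after a few attempts
});

exports.handler = async () => {
  // Dial only if no connection is up; warm invocations reuse the existing socket.
  if (client.status === 'wait' || client.status === 'end') {
    await client.connect(); // throws here instead of letting the Lambda time out
  }
  return client.get('some-key');
};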
Trying to connect to an AWS ElastiCache instance and running into this problem.
Verified that the Redis connection works from an EC2 instance.
Connecting like this:
Receiving ETIMEDOUT errors when connecting through ioredis; timeouts continue indefinitely like the above.
Out of ideas with this one, so if anyone could help I would really appreciate it.