Memory leaks #67
Conversation
… exception. This small if statement solves it.
… will leak memory. This caused http://speedo.no.de/ to go up by 1 MB per connection after an ECONNRESET message.
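(A minimal sketch of the kind of guard being discussed here, assuming a plain Node.js net socket; the handler shape is an illustration, not the actual patch:)

```js
// Sketch: handle the 'error' event so an ECONNRESET is not thrown as an
// uncaught exception, and destroy the socket so it can be garbage collected.
// `connection` is an assumed name for the underlying net socket.
connection.on('error', function (err) {
  if (err.code === 'ECONNRESET') {
    connection.destroy(); // release the handle instead of leaking it
  }
});
```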
Thanks for the patch. http://github.com/LearnBoost/Socket.IO-node/commit/999eba68dc647738d425666d777f47817627e474
I wonder if your edits will work. If I remember correctly, I did attempt to fix it that way, but some errors were still uncaught.
But then those leaks will be unrelated to socket.io :)
I would love that to be true, but the only thing running on my server is socket.io :p Anyway, I spotted an error in your commit: http://github.com/LearnBoost/Socket.IO-node/commit/999eba68dc647738d425666d777f47817627e474#diff-2 You don't have req and socket variables there.
Sure, it's not consistent with this.connection, but websocket.js will be cleaned up soon anyway.
Whoops, I totally missed that. And a clean-up sounds great :)
But you missed: http://github.com/3rd-Eden/Socket.IO-node/commit/eeab2fd1534a12db4e2ee7996b58bc5de70f0172 where the options.log gives an error, which causes a LOT of ECONNRESET errors.
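(Hypothetically, a defensive guard along these lines would avoid that error; `this.options.log` mirrors the option named above, and the surrounding method is assumed:)

```js
// Sketch: only invoke the configured logger when one is actually a function,
// so a missing or misconfigured `log` option cannot throw mid-request.
if (typeof this.options.log === 'function') {
  this.options.log('socket.io: client connection reset');
}
```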
I still see it there? Can you re-commit your suggestions?
Will re-commit a load of patches tomorrow; found another issue a few minutes ago. And this time I will leave out the console.logs so it's easier for you ;)
Hi,
http://speedo.no.de/ has been plagued with memory leaks for a while now. I have been debugging this issue since the end of NKO, and I finally found the issues. After an ECONNRESET error occurred, each request started taking about 1 MB per connection o_O (still no clue why). Anyway, I was able to backtrace all the errors and capture them using the error event. By closing the connections in every possible way, I was able to close several leaks. Yes, they all leaked...
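(A rough sketch of that idea, assuming Node.js net sockets; the function name and event wiring are illustrations based on the description, not the actual patches:)

```js
// Sketch: whichever way a connection dies (error, end, timeout), run one
// dispose routine so nothing keeps a reference to the dead socket.
function attachCleanup(socket) {
  function dispose() {
    socket.removeAllListeners(); // drop handlers that would pin the socket
    socket.destroy();            // close the connection in every case
  }
  socket.on('error', dispose);
  socket.on('end', dispose);
  socket.on('timeout', dispose);
}
```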
I have been running these patches for a while now with 300 concurrent active streaming users, and my memory usage has never been so low. Normally I would be at around 1 to 2 GB of memory, but now I run at 250 MB.
Awesomeness.
While I was at it I also fixed a small bug that caught my attention.
The only issue that remains to be fixed is that some connected clients stay queued up in the clientList, as I'm still losing memory there (I know this because I log the length of the queue on each connection, and after a server reset the queue list is significantly smaller than the logged stats). Anyway, that is another bug.
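(A minimal sketch of what closing that remaining leak might look like; `clientList` and `client` are assumed names taken from the description above:)

```js
// Sketch: remove the client from the list when its connection closes, so the
// list only tracks live connections and cannot grow past the real count.
socket.on('close', function () {
  var index = clientList.indexOf(client);
  if (index !== -1) {
    clientList.splice(index, 1);
  }
});
```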
Gives us something to hunt for in future versions :)