[rabbitmq-discuss] RabbitMQ (branch bug21673) running out of file descriptors
smirnov.andrey at gmail.com
Thu Feb 18 15:38:53 GMT 2010
Thanks a lot, Matthew!
Rebuilt from source, running currently with new version. Will report
back how it goes.
Actually, I have about 13 TCP connections to the broker and this number is
2010/2/18 Matthew Sackman <matthew at lshift.net>:
> Hi Andrey,
> On Thu, Feb 18, 2010 at 10:58:34AM +0300, Andrey Smirnov wrote:
>> We're running RabbitMQ from branch bug21673 (built yesterday). We have
>> a lot of non-persistent queues (2k - 10k+) that are created/destroyed
>> all the time. Some queues have unconsumed messages, and they get
>> destroyed with those messages still unconsumed.
>> When the file descriptor limit for RabbitMQ was left at the default of
>> 1024 files, it died within 2 hours. I raised the limit to 10240 files
>> and it worked for 16 hours before crashing.
>> To clients, the crash looks like an "INTERNAL_ERROR" on their
>> connections. RabbitMQ itself is still running and eating 100% of CPU;
>> restarting RabbitMQ revives it.
> Many thanks for the bug report. The short version is that this is now
> fixed - if you pull again and rebuild, it should all "just work" now.
> The long version is in the commit comment
> (http://hg.rabbitmq.com/rabbitmq-server/rev/56a0daed8b4e). I can now
> artificially limit Rabbit to approx 3 file descriptors, and it still
> runs some fairly aggressive tests perfectly well. Note that whilst on
> branch bug21673, Rabbit is able to very flexibly redistribute file
> handles as necessary, this obviously doesn't apply to sockets. So you
> should make sure you always have at least, say, 100 more file
> descriptors available to Rabbit than you need sockets.
> Please try retesting - provided you don't have more than about 900
> simultaneous connections open, you shouldn't need to raise the ulimit
> above 1024, though you might find performance improves if you do.
> Do let us know how you get on.
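(A quick way to sanity-check the ulimit advice above is to inspect the limits the broker will inherit before starting it. This is a Linux-specific sketch; it uses the shell's own PID as a stand-in for the beam process, so substitute the real RabbitMQ PID when checking a running broker.)

```shell
# Show the soft and hard open-file limits for this shell;
# a RabbitMQ node started from here inherits them.
echo "soft limit: $(ulimit -Sn)"
echo "hard limit: $(ulimit -Hn)"

# Count the file descriptors a process currently has open
# (using this shell's own PID, $$, as a demonstration).
ls /proc/$$/fd | wc -l
```

Comparing the descriptor count against the soft limit shows how much headroom is left; per the advice above, you want at least ~100 more descriptors than your expected socket count.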