[rabbitmq-discuss] server crashes with very fast consumers

John DeTreville jdetreville at vmware.com
Mon Mar 28 23:38:48 BST 2011


One more question. You say RabbitMQ crashes when you run these tests? And it crashes without writing anything interesting in the logs or printing anything to the console? It just exits?

Cheers,
John

On Mar 27, 2011, at 1:29 PM, alex chen wrote:

John,

Thanks for looking into this problem!

> When I run run.sh, it fails to fork some of the producers and some of the consumers because of Unix kernel limitations, which also cause some of the commands in common.sh to fail. I'm assuming for the moment that these failures are not affecting the tests.

Actually, these failures would affect the tests.  The goal of the tests is to consume 50 GB of messages from 1000 queues simultaneously.  If run.sh cannot start 1000 producers, there won't be 50 GB of messages backed up in the broker.  amqp_producer.c could be modified to publish those messages to 1000 topics from a single process; however, we would still need to start 1000 consumers to consume messages simultaneously.  I am using RHEL5 and did not run into any kernel limitations with 1000 producers and 1000 consumers.  Have you tried "ulimit" to work around it?
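For illustration, a minimal single-process publisher along those lines might look like the sketch below. This is not the actual amqp_producer.c: the host, credentials, exchange name ("amq.topic"), routing-key pattern ("test.N"), message size, and total volume are all placeholders, and it uses the newer rabbitmq-c socket API (amqp_tcp_socket_new), so older library versions would need amqp_open_socket/amqp_set_sockfd instead.

/* Sketch: one process spreading a ~50 GB backlog across 1000 topics. */
#include <stdio.h>
#include <string.h>
#include <amqp.h>
#include <amqp_tcp_socket.h>

#define NUM_TOPICS  1000
#define MSG_SIZE    4096            /* assumed payload size */
#define TOTAL_BYTES (50ULL << 30)   /* ~50 GB backlog in total */

int main(void) {
    amqp_connection_state_t conn = amqp_new_connection();
    amqp_socket_t *socket = amqp_tcp_socket_new(conn);
    if (!socket || amqp_socket_open(socket, "localhost", 5672)) {
        fprintf(stderr, "cannot open TCP connection to broker\n");
        return 1;
    }
    amqp_login(conn, "/", 0, 131072, 0, AMQP_SASL_METHOD_PLAIN,
               "guest", "guest");
    amqp_channel_open(conn, 1);
    amqp_get_rpc_reply(conn);

    char body[MSG_SIZE];
    memset(body, 'x', sizeof(body));

    /* Round-robin the routing key over NUM_TOPICS values so the broker
     * backs the messages up in 1000 queues bound to a topic exchange. */
    unsigned long long sent = 0;
    for (unsigned long long i = 0; sent < TOTAL_BYTES; i++, sent += MSG_SIZE) {
        char key[32];
        snprintf(key, sizeof(key), "test.%llu", i % NUM_TOPICS);
        amqp_basic_publish(conn, 1,
                           amqp_cstring_bytes("amq.topic"),
                           amqp_cstring_bytes(key),
                           0, 0, NULL,
                           (amqp_bytes_t){ .len = sizeof(body), .bytes = body });
    }

    amqp_channel_close(conn, 1, AMQP_REPLY_SUCCESS);
    amqp_connection_close(conn, AMQP_REPLY_SUCCESS);
    amqp_destroy_connection(conn);
    return 0;
}

Each of the 1000 consumers would then bind its own queue to the topic exchange with one of the "test.N" keys, so only the consumer side still needs 1000 processes; the usual knobs for the fork/fd limits on Linux are ulimit -u and ulimit -n.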

-alex


