[rabbitmq-discuss] server crashes with very fast consumers

alex chen chen650 at yahoo.com
Sun Mar 27 21:29:35 BST 2011


John,

Thanks for looking into this problem!


> When I run run.sh, it fails to fork some of the producers and some of the
> consumers because of Unix kernel limitations, which also cause some of the
> commands in common.sh to fail. I'm assuming for the moment that these
> failures are not affecting the tests.


Actually these failures would affect the tests.  The goal of the tests is to
consume 50 GB of messages from 1000 queues simultaneously.  If run.sh cannot
start 1000 producers, there won't be 50 GB of messages backed up in the
broker.  amqp_producer.c can be modified to publish those messages to 1000
topics from a single process; however, we still need to start 1000 consumers
to consume messages simultaneously.  I am using RHEL5 and did not run into
any kernel limitations with 1000 producers and 1000 consumers.  Have you
tried "ulimit" to work around it?

-alex