[rabbitmq-discuss] server crashes with very fast consumers
John DeTreville
jdetreville at vmware.com
Mon Mar 28 22:21:28 BST 2011
My ulimit says "unlimited."
I will try to build an RHEL system to try this out. My guess, though, is that it will also exhibit no problems, even if it doesn't match your setup exactly. (For example, it will have a limited amount of RAM, since it will be a VM.)
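For what it's worth, the numbers behind that "unlimited" are easy to dump from C if the shell output is ever in doubt. This is just a generic Linux sketch (nothing from the test scripts), printing the two limits that matter for forking 1000 producers and 1000 consumers:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/resource.h>

    /* Print one resource limit, translating RLIM_INFINITY to "unlimited". */
    static void show_limit(const char *name, int resource)
    {
        struct rlimit rl;
        if (getrlimit(resource, &rl) != 0)
            return;
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("%s: unlimited\n", name);
        else
            printf("%s: %llu\n", name, (unsigned long long) rl.rlim_cur);
    }

    int main(void)
    {
        show_limit("max user processes (RLIMIT_NPROC)", RLIMIT_NPROC);
        show_limit("max open files    (RLIMIT_NOFILE)", RLIMIT_NOFILE);
        return 0;
    }

If either of those comes back low on the RHEL box, raising it with ulimit -u / ulimit -n in the shell that launches run.sh should take kernel limits out of the comparison.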
Cheers,
John
On Mar 27, 2011, at 1:29 PM, alex chen wrote:
John,
Thanks for looking into this problem!
> When I run run.sh, it fails to fork some of the producers and some of the consumers because of Unix kernel limitations, which also cause some of the commands in common.sh to fail. I'm assuming for the moment that these failures are not affecting the tests.
Actually, these failures would affect the tests. The goal of the tests is to consume 50 GB of messages from 1000 queues simultaneously. If run.sh cannot start 1000 producers, there won't be 50 GB of messages backed up in the broker. amqp_producer.c could be modified to publish those messages to 1000 topics from a single process; however, we would still need to start 1000 consumers to consume the messages simultaneously. I am using RHEL5 and did not run into any kernel limitations with 1000 producers and 1000 consumers. Have you tried "ulimit" to work around it?
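Something along these lines would work for the single-process variant. It is only a sketch against the rabbitmq-c API (amqp_open_socket/amqp_set_sockfd), with the host, credentials, exchange name, and message count/size made up for illustration; error checking is omitted for brevity:

    #include <stdio.h>
    #include <string.h>
    #include <amqp.h>
    #include <amqp_framing.h>

    int main(void)
    {
        amqp_connection_state_t conn = amqp_new_connection();
        int sockfd = amqp_open_socket("localhost", 5672);  /* assumed host/port */
        amqp_set_sockfd(conn, sockfd);
        amqp_login(conn, "/", 0, 131072, 0, AMQP_SASL_METHOD_PLAIN,
                   "guest", "guest");
        amqp_channel_open(conn, 1);

        amqp_basic_properties_t props;
        memset(&props, 0, sizeof(props));
        props._flags = AMQP_BASIC_CONTENT_TYPE_FLAG | AMQP_BASIC_DELIVERY_MODE_FLAG;
        props.content_type = amqp_cstring_bytes("text/plain");
        props.delivery_mode = 2;  /* persistent, so the backlog hits the disk */

        char payload[4096];
        memset(payload, 'x', sizeof(payload));
        amqp_bytes_t body;
        body.len = sizeof(payload);
        body.bytes = payload;

        /* Round-robin the publishes over 1000 routing keys so one process
           backs messages up behind 1000 topic bindings. */
        for (long i = 0; i < 1000000; i++) {
            char key[32];
            snprintf(key, sizeof(key), "test.topic.%ld", i % 1000);
            amqp_basic_publish(conn, 1, amqp_cstring_bytes("amq.topic"),
                               amqp_cstring_bytes(key), 0, 0, &props, body);
        }

        amqp_channel_close(conn, 1, AMQP_REPLY_SUCCESS);
        amqp_connection_close(conn, AMQP_REPLY_SUCCESS);
        amqp_destroy_connection(conn);
        return 0;
    }

The consumer side still needs 1000 real connections, of course, so ulimit -n in the consumers' shell remains the limit to watch there.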
-alex