[rabbitmq-discuss] problem with new persister with 1000 topics/queues
alex chen
chen650 at yahoo.com
Thu Oct 28 00:39:01 BST 2010
Matthias,
> That is expected. When clients are publishing at a higher rate than the broker can write to disk, and consumers aren't draining the queues, then eventually rabbit will have to pause producers so it can page messages to disk and free up some memory.
I was able to increase vm_memory_high_watermark to 0.8 (6.4 GB) and sustained a 50 MB/sec publish rate all the way to 200 GB of stored messages. The broker used 5.9 GB and there was no throttling of publishers. However, when I started 1000 consumers, the same old errors happened again, as described in my first email: lots of consumers got errors on login or basic_consume:
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-August/008420.html
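For reference, my /etc/rabbitmq/rabbitmq.config for this test is essentially the following (a minimal sketch: the 0.8 value is from the run above, everything else is left at its defaults):

[
  {rabbit, [
    %% let the broker use up to 80% of system RAM
    %% before it starts throttling publishers
    {vm_memory_high_watermark, 0.8}
  ]}
].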
I am using Erlang R14B for this test.
> That threshold setting via vm_memory_high_watermark definitely still works in 2.1.1.
Yes, it works now. My /etc/rabbitmq/rabbitmq.config was a symlink instead of a regular file, and the rabbitmq-server script uses -f rather than -e to check that the file exists, so it did not pick up my symlinked config.
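For anyone else hitting this, the check is essentially of the following shape (a hypothetical sketch, not the actual rabbitmq-server code; the variable layout here is mine):

#!/bin/sh
# sketch of the config-existence check done at broker startup
CONFIG_FILE=/etc/rabbitmq/rabbitmq    # path without the .config suffix
# -f requires a regular file; -e only requires that the path exist
if [ -e "${CONFIG_FILE}.config" ]; then
    echo "config found: ${CONFIG_FILE}.config"
else
    echo "no config found; broker starts with defaults"
fi

For now I replaced the symlink with a regular file and the setting is picked up.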
Thanks.
-alex