[rabbitmq-discuss] problem with new persister with 1000 topics/queues

Matthias Radestock matthias at rabbitmq.com
Tue Aug 24 17:18:07 BST 2010


On 18/08/10 22:06, alex chen wrote:
> I think the big puzzle is: for both 100 topics and 1000 topics, when
> there are 200 GB of messages queued in total, the number of messages
> is the same (10M at a 20 KB message size). Why does the 1000-topic
> case use much more memory than the 100-topic case?

Based on your previous comments I am assuming you have 100/1000 
*queues*, right?

With 1000 queues the memory is more fragmented than with 100 queues, and 
a lot more is going on in parallel. That makes it harder for the 
persister to keep memory usage in check, particularly if your file 
descriptor limit is also low.
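A quick way to check the limit before starting the broker (a sketch; how you raise it persistently depends on your distribution, e.g. /etc/security/limits.conf):

```shell
# Soft limit: the per-process cap on open file descriptors
# that currently applies to processes started from this shell
ulimit -Sn

# Hard limit: the ceiling up to which a non-root process
# may raise its own soft limit
ulimit -Hn
```

With many queues, the persister holds more files open at once, so a low soft limit forces it to churn file handles instead of keeping them cached.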

> [With a watermark of 0.2] the memory usage while consuming becomes
> lower, but the throughput also goes down. So far the highest
> throughput we get is by setting the memory high watermark to 0.6
> (20-30 MB/sec consume rate). The only problem is that memory then
> grows to 10 GB. Decreasing the watermark lowers the memory usage,
> but at the same time the throughput decreases.

Decreasing the watermark will inevitably reduce throughput, since 
messages get paged to/from disk more frequently and in smaller batches.
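For reference, the watermark is set in rabbitmq.config as a fraction of installed RAM (a sketch assuming the RabbitMQ 2.x Erlang-terms config format; the 0.6 value is the one from the experiment above):

```erlang
%% rabbitmq.config -- the broker starts paging messages to disk and
%% throttling publishers as memory use approaches this fraction of RAM
[{rabbit, [{vm_memory_high_watermark, 0.6}]}].
```

The default is deliberately conservative (0.4 in that era), so raising it trades memory headroom for throughput exactly as observed.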
