[rabbitmq-discuss] problem with new persister with 1000 topics/queues

Matthew Sackman matthew at rabbitmq.com
Tue Aug 17 18:17:21 BST 2010

Hi Alex,

On Tue, Aug 17, 2010 at 12:21:41AM -0700, alex chen wrote:
> However, there is unexpected memory usage when consuming from 200 GB of
> backed-up data.
> First, when I tried to publish to 1000 topics/queues, once the queue size
> reached 100 GB, memory usage would grow to 4 GB. Our system has 8 GB of
> RAM. With the default high watermark of 0.4, the broker started to throttle
> the publisher, and the publish rate would drop from 80 MB/sec to 10 MB/sec.

Hmm, that's unfortunate. Really, you should only be bound by disk write
speed at that point, and I'm sure your disks can do more than 10 MB/sec.
You might try tuning the msg_store_file_size_limit setting: the default
is 16777216 bytes (i.e. 16 MB), but try pushing that up to 67108864
(i.e. 64 MB) and you might find Rabbit can drive the disks harder.
You'll need to set that in rabbitmq.config with something like:

[{rabbit, [{msg_store_file_size_limit, 67108864}]}].

> When the queue size reached 200 GB, I started the consumer, and memory
> usage would grow to 10 GB. The system started swapping and the consume
> rate dropped from 30 MB/sec to < 10 MB/sec.

We can reproduce that here. I have a few theories about this, but they
need more testing and debugging before I can say anything definite. Is
there any chance you could send us your test code? It'd be good to see
exactly what you're doing.

> Later I tried increasing the high watermark to 0.7. The broker's memory
> usage peaked at around 6.6 GB while publishing 200 GB. For some reason
> the throttling did not occur, so the publish rate stayed at 80 MB/sec.
> However, when starting to consume the messages from the 200 GB backlog,
> memory usage would still reach 10 GB. The system started swapping and
> the consume rate dropped from 30 MB/sec to 20 MB/sec, which is better
> than the 0.4 watermark case.

Mmm. Try seeing what happens if you reduce the high watermark to 0.2 or
lower. Yes, publishing will be slower, but it might help with memory
usage when consuming.
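
For reference, the watermark is set via vm_memory_high_watermark in
rabbitmq.config, merged with any other rabbit settings you already have
there, e.g. (the 0.2 value here is just the suggestion above, not a
recommended default):

[{rabbit, [{vm_memory_high_watermark, 0.2},
           {msg_store_file_size_limit, 67108864}]}].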

> Any idea how to decrease the broker's memory usage in the 1000-topic
> case? I am not sure whether it is caused by a bug or is something that
> cannot be avoided.
> Thanks.

We're not quite sure yet either, I'm afraid. As I say, I have a few
theories I want to test out, but it could be an issue with the Erlang
garbage collector. Also, which version of Erlang are you using?
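
If you're not sure, one way to check the OTP release from the shell is
something like:

erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'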

Best wishes,

