[rabbitmq-discuss] EXT :Re: Memory not flushing to disk

Emile Joubert emile at rabbitmq.com
Tue Apr 23 15:43:01 BST 2013

Hi Logan,

On 23/04/13 15:04, Rodrian, Logan P (IS) wrote:

> Attached (text file) is the output from when this happens.  My
> previous snippets were from when this happens as well, I just didn't
> include the entire report.
> The issue is exactly as you stated--the memory alarm remains set
> indefinitely while ram_msg_counts remain high.  This causes the
> publisher to be unable to send anything further to the broker.

I think you have simply run out of memory. Queue processes account for
only 360MB of the total 2.6GB in use, which is not very much. The bulk of
the RAM is being used by the message index. Even when message bodies are
saved to disk, accounting information amounting to about 170 bytes
per message has to stay in RAM. In your case queue lengths total
over 14 million messages and this per-message overhead is the dominating term.
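As a rough sanity check (the 170 bytes per message is an approximate
figure, and the exact queue total will differ from the round 14 million
used here):

```python
# Approximate RAM needed for the per-message index overhead alone,
# assuming ~170 bytes of accounting data per queued message.
PER_MESSAGE_OVERHEAD_BYTES = 170
queued_messages = 14_000_000

overhead_bytes = queued_messages * PER_MESSAGE_OVERHEAD_BYTES
overhead_gb = overhead_bytes / 1_000_000_000

print(f"Index overhead: {overhead_gb:.2f} GB")  # Index overhead: 2.38 GB
```

That is already close to the 2.6GB total reported in use, which is why
lowering the queue length, not tuning, is the real fix.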

It is not possible to queue an indefinite number of messages on a server
with a finite amount of RAM. The broker documentation should be clearer
about that limitation.

You may be able to relieve the memory pressure by temporarily raising the
VM memory high watermark above the default 40% while publishers are
disconnected:

"rabbitmqctl set_vm_memory_high_watermark 0.5" might be enough.

If you expect the broker to queue that many messages regularly then you
will need to provision more RAM, in line with the 170 byte overhead.

