[rabbitmq-discuss] EXT :Re: Memory not flushing to disk

Rodrian, Logan P (IS) Logan.Rodrian at ngc.com
Tue Apr 23 18:11:33 BST 2013


Thank you for that information.

Let me present our use case in a larger context and see what your thoughts are.  Our goal with using RabbitMQ as a broker was to provide a managed "disk backup" of data while consumers were unavailable, ensuring no data is lost.  In our environment, the consumers may not be available for a long time.  

I understand the use of vm_memory_high_watermark to put a "band-aid" on the problem, but it sounds like we will still run into issues eventually.

So, are there any parameters that can be set on the server to configure it for this type of environment?  This includes use of any alternative plugins that keep the indexes in a different way and/or persist data to the disk using a different approach.

I want to be sure that I have exhausted all possibilities with RabbitMQ in our use case before deciding whether it will work in our environment, so any insight you have in this area would be appreciated.  Perhaps we are attempting to use RabbitMQ in a way other than what it was designed for?

Logan Rodrian

From: Emile Joubert [emile at rabbitmq.com]
Sent: Tuesday, April 23, 2013 08:43
To: Rodrian, Logan P (IS)
Cc: Discussions about RabbitMQ
Subject: Re: EXT :Re: [rabbitmq-discuss] Memory not flushing to disk

Hi Logan,

On 23/04/13 15:04, Rodrian, Logan P (IS) wrote:

> Attached (text file) is the output from when this happens.  My
> previous snippets were from when this happens as well, I just didn't
> include the entire report.
> The issue is exactly as you stated--the memory alarm remains set
> indefinitely while ram_msg_counts remain high.  This causes the
> publisher to be unable to send anything further to the broker.

I think you have simply run out of memory. Queue processes account for
only 360MB of the total 2.6GB in use, which is not very much. The bulk
of the RAM is being used by the message index. Even when message bodies
are saved to disk, accounting information amounting to about 170 bytes
per message has to stay in RAM. In your case queue lengths total over
14 million messages, and this per-message overhead is the dominating term.

It is not possible to queue an indefinite number of messages on a server
with a finite amount of RAM. The broker documentation should be clearer
about that limitation.

You may be able to relieve the memory pressure by temporarily setting
the VM memory limit above 40% while publishers are disconnected:

"rabbitmqctl set_vm_memory_high_watermark 0.5" might be enough.

If you expect the broker to queue that many messages regularly then you
will need to provision more RAM, in line with the 170 byte overhead.
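For sizing purposes, the per-message overhead quoted above can be turned into a rough RAM estimate. A minimal sketch, assuming the ~170 bytes/message figure from this thread (the exact overhead varies by broker version):

```python
# Rough RAM sizing for deep queues: even with message bodies paged to
# disk, the message index keeps roughly 170 bytes per queued message
# in RAM (estimate from this thread; version-dependent).

def index_ram_bytes(messages: int, overhead_bytes: int = 170) -> int:
    """Approximate RAM consumed by the message index alone."""
    return messages * overhead_bytes

# The report discussed here had over 14 million messages queued:
needed = index_ram_bytes(14_000_000)
print(f"{needed / 2**30:.2f} GiB")  # about 2.22 GiB for the index alone
```

This index overhead comes on top of everything else the broker needs (queue processes, connections, Erlang VM), which is why the 2.6GB total was exhausted.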


More information about the rabbitmq-discuss mailing list