[rabbitmq-discuss] Node reaches RAM High Watermark quickly

Simon MacMullen simon at rabbitmq.com
Fri Sep 16 17:02:58 BST 2011


On 12/09/11 23:21, Chris Larsen wrote:
> Hello, we’re running a bunch of nodes of RabbitMQ 2.3.1 on Erlang R13B03
> under Debian and one of the nodes has a problem where it reaches the
> high watermark, stops accepting producers and requires a reboot to
> recover. The consumers eat all of the messages so that the queues are
> empty but the used RAM never drops. I checked for queue leakage and we
> only have 9 queues defined with about 100 unacked messages in one queue
> when the RAM fills up. We only have about 30 connections and fewer than
> 20 channels shown in the GUI when it fails. The system isn’t running in
> swap and messages are sometimes a couple of MB in size but no larger.
> Any ideas what it might be? Thank you!

There was a memory leak bug in 2.3.1 when using confirms with the 
immediate flag - but "immediate" is pretty rarely used and it was 
probably a slow leak. Still, if you are using that combination, that's 
suspicious.

What does mgmt say about the memory use for each queue? Unfortunately 
there's also a bug in 2.3.1 which makes mgmt report inflated memory use 
for idle queues, but it should be accurate enough to see any gross problems.
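If you'd rather stay on the command line, rabbitmqctl can also report
per-queue memory (the "memory" info item). As a sketch - the queue names
and numbers below are made-up sample output, not from your broker - the
real command prints tab-separated lines which you can sort to surface the
heaviest queue first:

```shell
# On the broker the command would be:
#   rabbitmqctl list_queues name memory messages_unacknowledged
# It prints tab-separated lines like the sample below; sorting numerically
# on the second column (memory in bytes) puts the heaviest queue first.
printf 'orders\t104857600\t100\nlogs\t2097152\t0\n' \
  | sort -t "$(printf '\t')" -k2 -rn
```

A queue holding a few hundred unacked multi-MB messages would stand out
immediately at the top of that list.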

Also, you can get slightly more detailed global memory statistics than 
are shown in the GUI with:

rabbitmqadmin -f long list nodes

or something like:

curl -u guest:guest http://localhost:55672/api/nodes

- it might be instructive to see those.
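For what it's worth, /api/nodes returns a JSON list with one object per
node; a minimal sketch of pulling out the watermark-relevant numbers
(field names like mem_used / mem_limit / mem_alarm assumed from the
management API - check against what your broker actually returns):

```python
import json

# Trimmed, made-up sample of a /api/nodes response; on a live broker you
# would feed in the body of the curl call above instead.
sample = '''[{"name": "rabbit@host1",
              "mem_used": 320000000,
              "mem_limit": 400000000,
              "mem_alarm": false}]'''

for node in json.loads(sample):
    # mem_limit is the high watermark in bytes; mem_alarm flips to true
    # when the node has blocked producers.
    pct = 100.0 * node["mem_used"] / node["mem_limit"]
    print("%s: %.1f%% of high watermark, alarm=%s"
          % (node["name"], pct, node["mem_alarm"]))
```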

Other than that, the standard advice is to upgrade to 2.6.1 and see if 
the problem goes away.

Cheers, Simon

-- 
Simon MacMullen
RabbitMQ, VMware
