[rabbitmq-discuss] Proper protocol for dealing with crash dumps?

Matthias Radestock matthias at rabbitmq.com
Sun Oct 7 08:05:22 BST 2012


On 07/10/12 06:18, Alex Zepeda wrote:
> These machines have under 2GB of RAM so allocating in excess of 700MB
> seems a bit odd.

The total memory and high watermark are shown in the rabbit logs, e.g.
something like

=INFO REPORT==== 3-Oct-2012::20:31:08 ===
Memory limit set to 4814MB of 12036MB total.

So check that these figures make sense for your setup.
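For reference, the limit in that log line is just the default watermark fraction (0.4 of total RAM, assuming you haven't changed vm_memory_high_watermark) applied to the detected memory; a quick sketch of the arithmetic:

```shell
# Default high watermark is 0.4 of total RAM (integer MB, truncated)
total_mb=12036
limit_mb=$(( total_mb * 4 / 10 ))
echo "$limit_mb"            # 4814, matching the log line above

# On a box with just under 2GB, the default limit works out to roughly:
echo $(( 2048 * 4 / 10 ))   # 819
```

So on a ~2GB machine, rabbit using in excess of 700MB is still below the default watermark, which is why no alarm would fire at that point.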

> In an ideal world we'd see around 48,000 messages per day *at the
> very most*.  In practice, we're running into problems where an order
> of magnitude more messages are being queued up under some
> circumstances... but I'd expect that rabbit should handle that
> gracefully and block the connection, no?

When under memory pressure rabbit pages messages to disk and blocks
producers to control the rate of message influx so it can keep up.
Hence high message volumes alone should not cause rabbit to run out of
memory.
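Both thresholds can be tuned in rabbitmq.config if the defaults don't suit your machine; a sketch showing the standard settings at their default values:

```erlang
[{rabbit, [
  %% block publishers once the broker uses 40% of RAM (the default)
  {vm_memory_high_watermark, 0.4},
  %% start paging messages to disk at 50% of the watermark (the default)
  {vm_memory_high_watermark_paging_ratio, 0.5}
]}].
```

Lowering the paging ratio makes rabbit shed messages to disk earlier, well before producers get blocked.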

As suggested previously, when rabbit is using more memory than you
expect, the output of 'rabbitmqctl report' should shed some light on
where it's going.
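As a rough sketch (assuming rabbitmqctl is on the PATH and the node is up):

```shell
# Full diagnostic dump, including memory breakdown and per-queue stats
rabbitmqctl report > report.txt

# Or just the queues, with message counts and per-queue memory use
rabbitmqctl list_queues name messages memory
```

The per-queue memory column usually makes it obvious whether the usage is concentrated in one backed-up queue or spread across connections.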


