[rabbitmq-discuss] unexplained broker shutdown

Matthew Sackman matthew at rabbitmq.com
Mon Jul 18 13:20:48 BST 2011

On Mon, Jul 18, 2011 at 02:04:13PM +0200, PADIOU Pierre-Marie (MORPHO) wrote:
> Can the broker make the decision to page messages even if there is plenty of ram left (say 30GB out of 40GB?), just on its estimation?


The worst possible implementation of paging would be to wait until all
RAM is used up, and then write out potentially millions of messages.

Essentially, Rabbit has to decide how to allocate RAM (and actually more
importantly, disk bandwidth) to queues. In many ways, it tries to
prioritise the allocation of RAM to queues that are very active (i.e.
with lots of msgs flowing into and/or out of the queue), because such
queues would not be able to cope with the performance hit of being
written to disk.

To avoid the crow-bar effect of suddenly having to write out millions of
messages, Rabbit starts writing out messages fairly early on. In most
cases, should RAM remain relatively plentiful, these msgs will only be
written asynchronously, and never read back, because a copy remains in
RAM until memory pressure forces the RAM copy to be discarded.
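The early-write idea above can be sketched in a toy model (illustrative only; `PagingQueue`, its fields, and the fixed `ram_limit` are my own simplifications, not Rabbit's actual queue implementation):

```python
from collections import OrderedDict

class PagingQueue:
    """Toy sketch of early paging: every message is copied to
    (simulated) disk as soon as it arrives, so that later, under
    memory pressure, the RAM copy can simply be dropped -- no
    last-minute bulk write is ever needed."""

    def __init__(self, ram_limit):
        self.ram_limit = ram_limit  # max messages kept in RAM
        self.ram = OrderedDict()    # msg_id -> payload (RAM copies)
        self.disk = {}              # msg_id -> payload (async writes)
        self.next_id = 0

    def publish(self, payload):
        msg_id = self.next_id
        self.next_id += 1
        self.ram[msg_id] = payload
        # Early async write: the disk copy exists even while RAM
        # is plentiful; if memory never gets tight, it is never read.
        self.disk[msg_id] = payload
        # Under memory pressure, discard the oldest RAM copies only.
        while len(self.ram) > self.ram_limit:
            self.ram.popitem(last=False)
        return msg_id

    def fetch(self, msg_id):
        # Serve from RAM if the copy survives, else read from disk.
        return self.ram.get(msg_id, self.disk.get(msg_id))
```

With `ram_limit=2`, publishing four messages evicts the first two from RAM, yet all four remain fetchable because the disk copies were written up front.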

Because Rabbit takes into account the effects of garbage collection and
other overheads, it is possible that, with the default configuration, it
will start paging messages out to disk when it is using only 8% of the
RAM installed in the machine.

The goal is for Rabbit to try to stay as responsive as possible in all
its operations for as long as possible, as memory gets tighter and
tighter. Once that goal fails (i.e. we have to block publishers because,
ultimately, they're publishing msgs faster than we can write them to
disk), the next goal is to ensure that Rabbit will always be able to
recover (assuming disk space is plentiful) to the point where it can
accept the next message sent to it.
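The blocking behaviour described above can be sketched as a simple watermark gate (a toy model; `MemoryAlarm`, its counters, and the threshold values are illustrative assumptions, not Rabbit's actual memory-alarm mechanism):

```python
import threading

class MemoryAlarm:
    """Toy sketch of publisher blocking: once estimated memory use
    would cross a high watermark, publishers wait; paging messages
    out frees RAM and wakes them, so the broker can always recover
    to accept the next message."""

    def __init__(self, high_watermark):
        self.high_watermark = high_watermark  # RAM budget (arbitrary units)
        self.used = 0                         # current estimated usage
        self.cond = threading.Condition()

    def publish(self, size):
        with self.cond:
            # Block the publisher while accepting the message would
            # exceed the watermark.
            while self.used + size > self.high_watermark:
                self.cond.wait()
            self.used += size

    def page_out(self, size):
        with self.cond:
            # Paging to disk frees RAM; wake any blocked publishers.
            self.used = max(0, self.used - size)
            self.cond.notify_all()
```

Here publishers are only ever paused, never refused: as long as paging can keep freeing RAM (i.e. disk space is plentiful), a blocked `publish` eventually proceeds.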
