[rabbitmq-discuss] major failure when clearing large queue backlog

Matthew Sackman matthew at rabbitmq.com
Tue Aug 16 13:56:16 BST 2011


On Tue, Aug 16, 2011 at 08:45:27AM -0400, Aaron Westendorf wrote:
> We've been running this configuration for a long time now, and I
> vaguely recall that when the spool-to-disk feature was added, it was
> dependent on this setting. However, I've seen it write to disk even
> with this setting off so I may be mistaken.

Yes, because whilst setting it to 0 stops the broker from throttling
producers when memory gets tight, the queues are forced to assume that
there is 1GB of memory available, and they continue to watch memory
use and write to disk as they see fit. Thus if you have more than 1GB
of RAM in your machines, Rabbit will be writing to disk earlier than it
should.
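
For reference, that threshold lives in rabbitmq.config; a minimal
sketch (0.4 is the shipped default, 0 the setting under discussion):

    %% rabbitmq.config
    [
      {rabbit, [
        %% fraction of installed RAM at which producers get throttled;
        %% 0 turns throttling off but leaves the 1GB assumption above
        {vm_memory_high_watermark, 0.4}
      ]}
    ].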

> Regardless, we don't have
> support in our applications for flow control (yet?) so I've been wary
> to enable vm_memory_high_watermark. I plan to add a concept of flow
> control strategies to haigha so that common use cases can be
> abstracted without per-app implementations.

OK, understood. The way Rabbit does it these days is that it simply
stops reading from the sockets of publishers. The TCP buffers then fill
up, and the publishers find that writing to their socket blocks. We
don't use channel.flow any more. Thus you may find that you need do
nothing at all, or you may have a situation where you really don't
want your publishers blocked.
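
You can see the mechanism with a plain-socket sketch (ordinary
Python, nothing RabbitMQ-specific; the server here just stands in for
a broker that has stopped reading from a publisher's socket):

    import socket
    import threading
    import time

    def sluggish_server(srv):
        conn, _ = srv.accept()
        # Read nothing: like a broker over its memory watermark, we
        # simply stop draining the socket. The client's writes stall
        # once the kernel buffers on both sides are full.
        time.sleep(30)
        conn.close()

    srv = socket.socket()
    srv.bind(('127.0.0.1', 0))
    srv.listen(1)
    threading.Thread(target=sluggish_server, args=(srv,),
                     daemon=True).start()

    cli = socket.socket()
    cli.connect(srv.getsockname())
    cli.settimeout(5)           # fail fast instead of hanging forever
    chunk = b'x' * 65536
    sent = 0
    try:
        while True:
            cli.sendall(chunk)  # eventually blocks, then times out
            sent += len(chunk)
    except socket.timeout:
        print("send blocked after roughly %d bytes buffered" % sent)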

> I've seen rabbit use much more than 1GB so I don't think it's defaulted to that.

Well, we can't fully control how much RAM gets used. That's why we
hedge our bets and default to 0.4 of the installed RAM: it leaves
space for transient spikes such as GC. The point of the 1GB default is
that it controls when Rabbit starts sending msgs to disk to try and
free up RAM. But ultimately, it won't stop Rabbit/Erlang from using
more than 1GB of RAM if that's necessary.
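
As a back-of-envelope illustration (plain Python, Linux-only sysconf
names; not Rabbit's own code):

    import os

    # Total installed RAM via sysconf (Linux).
    total = os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES')
    # Default watermark: producers are throttled at 0.4 of installed RAM.
    print("installed RAM:      %.1f GB" % (total / 2.0**30))
    print("throttle threshold: %.1f GB" % (0.4 * total / 2.0**30))
    # With the watermark set to 0, queues page to disk as though only
    # 1GB were available, regardless of the figures above.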

Matthew

