[rabbitmq-discuss] Huge queues - memory alarm going off

Tim Robertson timrobertson100 at gmail.com
Sun Mar 10 13:28:29 GMT 2013

Thanks Matthias,

Knowing that the tokyo plugin is not often used is useful info.  I've spent
the weekend reading as much as I can, including some of the blogs on the
rabbit site [e.g. 1,2 were particularly useful].  I don't claim to get all
of this, but I am beginning to form a general understanding, and realising
we do indeed need a buffer to decouple those consumers that we know are
inherently slow or will be taken offline periodically (e.g. the map tile
renderers, which maintain billions of tiles in HBase via large
queue-draining batch operations).
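To make the need for that buffer concrete, here is a minimal back-of-the-envelope sketch (plain Python, not the RabbitMQ API; all rates are made-up illustrative numbers): the broker's backlog grows by the difference between the publish and consume rates for as long as the imbalance lasts.

```python
# A minimal sketch (not RabbitMQ API) of why a slow or offline consumer
# forces the broker to buffer: the backlog grows by
# (publish rate - consume rate) for as long as the imbalance lasts.
# All rates below are hypothetical.

def backlog_after(seconds, publish_rate, consume_rate, consumer_online=True):
    """Messages left queued after `seconds` of steady traffic."""
    drain = consume_rate if consumer_online else 0
    return max(0, seconds * (publish_rate - drain))

# Tile renderers taken offline for a batch run: nothing drains the queue.
offline_backlog = backlog_after(3600, publish_rate=500,
                                consume_rate=0, consumer_online=False)

# Consumer online but inherently slow: the backlog still grows, just slower.
slow_backlog = backlog_after(3600, publish_rate=500, consume_rate=300)

print(offline_backlog)  # → 1800000 (an hour of publishes, nothing draining)
print(slow_backlog)     # → 720000 (an hour at a 200 msg/s deficit)
```

At those (hypothetical) rates even an hour-long maintenance window leaves a seven-figure backlog, which is exactly the regime where the memory-alarm behaviour discussed below starts to matter.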

Really appreciate the helpful pointers from both you and Jerry - cheers


On Sun, Mar 10, 2013 at 2:17 PM, Matthias Radestock
<matthias at rabbitmq.com> wrote:

> Tim,
> On 10/03/13 12:14, Tim Robertson wrote:
>> it is really not good practice to
>> deliberately use RMQ as a large buffering technology for queues, due to
>> the memory management.  Namely, that if one queue is hugely backed up,
>> we'll get into an oscillation of:
>>    i) memory limit hit
>>    ii) block everything while flushing partially to disk
>>    iii) repeat immediately (while the disabled consumer remains)
>> While it will work, we'll likely cripple other parts of the system if
>> they are going through the same broker.
> Rabbit will start paging to disk *before* the memory limit is hit, in
> order to prevent precisely the stop-start scenario you describe above.
> Obviously if the publishing happens at a faster rate than rabbit can write
> to disk then eventually the memory limit will still be hit and producers
> blocked while rabbit catches up.
> Furthermore, as Jerry says, messages do have a non-zero memory footprint
> even when paged to disk. As a consequence, when "filling" rabbit with
> messages, the paging becomes more and more aggressive over time to the
> point where all memory is taken up by the residual footprint of paged
> messages, and publishing even a small number of messages will trigger a
> memory alarm. And eventually *all* memory is consumed that way and
> publishers are blocked indefinitely until some messages are removed from
> the queues.
>> I think this was probably a lack of understanding on our part, as we
>> anticipated using it as a queue (to do large buffering) whereas I
>> presume it is (?) really intended to be a messaging system and targeting
>> zero queue sizes is the expected behavior (consumer throughput matched
>> to producer).
> We used to say that "an empty rabbit is a healthy rabbit", but that
> statement predates many improvements to persistence and flow-control that
> have happened since. So nowadays I'd rephrase that as "a non-obese rabbit
> is a healthy rabbit".
>> Are there alternative configurations that you are aware of that would
>> allow it to back up large queues, without hitting memory limits?  (the
>> tokyo cabinet plugin perhaps?)
> Exactly that. With the tokyo cabinet plugin, messages that have been paged
> to disk leave zero memory footprint, allowing rabbit to hold on to as much
> message data as will fit on disk.
> Note however that the plugin is rarely used and is not included in our
> distributions due to the dependency on non-Erlang, platform-specific code.
> Alternatively, increase the amount of memory on your rabbit machine.
> Regards,
> Matthias.
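
[Archive note: the paging-before-alarm behaviour Matthias describes is governed by two settings in rabbitmq.config. The values shown below are the documented defaults for RabbitMQ of this era; treat this as a sketch and check the docs for your version.]

```erlang
[
  {rabbit, [
    %% Block publishers once RAM use reaches 40% of system memory (default).
    {vm_memory_high_watermark, 0.4},
    %% Start paging messages to disk at 50% of the watermark, i.e. at 20%
    %% of system memory, well before the alarm fires (default is 0.5).
    {vm_memory_high_watermark_paging_ratio, 0.5}
  ]}
].
```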
