[rabbitmq-discuss] Huge queues - memory alarm going off
matthias at rabbitmq.com
Sun Mar 10 13:17:33 GMT 2013
On 10/03/13 12:14, Tim Robertson wrote:
> it is really not good practice to
> deliberately use RMQ as a large buffering technology for queues, due to
> the memory management. Namely, that if one queue is hugely backed up,
> we'll get into an oscillation of:
> i) memory limit hit
> ii) block everything while flushing partially to disk
> iii) repeat immediately (while the disabled consumer remains)
> While it will work, we'll likely cripple other parts of the system if
> they are going through the same broker.
Rabbit will start paging to disk *before* the memory limit is hit, in
order to prevent precisely the stop-start scenario you describe above.
Obviously if the publishing happens at a faster rate than rabbit can
write to disk then eventually the memory limit will still be hit and
producers blocked while rabbit catches up.
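For reference, the point at which paging starts relative to the memory limit is configurable. A minimal rabbitmq.config sketch in the classic Erlang-term format (the 0.4 watermark and 0.5 paging ratio shown here are illustrative values, not a recommendation for any particular workload):

```erlang
%% rabbitmq.config -- classic Erlang-term configuration format
[
  {rabbit, [
    %% Raise the memory alarm (and block publishers) when Rabbit's
    %% memory use exceeds this fraction of installed RAM.
    {vm_memory_high_watermark, 0.4},

    %% Start paging messages to disk when memory use reaches this
    %% fraction *of the watermark*, i.e. well before the alarm fires.
    {vm_memory_high_watermark_paging_ratio, 0.5}
  ]}
].
```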
Furthermore, as Jerry says, messages do have a non-zero memory footprint
even when paged to disk. As a consequence, when "filling" rabbit with
messages, the paging becomes more and more aggressive over time to the
point where all memory is taken up by the residual footprint of paged
messages, and publishing even a small number of messages will trigger a
memory alarm. And eventually *all* memory is consumed that way and
publishers are blocked indefinitely until some messages are removed from
the queues.
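To get a feel for why paging alone cannot save an over-full broker, here is a back-of-envelope sketch. The 1 GiB watermark and ~150-byte residual footprint per paged message are hypothetical figures chosen for illustration, not measured values:

```python
# Estimate how many paged messages exhaust the memory budget,
# given that each paged message still retains a small in-memory
# residual footprint (index entry, queue bookkeeping, etc.).

watermark_bytes = 1 * 1024**3   # assumed 1 GiB memory high watermark
residual_per_msg = 150          # hypothetical residual bytes per paged message

max_paged_messages = watermark_bytes // residual_per_msg
print(max_paged_messages)       # ~7.1 million messages, regardless of disk space
```

Past that point every publish, however small, keeps the memory alarm raised until messages are actually consumed or purged.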
> I think this was probably a lack of understanding on our part, as we
> anticipated using it as a queue (to do large buffering) whereas I
> presume it is (?) really intended to be a messaging system and targeting
> zero queue sizes is the expected behavior (consumer throughput matched
> to producer).
We used to say that "an empty rabbit is a healthy rabbit", but that
statement predates many improvements to persistence and flow-control
that have happened since. So nowadays I'd rephrase that as "a non-obese
rabbit is a healthy rabbit".
> Are there alternative configurations that you are aware of that would
> allow it to back up large queues, without hitting memory limits? (the
> tokyo cabinet plugin perhaps?)
Exactly that. With the tokyo cabinet plugin, messages that have been
paged to disk leave zero memory footprint, allowing rabbit to hold on to
as much message data as will fit on disk.
Note however that the plugin is rarely used and is not included in our
distributions due to the dependency on non-Erlang, platform-specific code.
Alternatively, increase the amount of memory on your rabbit machine.