[rabbitmq-discuss] excessive memory usage and blocked connection, even when no messages left
Matthias Radestock
matthias at rabbitmq.com
Thu May 19 10:28:29 BST 2011
Theo,
On 19/05/11 06:35, Theo wrote:
> I rewrote my test code in Java, just to make sure it wasn't the Ruby
> driver that was causing the problem, here is the code:
> https://gist.github.com/980238
>
> I ran that with more or less the same result. After a few minutes at
> 20K publishes/deliveries/acks per second the cluster rotted and all
> connections got blocked. What's more, this time I saw that if I turned
> off my producers the web UI still reported messages being published
> for several minutes -- no producers were running (I'm absolutely sure
> the processes were terminated) but there were still messages being
> published.
The figure reported in the management UI is the rate at which rabbit
*processes* these messages, not the rate at which your producers publish
them. The publisher code publishes messages as fast as it can, which is
a far higher rate than rabbit can process them, so rabbit buffers the
backlog in order to try to keep up. That is why the UI kept showing
activity after your producers had stopped: rabbit was still working
through the buffered messages. This buffering is what lets rabbit handle
brief bursts of high publishing rates.
However, if publishing continues at a high rate then eventually the
buffers fill up all memory. At that point rabbit blocks further
publishes until the buffers have been drained sufficiently to allow
publishing to resume. This can take a while since rabbit has to page the
messages to disk in order to free up space.
Moreover, if the publishers resume publishing at the same high rate as
soon as they are unblocked, as is the case in your test, the memory
limit is hit again quickly. Hence you will see rabbit "bouncing off the
limiter", where publishers get unblocked briefly and then blocked again.
This is all quite normal.
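To see why the "bouncing" is periodic rather than a one-off event, here is a toy simulation of the cycle described above. This is not RabbitMQ code; the watermark, resume level, and rate constants are invented purely for illustration of the feedback loop (publish fast, hit the limit, block, drain, unblock, repeat):

```java
// Toy model of the memory-alarm cycle: producers publish faster than the
// broker can drain, so the buffer repeatedly hits a high watermark, the
// producers get blocked, the buffer drains, and they get unblocked again.
// All constants are hypothetical, chosen only to make the cycle visible.
public class AlarmCycleSim {
    static final long HIGH_WATERMARK = 1000;  // block publishers above this
    static final long RESUME_LEVEL   = 500;   // unblock once drained below this
    static final long PUBLISH_RATE   = 200;   // messages buffered per tick
    static final long DRAIN_RATE     = 50;    // paged to disk / delivered per tick

    /** Runs the simulation for `ticks` steps; returns how many times the
     *  publishers were unblocked, i.e. how often we bounced off the limiter. */
    public static int run(int ticks) {
        long buffered = 0;
        boolean blocked = false;
        int cycles = 0;
        for (int t = 0; t < ticks; t++) {
            if (!blocked) {
                buffered += PUBLISH_RATE;     // producers run flat out
            }
            buffered = Math.max(0, buffered - DRAIN_RATE);
            if (!blocked && buffered >= HIGH_WATERMARK) {
                blocked = true;               // memory alarm: block publishes
            } else if (blocked && buffered <= RESUME_LEVEL) {
                blocked = false;              // drained enough: unblock
                cycles++;                     // one bounce off the limiter
            }
        }
        return cycles;
    }

    public static void main(String[] args) {
        System.out.println("bounces in 200 ticks: " + run(200));
    }
}
```

Because the drain rate is lower than the publish rate, the blocked and unblocked phases alternate indefinitely for as long as the producers keep running, which is exactly the behaviour observed in the test.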
Regards,
Matthias.