[rabbitmq-discuss] Huge queues - memory alarm going off

Tim Robertson timrobertson100 at gmail.com
Sat Mar 9 16:03:53 GMT 2013


Hi all,

I've got the following going on (version 2.8.4):

- My consumers all stop (e.g. imagine a failure scenario / upgrade), but
producers keep on producing
- Queues start backing up
- Memory increases with queue size
- The high water mark gets hit and the node memory alarm goes off
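
For reference, I'm running the stock broker config, so the threshold should be the default memory high watermark. As I understand it (corrections welcome), the relevant setting lives in rabbitmq.config roughly like so - 0.4 of installed RAM is, I believe, the default:

    %% rabbitmq.config (classic Erlang-term format); I have not changed this
    [
      {rabbit, [
        {vm_memory_high_watermark, 0.4}
      ]}
    ].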

There are a couple of things I'd like to understand that I can't seem to
find in the docs:

- With this being a durable queue, I expected RabbitMQ to flush messages to
disk and free the memory.  Could someone please explain the memory overhead
for messages sitting on a queue?  I guess there is something kept in memory
for each message on a queue - is there a way to work around that?  (We
anticipate deliberately getting into this state from time to time, e.g. when
we upgrade HBase.)  A rough sketch of our producer side is below for reference.
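
To make that concrete, here is roughly what the producer side does (a minimal
sketch only, using the Python pika client's 0.9.x-era API; the queue name and
payload are placeholders, not our real ones):

    import pika

    # Connect to the broker (placeholder host).
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Durable queue: the queue definition survives a broker restart. The
    # messages must also be published with delivery_mode=2 to be persisted.
    channel.queue_declare(queue="work", durable=True)

    channel.basic_publish(
        exchange="",
        routing_key="work",
        body="some payload",
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )

    connection.close()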

- I think I'm now in a deadlock: when the consumers start, they won't ack a
message until they have successfully published it on to the next hop (it's a
multihop process), but that publish is itself blocked.  Shouldn't
per-connection flow control have kicked in and throttled the producers before
the whole lot blocked up?  (Have I missed some setting needed to enable it?
The docs say it is on by default.)  A sketch of the consumer pattern is below.
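
And the consumer side is roughly the following (again a sketch under the same
pika assumption; "next_hop" stands in for the next queue in the chain):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="work", durable=True)
    channel.queue_declare(queue="next_hop", durable=True)

    def handle(ch, method, properties, body):
        # Forward the message to the next hop first ...
        ch.basic_publish(
            exchange="",
            routing_key="next_hop",
            body=body,
            properties=pika.BasicProperties(delivery_mode=2),
        )
        # ... and only then ack the original. If the publish above is blocked
        # (e.g. by the memory alarm), this ack is never sent.
        ch.basic_ack(delivery_tag=method.delivery_tag)

    # no_ack=False means manual acknowledgements (pika 0.9.x-style signature)
    channel.basic_consume(handle, queue="work", no_ack=False)
    channel.start_consuming()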

Also, please know I am a newbie running on the default config, so please don't
be shy about pointing out the obvious to me.  Chances are I have missed
something, but I did try to read all the docs before posting.

Thanks for any pointers!
Tim