[rabbitmq-discuss] mysterious rabbit crash

Dmitriy Samovskiy dmitriy.samovskiy at cohesiveft.com
Tue May 19 18:53:45 BST 2009

Matthias Radestock wrote:
> Tony Garnock-Jones wrote:
>> Dmitriy Samovskiy wrote:
>>> Which makes me wonder why rabbit was using so much memory if its
>>> queues were being drained normally...
>> Indeed. If you still have the persister log around somewhere, it might
>> be worth feeding it to a rabbit and using rabbitmqctl to get a picture
>> of the states of the queues implied by the file?

I don't think it survived :)

> The difference between
>> -rw-r--r--    1 rabbitmq rabbitmq 22722770 May 19 00:27 
>> rabbit_persister.LOG.previous
> and
>> binary: 815315560
> Suggests that the bulk of the memory is consumed by transient messages, 
> and that it just happened to be a persister log rollover that triggered 
> the OOM condition.
> What client are you using to publish the messages? Does it react to 
> channel.flow messages from the server telling it to throttle?

Py-amqplib. I don't think it supports channel.flow; at least, the version I currently
have doesn't.

> Also, are you sure the consumers were draining all the queues and 
> ack'ing messages properly?

Not anymore. The code seems to indicate they do, but I will verify it with rabbitmqctl
next time.

> You could set up some monitoring that periodically calls 'rabbitmqctl 
> list_queues ...' to catch any queues not getting drained.
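The suggested check can be sketched as a small script. This is a minimal sketch, not
anything from the thread: the threshold, queue names, and depths below are made up, and
for illustration the `rabbitmqctl list_queues name messages` output (tab-separated
name/depth pairs) is replaced by sample data so the filter itself can be seen working.

```shell
#!/bin/sh
# Hypothetical monitor: flag queues whose depth exceeds a threshold.
# In a cron job you would feed it real output instead of sample data:
#   rabbitmqctl list_queues name messages | awk ...
THRESHOLD=1000

# Sample data in the same tab-separated "name<TAB>messages" format.
printf 'orders\t12\nlogs\t45000\n' |
awk -v max="$THRESHOLD" '$2 > max { print $1, "is backed up with", $2, "messages" }'
# -> logs is backed up with 45000 messages
```

Run periodically (e.g. from cron) and alert on any output, since a queue that keeps
growing means a consumer has stopped draining or ack'ing it.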

Thanks for your ideas and your time. I will keep a closer eye on this box now. If it 
happens again, I can get more information.

- Dmitriy
