[rabbitmq-discuss] Recovering Rabbit without restarting

Ben Hood 0x6e6562 at gmail.com
Tue Jan 17 18:26:43 GMT 2012

Hey Simon,

On Tue, Jan 17, 2012 at 11:40 AM, Simon MacMullen <simon at rabbitmq.com> wrote:
> On 16/01/12 17:34, Ben Hood wrote:
>> I'm currently looking into a production scenario with
>> 2.7.0/RHEL5/R13B3 where Rabbit has ground to a halt by consuming as
>> as all of the CPU, but is still responding to external requests,
>> albeit very slowly. Peers appear to still be connected to the broker.
>> Running rabbitmqctl works, but any kind of report (status,
>> list_queues, report etc) is taking too long to return to get any
>> diagnostics whatsoever.
>> So I was wondering if there is any way to somehow throttle heavyweight
>> activities so that we can run some of the normal diagnostics, short of
>> restarting the whole instance.
> If the memory alarm has not already gone off by this point you may benefit
> from making it go off to halt publishers:

I assume that the memory alarm had gone off, since the management UI
was reporting 3.9 GB over a 1.5 GB watermark. The UI also correctly
colored the background box as bright red :-)

> Also, the management plugin should in general be more responsive than
> rabbitmqctl in heavily-loaded situations. This is because rabbitmqctl will
> go and interrogate each {queue / connection / channel} in turn, while
> management caches updates that are broadcast by these objects.

So it turns out the UI was effectively unresponsive - each _refresh_
would take a couple of minutes and the JS app would periodically crash
in Chrome (don't know about Mozilla).

The way I ended up solving the issue was with the realization that 3
client instances had each leaked ca. 50K channels. After killing these
connections with rabbitmqctl (which took about a minute per
connection), the broker returned to a normal operating state. So in
the end, no other client connections needed to be terminated.

So all that was required was a bit of patience - thanks for taking the
time to look into this though.



More information about the rabbitmq-discuss mailing list