[rabbitmq-discuss] Proper protocol for dealing with crash dumps?

Matthias Radestock matthias at rabbitmq.com
Mon Oct 8 20:44:11 BST 2012


Alex,

On 08/10/12 20:20, Alex Zepeda wrote:
> =INFO REPORT==== 8-Oct-2012::10:00:43 ===
> Memory limit set to 395MB of 988MB total.

Right, so rabbit attempting to allocate 700MB looks a bit suspicious, but 
could still be OK, i.e. if it wasn't using much memory at the time.
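
For what it's worth, the quoted limit is just the default 
vm_memory_high_watermark of 0.4 (0.4 x 988MB ~= 395MB). Should you ever 
want to change it, it lives in rabbitmq.config (typically 
/etc/rabbitmq/rabbitmq.config), e.g.

   %% fraction of installed RAM at which rabbit starts blocking
   %% publishers; 0.4 is the default
   [{rabbit, [{vm_memory_high_watermark, 0.4}]}].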

Could a client be sending a very large message?

>> As suggested previously, when rabbit is using more memory than you
>> expect the output of 'rabbitmqctl report' should shed some light on
>> where it's going.
>
> I'll try, but generally the devices having the most trouble have the
> most unreliable connections, so logging in is often very difficult.

You may want to set up some automated monitoring/logging, so that when a 
problem does arise you can look at the most recent reports in the 
post-mortem.
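
Something as simple as a small script run from cron that captures the 
report with a timestamp would do. A rough sketch in Python -- the output 
directory and filenames are arbitrary, adjust to taste:

   #!/usr/bin/env python
   # Capture 'rabbitmqctl report' to a timestamped file so that the
   # most recent snapshots are still around for a post-mortem.
   import os, subprocess, time

   OUT_DIR = "/var/log/rabbitmq-reports"   # arbitrary location

   def capture():
       if not os.path.isdir(OUT_DIR):
           os.makedirs(OUT_DIR)
       stamp = time.strftime("%Y%m%d-%H%M%S")
       path = os.path.join(OUT_DIR, "report-%s.txt" % stamp)
       with open(path, "w") as out:
           subprocess.call(["rabbitmqctl", "report"], stdout=out)

   if __name__ == "__main__":
       capture()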

> The other thing I'm seeing is that shovel appears to be getting
> stuck (but the shovel status shows it's running) on a number
> of these devices.  What sort of diagnostics would be useful
> here?

Have you got heartbeats enabled? If not then turn them on.

Do the shovel connections show up at the destination?
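
You can check on the destination node with something like

   rabbitmqctl list_connections user peer_address state

(the exact column names vary a little between releases; peer_address 
became peer_host in later ones).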

Btw, have you got a prefetch_count set in your shovel config, and are 
you running in an ack mode other than no_ack? If not, that might explain 
the unexpectedly high memory usage, since a stuck shovel connection 
would cause messages to pile up in memory in the shovel.
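
For reference, both of those settings go in the shovel definition in 
rabbitmq.config; a rough sketch of the relevant part, with the shovel 
name, URIs and numbers purely as placeholders:

   {rabbitmq_shovel,
    [{shovels,
      [{my_shovel,
        [{sources,      [{broker, "amqp://"}]},
         {destinations, [{broker, "amqp://user:pass@remote-host"}]},
         {queue,          <<"my-queue">>},
         %% bound the number of unacknowledged messages the shovel
         %% will buffer in memory
         {prefetch_count, 50},
         %% anything other than no_ack; on_publish also works
         {ack_mode,       on_confirm},
         {reconnect_delay, 5}]}]}]}

With on_confirm (or on_publish) and a modest prefetch_count the shovel 
holds at most prefetch_count messages in memory, rather than draining 
the whole source queue into RAM when the destination stops accepting 
messages. Depending on your Erlang client version you may also be able 
to request a heartbeat via the broker URI (e.g. "amqp://...?heartbeat=30") 
-- worth checking whether your release supports that.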

Regards,

Matthias.

