[rabbitmq-discuss] Possible memory leak in the management plugin

Travis Mehlinger tmehlinger at gmail.com
Mon Jun 17 15:45:58 BST 2013


Hi Simon,

Thanks for getting back to me. I'll need to restart our monitor and give it
some time to leak the memory. I'll let you know the results sometime later
today.
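
In the meantime, here's roughly how I'll capture the numbers you asked for
below (a rough sketch that just shells out to your eval snippet; it assumes
rabbitmqctl is on the PATH and talking to the default node):

    import subprocess

    # Force a GC on the management DB process and report its memory before
    # and after, along with the per-component breakdown.
    SNIPPET = (
        "P = global:whereis_name(rabbit_mgmt_db), "
        "M1 = process_info(P, memory), "
        "garbage_collect(P), "
        "M2 = process_info(P, memory), "
        "{M1, M2, rabbit_vm:memory()}."
    )

    print(subprocess.check_output(["rabbitmqctl", "eval", SNIPPET], text=True))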

One thing I failed to mention in my initial report: whenever we restarted
one of our services, the queues it was using would get cleaned up (we
have them set to auto-delete) and redeclared. Each time that happened, we
would see the memory consumption of the management DB fall off sharply
before starting to rise again.
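
For reference, each service declares its queues on startup roughly like this
(a minimal sketch using pika; the queue name here is made up):

    import pika

    # auto-delete queues are removed once their last consumer goes away, so a
    # service restart deletes the queue and this declaration recreates it.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="service.events", auto_delete=True)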

Let me know if you'd like any further information in the meantime.

Best, Travis


On Mon, Jun 17, 2013 at 6:39 AM, Simon MacMullen <simon at rabbitmq.com> wrote:

> Hi. Thanks for the report.
>
> My first guess is that garbage collection for the management DB process is
> happening too slowly. Can you invoke:
>
> $ rabbitmqctl eval 'P=global:whereis_name(rabbit_mgmt_db), M1=process_info(P,
> memory), garbage_collect(P), M2=process_info(P, memory), {M1, M2,
> rabbit_vm:memory()}.'
>
> and post the results?
>
> Cheers, Simon
>
> On 15/06/13 03:09, Travis Mehlinger wrote:
>
>> We recently upgraded RabbitMQ from 3.0.4 to 3.1.1 after noticing two bug
>> fixes in 3.1.0 related to our RabbitMQ deployment:
>>
>>   * 25524 fix memory leak in mirror queue slave with many short-lived
>>     publishing channels
>>   * 25290 fix per-queue memory leak recording stats for mirror queue
>>     slaves
>>
>> However, in our case, it seems that the management plugin may still have
>> a memory leak. We have a monitoring agent that hits the REST API to
>> gather information about the broker (number of queues, queue depth,
>> etc.). With the monitoring agent running and making requests against the
>> API, memory consumption steadily increased; when we stopped the agent,
>> memory consumption in the management plugin leveled off.
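>>
>> For context, the agent's polling amounts to roughly the following (a minimal
>> sketch, not the real agent; the host, port, and guest/guest credentials are
>> just the management plugin defaults):
>>
>>     import requests
>>
>>     API = "http://localhost:15672/api"   # default management listener
>>     AUTH = ("guest", "guest")            # default credentials
>>
>>     # One sample: queue count and total queued messages across the broker.
>>     queues = requests.get(API + "/queues", auth=AUTH).json()
>>     print({
>>         "queue_count": len(queues),
>>         "total_messages": sum(q.get("messages", 0) for q in queues),
>>     })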
>>
>> Here are a couple of graphs detailing memory consumption in the broker (the
>> figures are parsed from rabbitmqctl report). The first graph shows the
>> ebb and flow of memory consumption in a number of components and the
>> second shows just consumption by the management plugin. You can see
>> pretty clearly where we stopped the monitoring agent at 1:20.
>>
>> https://dl.dropboxusercontent.com/u/7022167/Screenshots/n-np6obt-m9f.png
>> https://dl.dropboxusercontent.com/u/7022167/Screenshots/an6dpup33xvx.png
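>>
>> (The per-component figures in these graphs come from the memory breakdown in
>> rabbitmqctl report; pulling just that breakdown looks roughly like this,
>> assuming rabbitmqctl is on the PATH and, if I recall the key name correctly,
>> the management DB shows up under mgmt_db:)
>>
>>     import subprocess
>>
>>     # rabbit_vm:memory() is the same per-component memory breakdown that
>>     # rabbitmqctl report prints in its memory section.
>>     print(subprocess.check_output(
>>         ["rabbitmqctl", "eval", "rabbit_vm:memory()."], text=True))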
>>
>> We have two clustered brokers, both running RabbitMQ 3.1.1 on Erlang
>> R14B-04.1. There are typically around 200 queues, about 20 of which are
>> mirrored. There are generally about 200 consumers. Messages are rarely
>> queued and most queues typically sit idle.
>>
>> I'll be happy to provide any further diagnostic information.
>>
>> Thanks!
>>
>>
>> _______________________________________________
>> rabbitmq-discuss mailing list
>> rabbitmq-discuss at lists.rabbitmq.com
>> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>>
>>
>
> --
> Simon MacMullen
> RabbitMQ, Pivotal
>

