[rabbitmq-discuss] Re: Possible memory leak in the management plugin

Travis Mehlinger tmehlinger at gmail.com
Thu Jun 20 01:32:57 BST 2013


I won't be able to test it in the same production environment but should be
able to reproduce the behavior in a test environment. I'm travelling
tomorrow afternoon, so if you don't hear from me by end of business UK time
tomorrow, I'll definitely have something for you Monday.

Thanks so much for all your help. :)

Best, Travis


On Wed, Jun 19, 2013 at 10:34 AM, Simon MacMullen <simon at rabbitmq.com> wrote:

> Also, I have a patched version of the management plugin which fixes the
> leak I found today:
>
> http://www.rabbitmq.com/releases/plugins/v3.1.1/rabbitmq_management-3.1.1-leakfix1.ez
>
> (see http://www.rabbitmq.com/plugins.html#installing-plugins for where to
> put it; it replaces rabbitmq_management-3.1.1.ez)
>
> If you are able to test this version, that would also be great.
>
> Cheers, Simon
>
> On 19/06/13 15:57, Simon MacMullen wrote:
>
>> Thanks.
>>
>> Could you run
>>
>> rabbitmqctl eval
>> '{_,_,_,[_,_,_,_,[_,_,{_,[{_,S}]}]]}=sys:get_status(global:whereis_name(rabbit_mgmt_db)),T=element(5,S),ets:foldl(fun(E,_)->io:format("~p~n~n",[E]) end,[],T).'
>>
>>
>> please? This will dump the entire leaky table to standard output, so you
>> would want to redirect it to a file.
>>
>> If you could also accompany that with "rabbitmqctl report", so I can see
>> what actually exists at that time, then I can at least see what is leaking.
>>
>> Cheers, Simon
>>
>> On 19/06/13 15:30, Travis Mehlinger wrote:
>>
>>> Hi Simon,
>>>
>>> We aren't doing anything like that. Whenever one of our services (which
>>> are based on Kombu, if that helps) starts, it plucks a connection from
>>> its internal pool, creates a channel on that connection, then binds its
>>> request queue, which hangs around until the service stops. The only
>>> thing that deviates from this pattern is our monitor, which connects and
>>> disconnects fairly rapidly and uses exclusive queues.
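
A rough sketch of the connection pattern described above, using Kombu's
Connection/Queue/Consumer API (the queue name, exchange name, and broker URL
are illustrative assumptions, not details from this thread):

    from kombu import Connection, Consumer, Exchange, Queue

    # Illustrative names; each real service would use its own.
    exchange = Exchange('services', type='direct', durable=True)
    request_queue = Queue('example-service.requests', exchange=exchange,
                          routing_key='example-service')

    def handle_request(body, message):
        # ... service-specific work ...
        message.ack()

    # One connection and one channel per service; the request queue stays
    # bound until the service shuts down.
    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        channel = conn.channel()
        with Consumer(channel, queues=[request_queue],
                      callbacks=[handle_request]):
            while True:
                conn.drain_events()
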
>>>
>>> That said, it's entirely possible that we have something floating around
>>> that I don't know about that fits the behavior you described. I'll keep
>>> digging and see what I can turn up. In the meantime, let me know if
>>> there's any more information I can collect from Rabbit.
>>>
>>> Thanks, Travis
>>>
>>>
>>> On Wed, Jun 19, 2013 at 6:13 AM, Simon MacMullen <simon at rabbitmq.com> wrote:
>>>
>>>     On 18/06/13 16:11, Travis Mehlinger wrote:
>>>
>>>         We declare those queues as exclusive so they're getting cleaned up automatically.
>>>
>>>
>>>     I have found a plausible candidate for the leak. But it's dependent
>>>     on having a long-lived channel which declares and deletes lots of
>>>     short-lived queues. We keep some information on the deleted queues
>>>     until the channel is closed. Could your monitoring tool be doing
>>>     that? Obviously it would have to be deleting queues explicitly, not
>>>     relying on them being exclusive.
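
To illustrate the pattern described above (one long-lived channel that
explicitly declares and then deletes many short-lived queues), here is a
hedged sketch using Kombu's low-level channel; the queue names, loop count,
and broker URL are made up for the example:

    from uuid import uuid4
    from kombu import Connection

    # One long-lived connection and a single long-lived channel.
    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        channel = conn.channel()
        for _ in range(10000):
            name = 'probe-%s' % uuid4()
            # Explicit declare followed by explicit delete (the queue is
            # neither exclusive nor auto-delete), all on the same channel.
            channel.queue_declare(queue=name, auto_delete=False)
            channel.queue_delete(queue=name)
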
>>>
>>>     Cheers, Simon
>>>
>>>
>>>     --
>>>     Simon MacMullen
>>>     RabbitMQ, Pivotal
>>>
>>>
>>>
>>
>>
>
> --
> Simon MacMullen
> RabbitMQ, Pivotal
>