[rabbitmq-discuss] Possible memory leak in the management plugin

Joseph Lambert joseph.g.lambert at gmail.com
Tue Jun 18 08:17:18 BST 2013


Hi, 

I just wanted to mention that we are seeing the same behavior, but we don't 
have any mirrored queues. We have had to disable the management interface 
and the memory usage gradually dropped.

We are running 3.1.1 on Erlang R13B04 on some machines, and on Erlang R16B 
on a few others.

I looked into the code; the rabbit_mgmt_db module creates several ETS 
tables. The first snippet shows memory usage with the management plugin 
enabled:

 {memory,
     [{total,1417822136},
      {connection_procs,37788152},
      {queue_procs,6659896},
      {plugins,686592},
      {other_proc,11236576},
      {mnesia,855184},
      {mgmt_db,935311000},
      {msg_index,7976224},
      {other_ets,70559680},
      {binary,13529704},
      {code,19001963},
      {atom,1601817},
      {other_system,312615348}]},
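
In case it is useful to anyone, this is roughly how I looked at the 
management DB's tables from an attached Erlang shell (or via rabbitmqctl 
eval). The registered name rabbit_mgmt_db is an assumption on my part, 
based on the module name; on some versions the process seems to be 
registered globally, so the sketch checks both:

    %% Rough sketch: list the ETS tables owned by the management DB
    %% process, with sizes in bytes, largest first. Assumes the process
    %% is registered as rabbit_mgmt_db, either locally or via global.
    Pid = case whereis(rabbit_mgmt_db) of
              undefined -> global:whereis_name(rabbit_mgmt_db);
              P         -> P
          end,
    WordSize = erlang:system_info(wordsize),
    Tabs = [T || T <- ets:all(), ets:info(T, owner) =:= Pid],
    lists:reverse(lists:keysort(2,
        [{ets:info(T, name), ets:info(T, memory) * WordSize} || T <- Tabs])).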


This second snippet was taken after disabling the management plugin and 
restarting one of the nodes:
 

 {memory,
     [{total,190412936},
      {connection_procs,34512200},
      {queue_procs,8970352},
      {plugins,0},
      {other_proc,9246776},
      {mnesia,794776},
      {mgmt_db,0},
      {msg_index,1650736},
      {other_ets,6406656},
      {binary,63363448},
      {code,16232973},
      {atom,594537},
      {other_system,48640482}]},


You'll notice that the memory used by mgmt_db is now 0 and other_system is 
around 48MB, whereas before mgmt_db was over 935MB and other_system over 
300MB. Unfortunately, I don't have any growth trends for the DB size, as we 
had to disable management on all nodes and we weren't tracking this memory 
usage.
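
For what it's worth, these per-category figures can be sampled without a 
full rabbitmqctl report; something like the following, run periodically, 
would have given us a growth trend. I'm assuming rabbit_vm:memory/0 is the 
call behind the {memory, [...]} section of rabbitmqctl status/report:

    %% Sample just the management DB figure (in bytes); can be run as
    %% rabbitmqctl eval 'proplists:get_value(mgmt_db, rabbit_vm:memory()).'
    proplists:get_value(mgmt_db, rabbit_vm:memory()).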

I'm not very familiar with RabbitMQ's internals, but if I have some time I 
will try to dig more deeply into it and run some tests.
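
If I do, the test will basically reproduce the kind of polling a monitoring 
agent does against the HTTP API while watching the mgmt_db figure. A rough 
sketch of that load (port 15672, the /api/queues endpoint and guest/guest 
are assumptions about a default 3.x install):

    %% mgmt_poll.erl - poll the management HTTP API in a loop to generate
    %% monitoring-style load. Port and credentials are assumptions.
    -module(mgmt_poll).
    -export([run/1]).

    run(N) ->
        inets:start(),
        Auth = "Basic " ++ base64:encode_to_string("guest:guest"),
        poll(N, [{"authorization", Auth}]).

    poll(0, _Hdrs) ->
        ok;
    poll(N, Hdrs) ->
        {ok, {{_, 200, _}, _RespHdrs, _Body}} =
            httpc:request(get, {"http://localhost:15672/api/queues", Hdrs},
                          [], []),
        poll(N - 1, Hdrs).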

- Joe

On Saturday, June 15, 2013 10:09:21 AM UTC+8, Travis Mehlinger wrote:
>
> We recently upgraded RabbitMQ from 3.0.4 to 3.1.1 after noticing two bug 
> fixes in 3.1.0 related to our RabbitMQ deployment:
>
>    - 25524 fix memory leak in mirror queue slave with many short-lived 
>    publishing channels
>    - 25290 fix per-queue memory leak recording stats for mirror queue 
>    slaves
>
> However, in our case, it seems that the management plugin may still have a 
> memory leak. We have a monitoring agent that hits the REST API to gather 
> information about the broker (number of queues, queue depth, etc.). With 
> the monitoring agent running and making requests against the API, memory 
> consumption steadily increased; when we stopped the agent, memory 
> consumption in the management plugin leveled off.
>
> Here are a couple of graphs detailing memory consumption in the broker (the 
> figures are parsed from rabbitmqctl report). The first graph shows the 
> ebb and flow of memory consumption in a number of components and the second 
> shows just consumption by the management plugin. You can see pretty clearly 
> where we stopped the monitoring agent at 1:20.
>
> https://dl.dropboxusercontent.com/u/7022167/Screenshots/n-np6obt-m9f.png
> https://dl.dropboxusercontent.com/u/7022167/Screenshots/an6dpup33xvx.png
>
> We have two clustered brokers, both running RabbitMQ 3.1.1 on Erlang 
> R14B-04.1. There are typically around 200 queues, about 20 of which are 
> mirrored. There are generally about 200 consumers. Messages are rarely 
> queued and most queues typically sit idle.
>
> I'll be happy to provide any further diagnostic information.
>
> Thanks!
>