[rabbitmq-discuss] Possible memory leak in the management plugin
Travis Mehlinger tmehlinger at gmail.com
Mon Jun 17 20:08:10 BST 2013
I have more information for you. It turns out I hadn't fully understood the
interaction causing this to happen.
Aside from their regular communication, our services also declare a queue
bound on # (the topic wildcard) to an exchange that we use for collecting
the stats the services store internally. In addition to hitting the REST API for information about
the broker, the monitor also opens a connection/channel, declares an
anonymous queue for itself, then sends a message indicating to our services
that they should respond with their statistics. The services then send a
message with a routing key that will direct the response onto the queue
declared by the monitor. This happens every five seconds.
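For illustration, here is a minimal sketch of that request/reply cycle using
pika; the exchange and routing-key names are invented for the example rather
than the ones our services actually use:

    import pika

    # Hypothetical name; our real exchange and keys differ.
    STATS_EXCHANGE = 'service.stats'

    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()
    ch.exchange_declare(exchange=STATS_EXCHANGE, exchange_type='topic')

    # Anonymous exclusive queue for this monitor; the broker names it and
    # removes it when the connection closes.
    reply_queue = ch.queue_declare(queue='', exclusive=True).method.queue

    # Replies are routed onto the monitor's queue via its generated name.
    ch.queue_bind(queue=reply_queue, exchange=STATS_EXCHANGE,
                  routing_key=reply_queue)

    # Tell the services to report their internal statistics.
    ch.basic_publish(exchange=STATS_EXCHANGE, routing_key='stats.request',
                     properties=pika.BasicProperties(reply_to=reply_queue),
                     body=b'report')

    # Collect replies until five seconds pass with no traffic.
    for method, props, body in ch.consume(reply_queue, inactivity_timeout=5):
        if body is None:
            break
        print(body)

    conn.close()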
It appears that this is in fact responsible for memory consumption growing
out of control. If I disable that aspect of monitoring and leave the REST
API monitor up, memory consumption stays level.
The problem seems reminiscent of issues described previously on this mailing
list. However, the queues we declare for stats collection are *not* mirrored.
Hope that helps narrow things down. :)
On Mon, Jun 17, 2013 at 12:58 PM, Travis Mehlinger <tmehlinger at gmail.com> wrote:
> Hi Simon,
> I flipped our monitor back on and let Rabbit consume some additional
> memory. Invoking the garbage collector had no impact.
> Let me know what further information you'd like to see and I'll be happy
> to provide it.
> Thanks, Travis
> On Mon, Jun 17, 2013 at 10:32 AM, Simon MacMullen <simon at rabbitmq.com> wrote:
>> On 17/06/13 15:45, Travis Mehlinger wrote:
>>> Hi Simon,
>>> Thanks for getting back to me. I'll need to restart our monitor and give
>>> it some time to leak the memory. I'll let you know the results sometime
>>> later today.
>>> One thing I failed to mention in my initial report: whenever we
>>> restarted one of our services, the queues they were using would get
>>> cleaned up (we have them set to auto-delete) and redeclared. Whenever we
>>> did that, we would see the memory consumption of the management DB fall
>>> off sharply before starting to rise again.
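>>> For context, a declare along these lines is what gives that auto-delete
>>> behavior (pika; the queue name is hypothetical):
>>>
>>>     # Auto-delete queues are removed once their last consumer disconnects,
>>>     # so restarting a service drops the queue and its accumulated history.
>>>     channel.queue_declare(queue='service.stats.q', auto_delete=True)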
>> That is presumably because the historical data the management plugin has
>> been retaining for those queues got thrown away. If you don't want to
>> retain this data at all, change the configuration as documented for the
>> management plugin.
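>> For example, the retention settings live under the management plugin's
>> application config; a sketch in rabbitmq.config form, with the key name as
>> documented for 3.1 and the values purely illustrative:
>>
>>     [{rabbitmq_management,
>>       %% {max-age-in-seconds, sample-interval-in-seconds} pairs per policy
>>       [{sample_retention_policies,
>>         [{global,   [{605, 5}, {3600, 60}]},
>>          {basic,    [{605, 5}]},
>>          {detailed, [{10, 5}]}]}]}].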
>> However, I (currently) don't believe it's this historical data you are
>> seeing as "leaking" since making queries should not cause any more of it to
>> be retained.
>> Cheers, Simon
>>> Let me know if you'd like any further information in the meantime.
>>> Best, Travis
>>> On Mon, Jun 17, 2013 at 6:39 AM, Simon MacMullen <simon at rabbitmq.com> wrote:
>>> Hi. Thanks for the report.
>>> My first guess is that garbage collection for the management DB
>>> process is happening too slowly. Can you invoke:
>>> $ rabbitmqctl eval
>>> and post the results?
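>>> A sketch of the sort of expression that would do it, assuming the
>>> management DB process is globally registered as rabbit_mgmt_db:
>>>
>>>     $ rabbitmqctl eval 'P = global:whereis_name(rabbit_mgmt_db),
>>>         M1 = process_info(P, memory),  %% memory before collection
>>>         garbage_collect(P),            %% force a full sweep of the process
>>>         M2 = process_info(P, memory),  %% memory after collection
>>>         {M1, M2}.'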
>>> Cheers, Simon
>>> On 15/06/13 03:09, Travis Mehlinger wrote:
>>> We recently upgraded RabbitMQ from 3.0.4 to 3.1.1 after noticing two bug
>>> fixes in 3.1.0 related to our RabbitMQ deployment:
>>> * 25524 fix memory leak in mirror queue slave with many publishing channels
>>> * 25290 fix per-queue memory leak recording stats for mirror queue slaves
>>> However, in our case, it seems that the management plugin may still have
>>> a memory leak. We have a monitoring agent that hits the REST API to
>>> gather information about the broker (number of queues, queue depths,
>>> etc.). With the monitoring agent running and making requests against the
>>> API, memory consumption steadily increased; when we stopped the agent,
>>> memory consumption in the management plugin leveled off.
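>>> For reference, each poll cycle does something like this (Python sketch;
>>> the default management port and credentials are assumptions):
>>>
>>>     import requests
>>>
>>>     # List every queue with its current message count.
>>>     r = requests.get('http://localhost:15672/api/queues',
>>>                      auth=('guest', 'guest'))
>>>     for q in r.json():
>>>         print(q['name'], q['messages'])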
>>> Here are a couple of graphs detailing memory consumption in the broker
>>> (figures are parsed from rabbitmqctl report). The first graph shows the
>>> ebb and flow of memory consumption in a number of components and the
>>> second shows just consumption by the management plugin. You can see
>>> pretty clearly where we stopped the monitoring agent at 1:20.
>>> We have two clustered brokers, both running RabbitMQ 3.1.1 on Erlang
>>> R14B-04.1. There are typically around 200 queues, about 20 of which are
>>> mirrored. There are generally about 200 consumers. Messages are queued
>>> and most queues typically sit idle.
>>> I'll be happy to provide any further diagnostic information.
>>> rabbitmq-discuss mailing list
>>> rabbitmq-discuss at lists.rabbitmq.com
>>> Simon MacMullen
>>> RabbitMQ, Pivotal
>> Simon MacMullen
>> RabbitMQ, Pivotal