[rabbitmq-discuss] Re: Possible memory leak in the management plugin

Simon MacMullen simon at rabbitmq.com
Tue Jun 18 16:47:08 BST 2013


OK, that definitely looks like a leak. Could you also give me the output 
from:

rabbitmqctl eval '{[{T, ets:info(T, size), ets:info(T, memory)} ||
     T <- lists:sort(ets:all()),
     rabbit_mgmt_db <- [ets:info(T, name)]],
 sys:get_status(global:whereis_name(rabbit_mgmt_db))}.'

to make sure I'm clear on which table is leaking.
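
If it's easier, something like this will capture a sample every minute
(the interval and log path are arbitrary):

while true; do
    date >> /tmp/mgmt_db_tables.log
    rabbitmqctl eval '{[{T, ets:info(T, size), ets:info(T, memory)} ||
         T <- lists:sort(ets:all()),
         rabbit_mgmt_db <- [ets:info(T, name)]],
     sys:get_status(global:whereis_name(rabbit_mgmt_db))}.' \
        >> /tmp/mgmt_db_tables.log
    sleep 60
done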

Cheers, Simon

On 18/06/13 16:11, Travis Mehlinger wrote:
> Hi Simon,
>
> We declare those queues as exclusive so they're getting cleaned up
> automatically.
>
> I ran the command you gave periodically over the course of the last two
> hours. The row count and total size in the highlighted line are
> definitely growing unchecked. All other values hovered closely around
> what you see in the gist.
>
> https://gist.github.com/tmehlinger/0c9a9a0d5fe1d31c8f6d#file-gistfile1-txt-L9
>
> Thanks, Travis
>
>
> On Tue, Jun 18, 2013 at 5:23 AM, Simon MacMullen <simon at rabbitmq.com>
> wrote:
>
>     Hi. So I assume your monitoring code is not actually leaking those
>     queues - they are getting deleted? How? (Are they auto-delete,
>     exclusive, x-expires, deleted manually?)
>
>     If so, can you run:
>
>     rabbitmqctl eval '[{ets:info(T,size), ets:info(T,memory)} || T <-
>     lists:sort(ets:all()), rabbit_mgmt_db <- [ets:info(T, name)]].'
>
>     periodically? This will output a list of tuples showing the rows
>     and bytes for each table in the mgmt DB. Do these increase?
>
>     Cheers, Simon
>
>
>     On 17/06/13 20:08, Travis Mehlinger wrote:
>
>         Hi Simon,
>
>         I have more information for you. It turns out I hadn't fully
>         understood
>         the interaction causing this to happen.
>
>         Aside from their regular communication, our services also declare a
>         queue bound on # to an exchange that we use for collecting stats
>         the services store internally. In addition to hitting the REST API
>         for information about the broker, the monitor also opens a
>         connection/channel, declares an anonymous queue for itself, then
>         sends a message indicating to our services that they should respond
>         with their statistics. The services then send a message with a
>         routing key that will direct the response onto the queue declared
>         by the monitor. This happens every five seconds.
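>
>         For illustration only, the pattern looks roughly like this in
>         the Erlang client; the exchange name, routing key and payload
>         below are placeholders, not our real ones:
>
>         -module(stats_poller).
>         %% minimal sketch of the monitor's stats round-trip
>         -include_lib("amqp_client/include/amqp_client.hrl").
>         -export([poll_once/0]).
>
>         poll_once() ->
>             {ok, Conn} = amqp_connection:start(#amqp_params_network{}),
>             {ok, Ch} = amqp_connection:open_channel(Conn),
>             %% anonymous exclusive queue; the broker deletes it when the
>             %% connection closes
>             #'queue.declare_ok'{queue = ReplyQ} =
>                 amqp_channel:call(Ch, #'queue.declare'{exclusive = true}),
>             %% tell the services where to send their stats
>             amqp_channel:cast(Ch,
>                 #'basic.publish'{exchange = <<"svc.stats">>,
>                                  routing_key = <<"stats.request">>},
>                 #amqp_msg{props = #'P_basic'{reply_to = ReplyQ},
>                           payload = <<"send-stats">>}),
>             %% ...consume the replies from ReplyQ, then tear down...
>             amqp_connection:close(Conn).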
>
>         It appears that this is in fact responsible for memory consumption
>         growing out of control. If I disable that aspect of monitoring
>         and leave
>         the REST API monitor up, memory consumption stays level.
>
>         The problem seems reminiscent of the issues described in this
>         mailing
>         list thread:
>         http://rabbitmq.1065348.n5.nabble.com/RabbitMQ-Queues-memory-leak-td25813.html
>         However, the queues we declare for stats collection are *not*
>         mirrored.
>
>         Hope that helps narrow things down. :)
>
>         Best, Travis
>
>
>         On Mon, Jun 17, 2013 at 12:58 PM, Travis Mehlinger
>         <tmehlinger at gmail.com> wrote:
>
>              Hi Simon,
>
>              I flipped our monitor back on and let Rabbit consume some
>         additional
>              memory. Invoking the garbage collector had no impact.
>
>              Let me know what further information you'd like to see and
>         I'll be
>              happy to provide it.
>
>              Thanks, Travis
>
>
>              On Mon, Jun 17, 2013 at 10:32 AM, Simon MacMullen
>              <simon at rabbitmq.com> wrote:
>
>                  On 17/06/13 15:45, Travis Mehlinger wrote:
>
>                      Hi Simon,
>
>                      Thanks for getting back to me. I'll need to restart
>                      our monitor and give it some time to leak the memory.
>                      I'll let you know the results sometime later today.
>
>                      One thing I failed to mention in my initial report:
>                      whenever we restarted one of our services, the queues
>                      they were using would get cleaned up (we have them set
>                      to auto-delete) and redeclared. Whenever we did that,
>                      we would see the memory consumption of the management
>                      DB fall off sharply before starting to rise again.
>
>
>                  That is presumably because the historical data the
>                  management plugin has been retaining for those queues got
>                  thrown away. If you don't want to retain this data at all,
>                  change the configuration as documented here:
>
>         http://www.rabbitmq.com/management.html#sample-retention
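>
>                  Roughly, something along these lines in rabbitmq.config
>                  (the policy values here are only an example; each pair is
>                  {MaxAgeInSeconds, SampleEveryNSeconds}):
>
>                  [{rabbitmq_management,
>                    [{sample_retention_policies,
>                      [{global,   [{60, 5}]},  %% keep ~1 minute of samples
>                       {basic,    [{60, 5}]},
>                       {detailed, [{10, 5}]}]}]}].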
>
>                  However, I (currently) don't believe it's this historical
>                  data you are seeing as "leaking", since making queries
>                  should not cause any more of it to be retained.
>
>                  Cheers, Simon
>
>                      Let me know if you'd like any further information
>                      in the meantime.
>
>                      Best, Travis
>
>
>                      On Mon, Jun 17, 2013 at 6:39 AM, Simon MacMullen
>                      <simon at rabbitmq.com> wrote:
>
>                           Hi. Thanks for the report.
>
>                           My first guess is that garbage collection for the
>                      management DB
>                           process is happening too slowly. Can you invoke:
>
>                           $ rabbitmqctl eval
>                           'P = global:whereis_name(rabbit_mgmt_db),
>                            M1 = process_info(P, memory),
>                            garbage_collect(P),
>                            M2 = process_info(P, memory),
>                            {M1, M2, rabbit_vm:memory()}.'
>
>
>
>                           and post the results?
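>
>                           The result should show the management DB
>                           process's memory before and after the forced
>                           GC, plus rabbit_vm:memory()'s per-area
>                           breakdown; schematically, with made-up figures:
>
>                           {{memory, 512000000},   %% before the GC
>                            {memory, 498000000},   %% after the GC
>                            [{total, ...}, ...]}   %% rabbit_vm:memory()
>
>                           If the second figure is much lower than the
>                           first, slow GC is the likely culprit.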
>
>                           Cheers, Simon
>
>                           On 15/06/13 03:09, Travis Mehlinger wrote:
>
>                               We recently upgraded RabbitMQ from 3.0.4 to
>                               3.1.1 after noticing two bug fixes in 3.1.0
>                               related to our RabbitMQ deployment:
>
>                                  * 25524 fix memory leak in mirror queue
>                                    slave with many short-lived publishing
>                                    channels
>                                  * 25290 fix per-queue memory leak recording
>                                    stats for mirror queue slaves
>
>                               However, in our case, it seems that the
>                               management plugin may still have a memory
>                               leak. We have a monitoring agent that hits
>                               the REST API to gather information about the
>                               broker (number of queues, queue depth, etc.).
>                               With the monitoring agent running and making
>                               requests against the API, memory consumption
>                               steadily increased; when we stopped the
>                               agent, memory consumption in the management
>                               plugin leveled off.
>
>                               Here are a couple of graphs detailing memory
>                               consumption in the broker (the figures are
>                               parsed from rabbitmqctl report). The first
>                               graph shows the ebb and flow of memory
>                               consumption in a number of components and the
>                               second shows just consumption by the
>                               management plugin. You can see pretty clearly
>                               where we stopped the monitoring agent at 1:20.
>
>         https://dl.dropboxusercontent.com/u/7022167/Screenshots/n-np6obt-m9f.png
>         https://dl.dropboxusercontent.com/u/7022167/Screenshots/an6dpup33xvx.png
>
>                               We have two clustered brokers, both running
>                               RabbitMQ 3.1.1 on Erlang R14B-04.1. There are
>                               typically around 200 queues, about 20 of which
>                               are mirrored. There are generally about 200
>                               consumers. Messages are rarely queued and most
>                               queues typically sit idle.
>
>                               I'll be happy to provide any further
>                               diagnostic information.
>
>                               Thanks!
>
>
>
>                               _______________________________________________
>                               rabbitmq-discuss mailing list
>                               rabbitmq-discuss at lists.rabbitmq.com
>                               https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>
>
>                           --
>                           Simon MacMullen
>                           RabbitMQ, Pivotal
>
>
>
>
>                  --
>                  Simon MacMullen
>                  RabbitMQ, Pivotal
>
>
>
>
>
>     --
>     Simon MacMullen
>     RabbitMQ, Pivotal
>
>


-- 
Simon MacMullen
RabbitMQ, Pivotal

