<div dir="ltr">Hi Simon,<div><br></div><div>We declare those queues as exclusive so they&#39;re getting cleaned up automatically.</div><div><br></div><div style>I ran the command you gave periodically over the course of the last two hours. The row count and total size in the highlighted line are definitely growing unchecked. All other values hovered closely around what you see in the gist.</div>
<div style><br></div><div style><a href="https://gist.github.com/tmehlinger/0c9a9a0d5fe1d31c8f6d#file-gistfile1-txt-L9">https://gist.github.com/tmehlinger/0c9a9a0d5fe1d31c8f6d#file-gistfile1-txt-L9</a><br></div><div style>
<br></div><div style>Thanks, Travis</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jun 18, 2013 at 5:23 AM, Simon MacMullen <span dir="ltr">&lt;<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi. So I assume your monitoring code is not actually leaking those queues - they are getting deleted I assume? How? (Are they autodelete, exclusive, x-expires, deleted manually?)<br>

<br>
If so, can you run:<br>
<br>
rabbitmqctl eval &#39;[{ets:info(T,size), ets:info(T,memory)} || T &lt;- lists:sort(ets:all()), rabbit_mgmt_db &lt;- [ets:info(T, name)]].&#39;<br>
<br>
periodically? This will output a list of {Rows, Memory} tuples, one per table in the mgmt DB (note that ets reports memory in words, not bytes). Do these increase?<br>
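For anyone following along, one way to watch for growth is to diff successive samples. A minimal Python sketch, assuming the eval output is a flat list of {Rows, Memory} tuples like `[{100,2048},{5,512}]` (the parsing here is a loose regex, not a full Erlang term parser):

```python
import re

def parse_eval_output(text):
    """Parse rabbitmqctl eval output like '[{100,2048},{5,512}]'
    into a list of (rows, memory) tuples."""
    return [(int(a), int(b))
            for a, b in re.findall(r"\{(\d+),\s*(\d+)\}", text)]

def totals(sample):
    """Sum rows and memory across all tables in one sample."""
    rows = sum(r for r, _ in sample)
    mem = sum(m for _, m in sample)
    return rows, mem

# Example: compare two samples taken a few minutes apart.
earlier = parse_eval_output("[{100,2048},{5,512}]")
later = parse_eval_output("[{250,8192},{5,512}]")
growth = (totals(later)[0] - totals(earlier)[0],
          totals(later)[1] - totals(earlier)[1])
print(growth)  # rows and memory gained between samples -> (150, 6144)
```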
<br>
Cheers, Simon<div class="im"><br>
<br>
On 17/06/13 20:08, Travis Mehlinger wrote:<br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Simon,<br>
<br><div class="im">
I have more information for you. It turns out I hadn&#39;t fully understood<br>
the interaction causing this to happen.<br>
<br>
Aside from their regular communication, our services also declare a<br>
queue bound on # to an exchange that we use for collecting stats the<br>
services store internally. In addition to hitting the REST API for<br>
information about the broker, the monitor also opens a<br>
connection/channel, declares an anonymous queue for itself, then sends a<br>
message indicating to our services that they should respond with their<br>
statistics. The services then send a message with a routing key that<br>
will direct the response onto the queue declared by the monitor. This<br>
happens every five seconds.<br>
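To make the shape of that interaction concrete, here is a pure-Python stand-in for the topic-exchange rendezvous just described - an illustrative sketch, not our actual client code; all names here are made up. A binding key of "#" matches any routing key; otherwise this toy model requires an exact match (real AMQP topic matching is richer):

```python
import uuid
from collections import defaultdict

class TopicExchange:
    """Toy topic exchange: queues are plain lists of messages."""
    def __init__(self):
        self.bindings = defaultdict(list)  # binding key -> list of queues

    def bind(self, queue, binding_key):
        self.bindings[binding_key].append(queue)

    def publish(self, routing_key, message):
        for binding_key, queues in self.bindings.items():
            if binding_key == "#" or binding_key == routing_key:
                for queue in queues:
                    queue.append(message)

exchange = TopicExchange()

# Each service keeps a queue bound on "#", so it sees every message.
service_queues = [[] for _ in range(3)]
for q in service_queues:
    exchange.bind(q, "#")

# Every cycle the monitor declares a fresh anonymous queue, binds it
# with a unique key, and broadcasts a request carrying that key.
reply_key = "stats.reply." + uuid.uuid4().hex
monitor_queue = []
exchange.bind(monitor_queue, reply_key)
exchange.publish("stats.request", {"reply_to": reply_key})

# Each service answers on the monitor's reply key, so the responses
# land on the monitor's short-lived queue. (Note the "#"-bound service
# queues receive the replies too, since "#" matches everything.)
requests = [q[0] for q in service_queues]
for i, request in enumerate(requests):
    exchange.publish(request["reply_to"], {"service": i, "stats": {}})

print(len(monitor_queue))  # 3 -- one reply per service
```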
<br>
It appears that this is in fact responsible for memory consumption<br>
growing out of control. If I disable that aspect of monitoring and leave<br>
the REST API monitor up, memory consumption stays level.<br>
<br>
The problem seems reminiscent of the issues described in this mailing<br>
list thread:<br>
<a href="http://rabbitmq.1065348.n5.nabble.com/RabbitMQ-Queues-memory-leak-td25813.html" target="_blank">http://rabbitmq.1065348.n5.<u></u>nabble.com/RabbitMQ-Queues-<u></u>memory-leak-td25813.html</a>.<br>
However, the queues we declare for stats collection are *not* mirrored.<br>
<br>
Hope that helps narrow things down. :)<br>
<br>
Best, Travis<br>
<br>
<br>
On Mon, Jun 17, 2013 at 12:58 PM, Travis Mehlinger &lt;<a href="mailto:tmehlinger@gmail.com" target="_blank">tmehlinger@gmail.com</a>&gt; wrote:<br></div><div class="im">
<br>
    Hi Simon,<br>
<br>
    I flipped our monitor back on and let Rabbit consume some additional<br>
    memory. Invoking the garbage collector had no impact.<br>
<br>
    Let me know what further information you&#39;d like to see and I&#39;ll be<br>
    happy to provide it.<br>
<br>
    Thanks, Travis<br>
<br>
<br>
    On Mon, Jun 17, 2013 at 10:32 AM, Simon MacMullen<br></div><div class="im">
    &lt;<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a> &lt;mailto:<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a>&gt;&gt; wrote:<br>
<br>
        On 17/06/13 15:45, Travis Mehlinger wrote:<br>
<br>
            Hi Simon,<br>
<br>
            Thanks for getting back to me. I&#39;ll need to restart our<br>
            monitor and give<br>
            it some time to leak the memory. I&#39;ll let you know the<br>
            results sometime<br>
            later today.<br>
<br>
            One thing I failed to mention in my initial report: whenever we<br>
            restarted one of our services, the queues they were using<br>
            would get<br>
            cleaned up (we have them set to auto-delete) and redeclared.<br>
            Whenever we<br>
            did that, we would see the memory consumption of the<br>
            management DB fall<br>
            off sharply before starting to rise again.<br>
<br>
<br>
        That is presumably because the historical data the management<br>
        plugin has been retaining for those queues got thrown away. If<br>
        you don&#39;t want to retain this data at all, change the<br>
        configuration as documented here:<br>
<br></div>
        <a href="http://www.rabbitmq.com/__management.html#sample-__retention" target="_blank">http://www.rabbitmq.com/__<u></u>management.html#sample-__<u></u>retention</a><div class="im"><br>
        &lt;<a href="http://www.rabbitmq.com/management.html#sample-retention" target="_blank">http://www.rabbitmq.com/<u></u>management.html#sample-<u></u>retention</a>&gt;<br>
<br>
        However, I (currently) don&#39;t believe it&#39;s this historical data<br>
        you are seeing as &quot;leaking&quot; since making queries should not<br>
        cause any more of it to be retained.<br>
<br>
        Cheers, Simon<br>
<br>
            Let me know if you&#39;d like any further information in the<br>
            meantime.<br>
<br>
            Best, Travis<br>
<br>
<br>
            On Mon, Jun 17, 2013 at 6:39 AM, Simon MacMullen<br>
            &lt;<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a> &lt;mailto:<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a>&gt;<br></div><div class="im">
            &lt;mailto:<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a> &lt;mailto:<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a>&gt;&gt;&gt; wrote:<br>
<br>
                 Hi. Thanks for the report.<br>
<br>
                 My first guess is that garbage collection for the<br>
            management DB<br>
                 process is happening too slowly. Can you invoke:<br>
<br>
                 $ rabbitmqctl eval<br>
<br></div>
            &#39;P=global:whereis_name(rabbit_mgmt_db),M1=process_info(P,<br>
                 memory),garbage_collect(P),M2=process_info(P,<br>
                 memory),{M1,M2,rabbit_vm:memory()}.&#39;<div><div class="h5"><br>
<br>
<br>
                 and post the results?<br>
<br>
                 Cheers, Simon<br>
<br>
                 On 15/06/13 03:09, Travis Mehlinger wrote:<br>
<br>
                     We recently upgraded RabbitMQ from 3.0.4 to 3.1.1<br>
            after noticing<br>
                     two bug<br>
                     fixes in 3.1.0 related to our RabbitMQ deployment:<br>
<br>
                        * 25524 fix memory leak in mirror queue slave<br>
            with many<br>
                     short-lived<br>
                          publishing channels<br>
                        * 25290 fix per-queue memory leak recording<br>
            stats for mirror<br>
                     queue slaves<br>
<br>
                     However, in our case, it seems that the management<br>
            plugin may<br>
                     still have<br>
                     a memory leak. We have a monitoring agent that hits<br>
            the REST API to<br>
                     gather information about the broker (number of<br>
            queues, queue depth,<br>
                     etc.). With the monitoring agent running and making<br>
            requests<br>
                     against the<br>
                     API, memory consumption steadily increased; when we<br>
            stopped the<br>
                     agent,<br>
                     memory consumption in the management plugin leveled<br>
            off.<br>
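As an aside, for anyone trying to reproduce this: the monitoring agent amounts to polling the management HTTP API on a timer. A stdlib-only sketch, assuming the default guest/guest credentials and port 15672 (the live request is isolated in `fetch()` so the rest of the snippet stands alone without a broker):

```python
import base64
import json
from urllib.request import Request, urlopen

def make_request(path, host="localhost", port=15672,
                 user="guest", password="guest"):
    """Build an authenticated Request for a management API endpoint."""
    url = "http://%s:%d/api/%s" % (host, port, path)
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return Request(url, headers={"Authorization": "Basic " + token})

def fetch(path):
    """Perform the request; this part requires a running broker."""
    with urlopen(make_request(path)) as resp:
        return json.loads(resp.read())

# e.g. poll /api/queues every few seconds for names and depths
req = make_request("queues")
print(req.full_url)                     # http://localhost:15672/api/queues
print(req.get_header("Authorization"))  # Basic Z3Vlc3Q6Z3Vlc3Q=
```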
<br>
                     Here a couple graphs detailing memory consumption<br>
            in the broker (the<br>
                     figures are parsed from rabbitmqctl report). The<br>
            first graph<br>
                     shows the<br>
                     ebb and flow of memory consumption in a number of<br>
            components and the<br>
                     second shows just consumption by the management<br>
            plugin. You can see<br>
                     pretty clearly where we stopped the monitoring<br>
            agent at 1:20.<br>
<br></div></div>
            <a href="https://dl.dropboxusercontent." target="_blank">https://dl.dropboxusercontent.</a><u></u>____com/u/7022167/Screenshots/<u></u>n-____np6obt-m9f.png<br>
<br>
            &lt;<a href="https://dl." target="_blank">https://dl.</a>__<a href="http://dropboxusercontent.com/u/__7022167/Screenshots/n-np6obt-__m9f.png" target="_blank">dropboxuserconte<u></u>nt.com/u/__7022167/<u></u>Screenshots/n-np6obt-__m9f.png</a><br>

            &lt;<a href="https://dl.dropboxusercontent.com/u/7022167/Screenshots/n-np6obt-m9f.png" target="_blank">https://dl.<u></u>dropboxusercontent.com/u/<u></u>7022167/Screenshots/n-np6obt-<u></u>m9f.png</a>&gt;&gt;<br>

            <a href="https://dl.dropboxusercontent." target="_blank">https://dl.dropboxusercontent.</a><u></u>____com/u/7022167/Screenshots/<u></u>____an6dpup33xvx.png<br>
<br>
<br>
            &lt;<a href="https://dl." target="_blank">https://dl.</a>__<a href="http://dropboxusercontent.com/u/__7022167/Screenshots/__an6dpup33xvx.png" target="_blank">dropboxuserconte<u></u>nt.com/u/__7022167/<u></u>Screenshots/__an6dpup33xvx.png</a><div class="im">
<br>
            &lt;<a href="https://dl.dropboxusercontent.com/u/7022167/Screenshots/an6dpup33xvx.png" target="_blank">https://dl.<u></u>dropboxusercontent.com/u/<u></u>7022167/Screenshots/<u></u>an6dpup33xvx.png</a>&gt;&gt;<br>

<br>
                     We have two clustered brokers, both running<br>
            RabbitMQ 3.1.1 on Erlang<br>
                     R14B-04.1. There are typically around 200 queues,<br>
            about 20 of<br>
                     which are<br>
                     mirrored. There are generally about 200 consumers.<br>
            Messages are<br>
                     rarely<br>
                     queued and most queues typically sit idle.<br>
<br>
                     I&#39;ll be happy to provide any further diagnostic<br>
            information.<br>
<br>
                     Thanks!<br>
<br>
<br></div>
                     _______________________________________________<br>
                     rabbitmq-discuss mailing list<br>
                     <a href="mailto:rabbitmq-discuss@lists.rabbitmq.com" target="_blank">rabbitmq-discuss@lists.rabbitmq.com</a><br>
            <a href="https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><div class="im">
<br>
<br>
<br>
<br>
                 --<br>
                 Simon MacMullen<br>
                 RabbitMQ, Pivotal<br>
<br>
<br>
<br>
<br>
        --<br>
        Simon MacMullen<br>
        RabbitMQ, Pivotal<br>
<br>
<br>
<br>
</div></blockquote><div class="HOEnZb"><div class="h5">
<br>
<br>
-- <br>
Simon MacMullen<br>
RabbitMQ, Pivotal<br>
</div></div></blockquote></div><br></div>