[rabbitmq-discuss] 2.7.1 mirrored queues leak a lot of memory on slave nodes

Travis hcoyote at ghostar.org
Wed Feb 22 21:58:22 GMT 2012

On Wed, Feb 22, 2012 at 2:12 PM, Reverend Chip <rev.chip at gmail.com> wrote:
> I have a four-node 2.7.1 cluster.  I just started experimenting with
> mirrored queues.  One queue is mirrored across nodes 1 & 2, a second queue
> is mirrored across nodes 3 & 4.  I've been feeding a lot of large messages
> through, using delivery-mode 2.  Most of the messages I've purged, since
> the reader process can't keep up yet.
> Here's the problem: Memory usage.  Nodes 1 & 3, presumably the master
> nodes for the queues, have maintained a normal memory profile, 3-6GB.
> But nodes 2 & 4, the presumable slaves, have had their memory grow to
> 58GB each.  Worse, when I purged and then even deleted the queues, the
> memory usage did not go down.  It seems I may have to reboot these nodes
> to get the memory back, and obviously I can't use mirrored queues if
> they're going to make my nodes do this, which is disappointing.  I do
> have a workaround involving alternate exchanges, but the workaround can
> leave data stranded if a node is lost forever.
> Is there any other info I can provide to help diagnose and/or fix this?

I think we're experiencing the same thing.  We previously saw a memory
leak in 2.6.1, which was patched[1] in subsequent releases.  Since then
we've upgraded to 2.7.1 as well, and we're seeing slowly growing memory
usage on our slaves, which forces us to restart them periodically to
keep memory usage down.

In our case, we're using only two nodes with mirrored queues.
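For anyone else watching for this, the per-node and per-queue memory figures can be pulled with rabbitmqctl.  This is just a sketch: the node name (rabbit@node2) is a placeholder, and I believe the memory and slave_pids info items are supported on the 2.7.x series, so check your version's man page:

```shell
# Overall memory breakdown for a given node (point -n at the suspect slave)
rabbitmqctl -n rabbit@node2 status

# Per-queue memory plus the master and slave process IDs,
# to see which replica of each mirrored queue is growing
rabbitmqctl list_queues name messages memory pid slave_pids
```

Comparing the list_queues memory column against the node-level total in status is what showed us the growth was on the slave side rather than in the queues themselves.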


[1] This patch (http://hg.rabbitmq.com/rabbitmq-server/rev/89315378597d)
fixed our memory leak in 2.6.1; applying just the patch to 2.6.1 worked
great.  Something different in 2.7.1 must be causing this new leak,
though.

Travis Campbell
travis at ghostar.org
