[rabbitmq-discuss] Excessive memory consumption of one server in cluster setup

Matthias Reik matthias.reik at gmail.com
Mon Aug 27 10:26:19 BST 2012


I just upgraded to 2.8.6, and I see the same effect with the latest version
:-(
(not really unexpected, since nothing was fixed in that regard in 2.8.6).

Within about 10 minutes, server2 was already consuming 200% more memory
than server1, even though all clients are connected to server1 and there
are no queues with a TTL anymore.

(Removing the remaining queues is unfortunately not possible, since it's a
production server :-(, and I haven't seen this effect on any of our other
(development) servers.)
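
For anyone who wants to reproduce the comparison: a minimal way (assuming
the node names are rabbit@server1 and rabbit@server2; substitute your own)
is to run the same eval against both nodes:

-bash-3.2$ sbin/rabbitmqctl -n rabbit@server1 eval 'erlang:memory().'
-bash-3.2$ sbin/rabbitmqctl -n rabbit@server2 eval 'erlang:memory().'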

Cheers
Matthias

On Fri, Aug 24, 2012 at 2:25 PM, Matthias Reik <maze at reik.se> wrote:

> Hi,
>
> Two days ago I upgraded our RabbitMQ cluster (2 machines running in HA
> mode) from 2.8.1 to 2.8.5, mainly to get the OOM (out-of-memory) fixes,
> since we had been hitting exactly those issues.
>
> The upgrade went very smoothly, but at some point one of the machines
> (server2) started to allocate more and more memory (even though all
> queues are more or less at 0, with almost no outstanding acks).
>
> server1 uses ~200 MB
> server2 (at the point where I took it down) used ~6 GB
>
> I ran rabbitmqctl report, but it didn't give me any insights.
> I ran rabbitmqctl eval 'erlang:memory().', but that didn't tell me much
> more either (I will attach the output at the end).
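>
> To dig further, one could rank the Erlang processes by memory. This is a
> sketch using only standard Erlang introspection (I haven't run it on the
> production node yet); the pattern match in the comprehension skips
> processes that died between the two calls, for which process_info/2
> returns undefined:
>
> sbin/rabbitmqctl eval 'lists:sublist(lists:reverse(lists:keysort(2,
>     [{P, M} || P <- erlang:processes(),
>                {memory, M} <- [erlang:process_info(P, memory)]])), 10).'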
>
> I found people with similar problems:
> http://grokbase.com/t/rabbitmq/rabbitmq-discuss/1223qcx3gg/rabbitmq-memory-usage-in-two-node-cluster
> but that was a while back, so many things might have changed since then.
> Also, the memory difference there was rather minimal, whereas here the
> difference is _very_ significant, especially since the node with less
> load is the one with the increased memory footprint.
>
> I can upgrade to 2.8.6 (unfortunately I upgraded just before it was
> released :-(), but I only want to do that if there is some hope that the
> problem is solved there.
> I can bring server2 back online and try to investigate what is consuming
> that much memory, but my RabbitMQ/Erlang knowledge is not good enough for
> that, so I'm reaching out for some help.
>
> So any help would be much appreciated.
>
> Thx
> Matthias
>
> Our setup is something like the following:
>       2 servers exclusively running RabbitMQ on CentOS 5.x (high
> watermark ~22 GB),
>             - both with the web console enabled
>             - both defined as disk nodes
>             - both running RabbitMQ 2.8.5 on Erlang R15B01 (after the
> upgrade; Erlang was already at R15 before)
>     10 queues configured with mirroring
>       3 queues configured (without mirroring) only on server1, with a TTL
>     Most consumers connect to server1; server2 is only used in case of
> failover
>
> We get about 1k messages/sec into the system (with peaks much higher
> than that), and each message is passed through several queues for
> processing.
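>
> To see whether the queue processes themselves hold on to the memory, the
> per-queue memory can be listed directly (if I read the rabbitmqctl man
> page correctly, memory and slave_pids are valid queueinfoitems in 2.8.x):
>
> sbin/rabbitmqctl list_queues name messages memory slave_pids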
>
> -bash-3.2$ sbin/rabbitmqctl eval 'erlang:memory().'
> [{total,5445584424},
>  {processes,2184155418},
>  {processes_used,2184122352},
>  {system,3261429006},
>  {atom,703377},
>  {atom_used,678425},
>  {binary,3216386480},
>  {code,17978535},
>  {ets,4142048}]
> ...done.
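>
> Note that binary (~3.2 GB) accounts for most of the total (~5.4 GB). One
> way to check whether those binaries are merely unreclaimed garbage (a
> sketch; forcing a full GC on every process is heavy and may briefly
> pause the node) is:
>
> sbin/rabbitmqctl eval '[erlang:garbage_collect(P) || P <- erlang:processes()],
>     erlang:memory(binary).'
>
> If the binary figure drops sharply afterwards, the memory was only
> reachable as garbage; if not, some process genuinely holds references
> to those binaries.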