[rabbitmq-discuss] Fwd: Excessive memory consumption of one server in cluster setup

Laing, Michael P. Michael.Laing at nytimes.com
Mon Aug 27 16:53:54 BST 2012


I have not yet run into this issue but I have a question:

Would it be appropriate to use 'rabbitmqctl set_vm_memory_high_watermark'
to set a low value and temporarily pause publishers, then restart the
cluster nodes, and then reset it to the normal value?

Thanks.

Michael

On 8/27/12 11:22 AM, "Matthias Radestock" <matthias at rabbitmq.com> wrote:

>Matthias,
>
>On 27/08/12 16:02, Matthias Reik wrote:
>> even though the setup looks slightly different (since we are not
>> using the shovel plugin), the reason could be the same. We are
>> explicitly ACKing the messages (i.e. no auto-ack), even though the
>> consumers are in the same data centers (so we should have a reliable
>> network), but if the acks are lost and that causes the memory increase
>> in the server, then it could be the same bug.
>
>As noted in my analysis, the bug has nothing to do with the shovel or
>with consuming/acking - simply publishing to HA queues when (re)starting
>slaves is sufficient to trigger it.
>
>> Is there anything I could do to validate this assumption?
>
>I don't think it's worth the hassle. I am quite certain that you are
>suffering from the same bug.
>
>> Is there anything I can do in the meantime to get into a state where
>> I have a working cluster again?
>
>Pause all publishing before (re)starting any cluster nodes.
>
>Regards,
>
>Matthias.


