[rabbitmq-discuss] cluster memory alarm blocks producer too long

Matthias Radestock matthias at rabbitmq.com
Mon Jul 8 08:29:09 BST 2013


On 08/07/13 06:07, richmonkey wrote:
> I have decided not to use RabbitMQ mirrored queues.
> The testing work has stopped.

Just to be clear, are you saying that the problem went away when using 
ordinary, non-mirrored queues?

> To reproduce the problem, just start a few very simple producers and a
> few very simple consumers, following the steps that I mentioned.

I tried that and cannot reproduce the problem.

I don't have the exact same environment - I'm on Ubuntu rather than 
CentOS and ran the two nodes on a single machine with 12GB of RAM, 
dropping vm_memory_high_watermark to 0.2 to compensate for the 
sharing - but that shouldn't make a difference.
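
For reference, this is a minimal sketch of the setting I used, assuming 
the classic Erlang-term rabbitmq.config format (the node needs a restart 
after changing it):

   %% rabbitmq.config - raise the memory alarm at 20% of installed RAM
   [
     {rabbit, [
       {vm_memory_high_watermark, 0.2}
     ]}
   ].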

I set the ha policy with

   rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'

and then ran our universal MulticastMain test program that ships with 
the Java client, as follows...

1) sh ./runjava.sh com.rabbitmq.examples.MulticastMain -u ha.q -x 4 -n 2 
-q 1 -f persistent

This creates a durable queue named 'ha.q', to which the above mirroring 
policy applies, and starts four producers that publish tiny (tens of 
bytes) persistent messages to it.
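
If you would rather not use MulticastMain, the producer side boils down 
to something like the following sketch with the Java client. This is an 
illustrative stand-in rather than the exact MulticastMain code, and it 
assumes a broker on localhost with the default vhost and credentials:

   import com.rabbitmq.client.*;

   public class TinyPersistentProducer {
       public static void main(String[] args) throws Exception {
           ConnectionFactory factory = new ConnectionFactory();
           factory.setHost("localhost");   // assumption: broker on localhost
           Connection conn = factory.newConnection();
           Channel ch = conn.createChannel();
           // durable queue whose name matches the "^ha\." policy pattern
           ch.queueDeclare("ha.q", true, false, false, null);
           byte[] body = "hello".getBytes();   // tiny payload, tens of bytes
           while (true) {
               // persistent (delivery mode 2) messages, published flat out
               ch.basicPublish("", "ha.q",
                               MessageProperties.MINIMAL_PERSISTENT_BASIC, body);
           }
       }
   }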

I then watched the queue length with 'rabbitmqctl list_queues' and when 
it hit ~1M (after about 3 minutes) I ran a second instance of 
MulticastMain with

2) sh ./runjava.sh com.rabbitmq.examples.MulticastMain -u ha.q -x 0 -f 
persistent -y 4

This starts four consumers that consume from the above queue in 'ack' mode.
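
Again, a rough equivalent outside MulticastMain looks like the sketch 
below (same assumptions as the producer sketch above; QueueingConsumer 
is the pull-style consumer the 2013-era Java client ships with):

   import com.rabbitmq.client.*;

   public class AckingConsumer {
       public static void main(String[] args) throws Exception {
           ConnectionFactory factory = new ConnectionFactory();
           factory.setHost("localhost");   // assumption: broker on localhost
           Connection conn = factory.newConnection();
           Channel ch = conn.createChannel();
           ch.queueDeclare("ha.q", true, false, false, null);
           QueueingConsumer consumer = new QueueingConsumer(ch);
           ch.basicConsume("ha.q", false, consumer);   // autoAck = false, i.e. 'ack' mode
           while (true) {
               QueueingConsumer.Delivery delivery = consumer.nextDelivery();
               // explicitly acknowledge each message after receiving it
               ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
           }
       }
   }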


I can run the above indefinitely without hitting the memory alarm, on 
either the node hosting the queue master or the node hosting the slave.
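
(To confirm that the queue really is mirrored while the test runs, 
something like

   rabbitmqctl list_policies
   rabbitmqctl list_queues name policy pid slave_pids

should show the ha-all policy attached to ha.q and a slave pid on the 
other node.)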


So I am guessing your test is not quite as straightforward as you 
describe, or our test environments are too different. To rule out the 
latter, please run the above test against your setup and report whether 
you see memory alarms - in particular the prolonged memory alarms at 
the slave that you encounter in your own tests.
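
When you do, the simplest way I know of to see whether (and for how 
long) an alarm is raised on a given node is to watch that node's log 
for the memory alarm set/cleared messages, or to poll

   rabbitmqctl -n rabbit@<node> status

whose output includes the current alarms, the memory use and the 
configured vm_memory_high_watermark / vm_memory_limit.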


Regards,

Matthias.

