[rabbitmq-discuss] Producers hanging when reaching high memory watermark on 1.8.1

Aaron Westendorf aaron at agoragames.com
Fri Aug 13 20:16:45 BST 2010


It's likely that flow control kicked in.  I don't know how that's
implemented in Spring, as each client is welcome to implement it as
they see fit.  You can turn it off at the broker thusly:

bofh at host:~$ cat /etc/rabbitmq/rabbitmq.config
[{rabbit, [{vm_memory_high_watermark, 0}]}].

If you have enough monitoring tools in place to keep tabs on Rabbit
and deal with your infrastructure / apps in advance of catastrophe, I
don't personally see a need for flow control.
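If you can't disable flow control at the broker, one client-side safeguard (addressing the timeout question below) is to run the publish on a worker thread and abandon it after a bounded wait. A minimal sketch in plain Java — `sendFn` is a stand-in for the real publish call, and all names here are illustrative, not part of any RabbitMQ API:

```java
import java.util.concurrent.*;

public class BoundedPublish {

    // Run a potentially-blocking send on a pool thread and give up
    // after timeoutMs instead of hanging the caller indefinitely.
    static void publishWithTimeout(ExecutorService pool, Runnable sendFn,
                                   long timeoutMs) throws Exception {
        Future<?> f = pool.submit(sendFn);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the stuck publisher thread
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        boolean timedOut = false;
        try {
            // Simulate a publish blocked by broker flow control.
            publishWithTimeout(pool, () -> {
                try {
                    Thread.sleep(60_000);
                } catch (InterruptedException ignored) {
                    // interrupted by cancel(true) after the timeout
                }
            }, 200);
        } catch (TimeoutException e) {
            timedOut = true;
        }
        pool.shutdownNow();
        System.out.println("timedOut=" + timedOut);
    }
}
```

Note that interruption only helps if the blocked call is actually interruptible; a thread stuck in a socket write may not respond to cancel(true), in which case closing the connection from another thread is the blunter fallback.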


On Fri, Aug 13, 2010 at 1:04 PM, Dave Greggory <davegreggory at yahoo.com> wrote:
> In one of our load tests, we found that RabbitMQ causes producers to hang when it reaches the high memory watermark. We expected RabbitMQ to hang, but not the producers. How can we work around it?
> Is there a way to configure RabbitMQ to close the connection or throw an exception from the client API when that happens, so the producers can respond? Or maybe there's a timeout I can specify for sends? If there's no timeout, can that thread be interrupted when it hangs?
> It's imperative that our client apps remain unaffected if RabbitMQ blows up in any way.
> We're connecting to RabbitMQ 1.8.1 nodes using 1.8.1 java client running under Spring-AMQP 1.0M1.
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss

Aaron Westendorf
Senior Software Engineer
Agora Games
359 Broadway
Troy, NY 12180
Phone: 518.268.1000
aaron at agoragames.com
