[rabbitmq-discuss] When free disk space is below the threshold, how do I prevent my clients from blocking when closing a channel?
mauro.gandelli at gmail.com
Thu Apr 3 16:47:57 BST 2014
In my application, after calling IModel.BasicPublish(), a subsequent call to
IModel.Close() blocks.
On my development machine I noticed that the call unblocks and proceeds once
I free enough disk space to get back above the threshold (that's how I
figured out it was the flow control mechanism blocking my connection). In
production, my clients must not block when this happens.
According to this article <https://www.rabbitmq.com/memory.html> on flow
control:
> There are two flow control mechanisms in RabbitMQ. Both work by exerting
> TCP backpressure on connections that are publishing too fast. They are:
> - A per-connection mechanism that prevents messages being published
> faster than they can be routed to queues.
> - A global mechanism that prevents any messages from being published
> when the memory usage <https://www.rabbitmq.com/memory-use.html> exceeds
> a configured threshold or free disk space drops below a configured limit.
> Both mechanisms will temporarily *block* connections - the server will
> pause reading from the sockets of connected clients which send
> content-bearing methods (such as basic.publish) which have been blocked.
> Connection heartbeat monitoring will be disabled too.
At the bottom of that article there's a link to this other article
<https://www.rabbitmq.com/connection-blocked.html>, which shows a way to
handle blocked connections through events in both Java and .NET (I'm using
the .NET API).
What is the best way to handle this situation in the client? Do I have to
implement those event handlers and prevent any further calls to
IModel.BasicPublish() while the connection is blocked?
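For example, I imagine something along these lines, gating publishes on the
ConnectionBlocked/ConnectionUnblocked events that the linked article
describes (the class and method names here are mine, just to illustrate the
idea; this is a sketch, not tested code):

```csharp
using System;
using System.Threading;
using RabbitMQ.Client;

// Illustrative sketch: track the broker's blocked/unblocked notifications
// and refuse to publish (with a timeout) while the connection is blocked,
// so BasicPublish/Close don't stall indefinitely on TCP backpressure.
class BlockAwarePublisher
{
    // Signaled while the connection is NOT blocked.
    private readonly ManualResetEventSlim _unblocked =
        new ManualResetEventSlim(true);

    public BlockAwarePublisher(IConnection conn)
    {
        conn.ConnectionBlocked   += (sender, args) => _unblocked.Reset();
        conn.ConnectionUnblocked += (sender, args) => _unblocked.Set();
    }

    // Returns false instead of blocking forever when the broker has
    // blocked the connection; the caller can then buffer, drop, or alert.
    public bool TryPublish(IModel channel, string exchange,
                           string routingKey, byte[] body, TimeSpan timeout)
    {
        if (!_unblocked.Wait(timeout))
            return false; // connection still blocked after the timeout
        channel.BasicPublish(exchange, routingKey, null, body);
        return true;
    }
}
```

Is this the intended usage pattern, or is there a better-established way?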
RabbitMQ Version: 3.1.0
Client .NET Version: 188.8.131.52