[rabbitmq-discuss] Fairness wrt queues under heavy load
matthias at rabbitmq.com
Tue Oct 11 09:09:22 BST 2011
On 07/10/11 16:30, Eugene Kirpichov wrote:
> I've got a large cluster with many consumers, and several producers
> who publish a very intense steady stream of messages in
> publish/confirm mode.
> Some components occasionally submit a single but very important and
> urgent message in transactional mode, which takes up to 5-10 minutes
> to be delivered.
> Is this expected behavior?
> How can I avoid these obscenely large delays?
That is most likely due to backlogs. The cure depends on where the backlog is.
Let's follow the path of a message from producer to consumer...
If the producer publishes at a high rate then backlogs will build up in the
client buffers, network buffers, server-side network buffers and buffers
inside rabbit. These backlogs can be substantial, though they are unlikely
to account for minutes' worth of delay. The way to avoid them is to publish
high-priority messages on a separate connection from ordinary messages.
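To see why a shared connection hurts, here is a back-of-the-envelope sketch (pure Python with illustrative numbers, not client-library code): an urgent message published on the same connection waits behind the entire buffered backlog, whereas on a dedicated connection it is effectively first in line.

```python
from collections import deque

def drain_time(buffer, target, per_msg_ms):
    """Milliseconds until `target` leaves a FIFO buffer that is drained
    at a rate of one message per `per_msg_ms` milliseconds."""
    return (buffer.index(target) + 1) * per_msg_ms

# Shared connection: the urgent message queues behind 100,000 bulk messages.
shared = deque(f"bulk-{i}" for i in range(100_000))
shared.append("urgent")

# Dedicated connection: the urgent message has its own, empty buffer.
dedicated = deque(["urgent"])

print(drain_time(shared, "urgent", 1))     # 100001 ms, stuck behind the backlog
print(drain_time(dedicated, "urgent", 1))  # 1 ms on its own connection
```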
If the publishing rate exceeds rabbit's capacity, the broker will
eventually hit its memory threshold and throttle producers. The logs
will tell you when that happens. Memory pressure can take a while to
dissipate, sometimes several minutes. Since memory-based throttling
blocks all publishing connections on the overloaded node, the cure for
this is to set up a separate broker for the urgent traffic, out of
reach of the throttling.
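For reference, the memory threshold that triggers this throttling is controlled by vm_memory_high_watermark; a minimal rabbitmq.config sketch (0.4, i.e. 40% of system RAM, is the default):

```erlang
%% rabbitmq.config (classic Erlang-terms format)
[
  {rabbit, [
    %% Start throttling publishers once the broker uses 40% of system RAM.
    {vm_memory_high_watermark, 0.4}
  ]}
].
```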
Messages pile up in queues when consumers don't consume them fast
enough. Since queues are FIFO structures, that manifests itself as a
delay. To avoid this delay for high-priority messages, make sure they
get routed to a separate queue, and code the consumers so that they
consume from that queue in addition to the regular queues.
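The two-queue consumer can be sketched in Python, modelling the broker queues as local queue.Queue objects (all names here are hypothetical): the consumer drains the urgent queue before taking anything from the regular one.

```python
import queue

# Two "broker" queues, modelled locally for illustration.
urgent_q = queue.Queue()
bulk_q = queue.Queue()

for i in range(3):
    bulk_q.put(f"bulk-{i}")
urgent_q.put("urgent")

def next_message():
    """Drain the urgent queue first; fall back to the bulk queue."""
    try:
        return urgent_q.get_nowait()
    except queue.Empty:
        return bulk_q.get_nowait()

order = [next_message() for _ in range(4)]
print(order)  # ['urgent', 'bulk-0', 'bulk-1', 'bulk-2']
```

The urgent message jumps ahead even though three bulk messages were enqueued first, because it never shares a FIFO with them.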
At the consuming end, messages can pile up in network buffers and,
depending on the client library and how it is used, in client-side
buffers. To avoid that, set a basic.qos prefetch limit on the consuming
channels.
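The effect of a prefetch limit can be sketched as follows (a pure-Python simulation, not client-library code): without basic.qos the broker pushes the whole backlog into the client buffer, while with a prefetch limit at most that many unacknowledged messages are in flight at a time.

```python
def deliver(backlog, prefetch=None):
    """Simulate broker-to-consumer delivery. With no prefetch limit the
    whole backlog lands in the client buffer; with a limit, the broker
    holds back anything beyond `prefetch` unacknowledged deliveries.
    (In real basic.qos, a prefetch_count of 0 means "no limit".)"""
    buffer = []
    for msg in backlog:
        if prefetch is not None and len(buffer) >= prefetch:
            break  # broker waits for acks before sending more
        buffer.append(msg)
    return buffer

backlog = list(range(10_000))
print(len(deliver(backlog)))               # 10000: everything buffered client-side
print(len(deliver(backlog, prefetch=50)))  # 50: bounded in-flight window
```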