[rabbitmq-discuss] Fairness wrt queues under heavy load

Matthias Radestock matthias at rabbitmq.com
Tue Oct 11 11:20:38 BST 2011


On 11/10/11 10:47, Eugene Kirpichov wrote:
> On Tue, Oct 11, 2011 at 12:09 PM, Matthias Radestock
> <matthias at rabbitmq.com>  wrote:
>> If the publishing rate exceeds the rabbit capacity, the broker will
>> eventually hit its memory threshold and throttle producers. The logs will
>> tell you when that happens. Memory pressure can take a while to dissipate,
>> sometimes several minutes. The cure for this is to set up a bigger rabbit.
> This could be the case, actually: I'm limiting the number of
> unconfirmed publishes on each queue, but not at a global level.
> Thanks, I'll look at the logs.
>
> What somewhat disproves this is that this *only* happens with
> transactional messages (establish channel, start transaction, publish
> message, commit, close channel - though the connection may be shared).

Hmm. It should be quite difficult to hit the memory limit when 
publishing in tx or confirm mode and committing/waiting after every message.

Anyway, check the logs for memory alarms.

Also, you should avoid sharing connections between producers and 
consumers. Otherwise the throttling of producers will also starve consumers.
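A toy sketch of why that starvation happens (plain Python, no broker or
client library required; the class and names are purely illustrative, not
any RabbitMQ API): when the broker raises a memory alarm it stops reading
from publishing connections, so acks travelling over the same connection
stall as well.

```python
# Illustrative model: a throttled connection blocks everything on it,
# including consumer acks; a separate consumer connection keeps working.

class Connection:
    def __init__(self):
        self.blocked = False  # set when the broker throttles publishers

    def send(self, frame):
        if self.blocked:
            raise BlockingIOError("connection throttled by memory alarm")
        return frame

shared = Connection()
shared.blocked = True          # broker raises a memory alarm

# The consumer's ack rides on the throttled shared connection and stalls:
try:
    shared.send("basic.ack")
    consumer_ok = False
except BlockingIOError:
    consumer_ok = True         # ack could not be delivered

# With a dedicated consumer connection, acks still flow:
consumer_conn = Connection()
assert consumer_conn.send("basic.ack") == "basic.ack"
```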

>> Messages pile up in queues when consumers don't consume them fast enough.
>> Since queues are FIFO structures, that manifests itself as a delay. To avoid
>> this delay for high-priority messages, make sure they get routed to a
>> separate queue and code consumers such that they consume from that queue in
>> addition to the regular queues.
> I am routing to a separate queue already.
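The FIFO delay described in the quoted paragraph can be illustrated with a
toy model (plain Python, no broker; the queue contents are illustrative): a
high-priority message enqueued behind a backlog waits for everything ahead
of it, while a dedicated queue hands it over immediately.

```python
from collections import deque

# Shared queue with a 1000-message backlog ahead of the urgent message:
shared = deque(f"bulk-{i}" for i in range(1000))
shared.append("urgent")

drained_before_urgent = 0
while shared[0] != "urgent":
    shared.popleft()
    drained_before_urgent += 1
# The urgent message waited behind the entire backlog:
assert drained_before_urgent == 1000

# With a separate queue for high-priority traffic there is no backlog:
priority = deque(["urgent"])
assert priority.popleft() == "urgent"
```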

When you experience the delay for a high-priority message, check whether 
the message is sitting in the queue. If it is, then there is a problem on 
the consuming side, i.e. rabbit thinks there is no consumer ready to 
accept the message.
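One quick way to perform that check (assuming rabbitmqctl is available on
the broker host) is to list per-queue message and consumer counts:

```shell
# For each queue, show ready/unacknowledged message counts and attached
# consumers; a growing messages_ready with zero consumers points at the
# consuming side.
rabbitmqctl list_queues name messages_ready messages_unacknowledged consumers
```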


Matthias.

