[rabbitmq-discuss] Fairness wrt queues under heavy load

Eugene Kirpichov ekirpichov at gmail.com
Wed Oct 12 06:19:33 BST 2011


Hello,

I have some more information to share.

* The messages that take a long time to deliver are sitting in the
queues (state "ready"), according to the management interface.
* I looked in the logs for the scenario where my "high-priority"
messages were taking ages to deliver. There was absolutely nothing
suspicious; not a single warning report.
* I replaced the use of transactions with a simple
WaitForConfirmsOrDie() call after publishing each message. This didn't
change anything: the messages still took ages to deliver, and there
was still nothing in the logs.
* However, moving the queues for the high-priority messages to a
different server solved the problem immediately. So the problem is
*not* in my consumers or producers of these messages - it most likely
lies in rabbit itself.
* To add some information about the producers/consumers: the producers
shared a connection (though not a channel) with other code that was
publishing a lot of unrelated messages; the consumer listened only to
the high-priority messages.
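For reference, the change in the third bullet - per-message transactions
replaced by a confirm-based publish - looks roughly like this when written
against Python's pika client (an illustrative sketch only, not my actual
code, which uses the .NET client; pika's BlockingChannel method names are
assumed, and WaitForConfirmsOrDie() is the .NET client's name for the same
idea):

```python
# Illustrative sketch (hypothetical pika-based equivalent of the change
# described above): the channel object is assumed to follow pika's
# BlockingChannel API.

def publish_transactional(channel, exchange, routing_key, body):
    """Old approach: one AMQP transaction per message."""
    channel.tx_select()
    channel.basic_publish(exchange=exchange, routing_key=routing_key, body=body)
    channel.tx_commit()

def publish_confirmed(channel, exchange, routing_key, body):
    """New approach: publisher confirms. With pika's BlockingChannel,
    basic_publish blocks until the broker confirms the message -
    roughly what the .NET client's WaitForConfirmsOrDie() does."""
    channel.confirm_delivery()  # usually enabled once per channel
    channel.basic_publish(exchange=exchange, routing_key=routing_key, body=body)
```

Either way, each publish only completes once the broker has accepted the
message, which is why the switch was not expected to change throughput
much - and indeed it didn't.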

On Tue, Oct 11, 2011 at 2:20 PM, Matthias Radestock
<matthias at rabbitmq.com> wrote:
> On 11/10/11 10:47, Eugene Kirpichov wrote:
>>
>> On Tue, Oct 11, 2011 at 12:09 PM, Matthias Radestock
>> <matthias at rabbitmq.com>  wrote:
>>>
>>> If the publishing rate exceeds the rabbit capacity, the broker will
>>> eventually hit its memory threshold and throttle producers. The logs will
>>> tell you when that happens. Memory pressure can take a while to
>>> dissipate,
>>> sometimes several minutes. The cure for this is to set up a bigger
>>> rabbit.
>>
>> This could be the case, actually: I'm limiting the number of
>> unconfirmed publishes on each queue, but not at a global level.
>> Thanks, I'll look at the logs.
>>
>> What somewhat disproves this is that this *only* happens with
>> transactional messages (establish channel, start transaction, publish
>> message, commit, close channel - though the connection may be shared).
>
> Hmm. It should be quite difficult to hit the memory limit when publishing in
> tx or confirm mode and committing/waiting after every message.
>
> Anyway, check the logs for memory alarms.
>
> Also, you should avoid sharing connections between producers and consumers.
> Otherwise the throttling of producers will also starve consumers.
>
>>> Messages pile up in queues when consumers don't consume them fast enough.
>>> Since queues are FIFO structures that manifests itself as a delay. To
>>> avoid
>>> this delay for high priority messages, make sure they get routed to a
>>> separate queue and code consumers s.t. they consume from that queue in
>>> addition to the regular queues.
>>
>> I am routing to a separate queue already.
>
> When you experience the delay for a high priority message, check whether the
> message is sitting in the queue. If it is then there is a problem at the
> consuming side, i.e. rabbit thinks there is no consumer ready to accept the
> message.
>
>
> Matthias.
>
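For what it's worth, the FIFO delay Matthias describes is easy to see in a
toy model (plain Python, nothing rabbit-specific): a high-priority message
that lands on a shared queue behind a backlog waits for the whole backlog
to drain, while a dedicated queue hands it over immediately.

```python
# Toy model of FIFO head-of-line blocking; not rabbit internals.
from collections import deque

def delivery_position(backlog, message):
    """Number of messages drained before `message` is delivered
    from a FIFO queue that already holds `backlog`."""
    queue = deque(backlog)
    queue.append(message)
    pos = 0
    while queue:
        if queue.popleft() == message:
            return pos
        pos += 1

backlog = ["bulk-%d" % i for i in range(10_000)]
shared = delivery_position(backlog, "high-priority")     # behind the backlog
dedicated = delivery_position([], "high-priority")       # own queue
print(shared, dedicated)  # 10000 0
```

This is exactly why routing the high-priority messages to their own queue
(and having a consumer on it) removes the delay, independent of how busy
the other queues are.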



-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/

