[rabbitmq-discuss] publish/commit rate drops to 0 when consuming large queues from new persister

Matthew Sackman matthew at lshift.net
Fri Mar 12 15:09:30 GMT 2010


Hi Alex,

On Thu, Mar 11, 2010 at 06:16:39PM -0800, alex chen wrote:
> We use the new persister and tx_commit on publish. When there is a large
> number of messages backed up in the queues, starting the consumers blocks
> the publish+commit. It can take more than 20 seconds for the client to
> receive tx_commit_ok.
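For reference, the publish-in-a-transaction pattern you describe looks roughly like this (a minimal Python/pika sketch; the connection details and queue name are placeholders, not taken from your setup):

    import pika

    # Hypothetical connection details and queue name, for illustration only.
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='backlog_queue', durable=True)

    # Put the channel into transactional mode, then publish and commit.
    # tx_commit() does not return until the broker sends tx.commit-ok,
    # which is the call that can take 20+ seconds under a large backlog.
    channel.tx_select()
    channel.basic_publish(
        exchange='',
        routing_key='backlog_queue',
        body=b'payload',
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )
    channel.tx_commit()
    connection.close()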

Yup, I can duplicate this here. I'm afraid there really is no easy
solution to this - certainly no knob as you suggest. Rabbit is built to
get rid of messages as a priority, so publishes will be slower on very
long queues. I'll raise a bug for this, but it's very tricky to work out
exactly what the "correct" behaviour should be here - essentially the
commits are being starved because higher-priority traffic is coming back
from the consumer.

The simple (if perhaps unhelpful) solution is not to let your queues get
so long, or not to let your consumers consume messages so fast - i.e.
set qos to a lowish value (between 1 and 10) and don't use auto-acking.
Certainly my testing shows the commit rate holds up much better with qos
set on the consumer.
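By way of illustration, setting a low prefetch and acking manually with Python/pika might look like the sketch below (the queue name and prefetch value are assumptions, not part of your setup):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # A low prefetch count limits how many unacked messages the broker will
    # push to this consumer at once, which slows the consumer down and
    # leaves the broker room to service publishes and commits.
    channel.basic_qos(prefetch_count=10)

    def on_message(ch, method, properties, body):
        # ... process the message ...
        # Explicit ack (no auto-ack), so the broker paces deliveries.
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue='backlog_queue',
                          on_message_callback=on_message,
                          auto_ack=False)
    channel.start_consuming()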

Matthew
