[rabbitmq-discuss] Per-Connection Flow Control -- The Case Against
simon at rabbitmq.com
Fri May 25 13:12:15 BST 2012
(Please take this as a reply to your other mail too.)
There are a couple of things going on here. The key point is that a few
years ago, in the 1.7.0(!) timeframe, the decision was taken to make
queues prioritise delivering messages over accepting messages. The idea
was - of course - that all else being equal it's better for messages to
go out than come in.
The trouble is that it's possible for a queue to get into a state where
it is spending 100% of its time sending messages to consumers and
handling acks. With the current prioritisation scheme, this means that
no publishes are processed at all.
It's considerably easier to get into this state with many consumers and
a low prefetch count, since the queue has to do a bunch of accounting
as it constantly blocks and unblocks consumers.
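To make the starvation concrete, here is a toy sketch in plain Python - not RabbitMQ internals, and all names (`run_queue`, `consumers_ready`, and so on) are invented for illustration - of a queue loop that always prefers delivery work over accepting publishes:

```python
from collections import deque

def run_queue(initial_backlog, publishes, consumers_ready, steps):
    """Toy model of a queue that prioritises delivering over
    accepting. Each step does exactly one unit of work: deliver a
    message if a consumer is ready and the backlog is non-empty,
    otherwise accept one pending publish."""
    backlog = deque(range(initial_backlog))
    pending = deque(publishes)
    accepted = delivered = 0
    for _ in range(steps):
        if consumers_ready and backlog:
            # Delivery (and the ack bookkeeping that follows it)
            # always wins; with eager consumers this branch is
            # taken on every step while the backlog lasts.
            backlog.popleft()
            delivered += 1
        elif pending:
            backlog.append(pending.popleft())
            accepted += 1
    return accepted, delivered

# With eager consumers and a deep backlog, no publish is ever
# accepted even though plenty are waiting:
print(run_queue(100, list(range(50)), True, 100))   # (0, 100)
print(run_queue(100, list(range(50)), False, 100))  # (50, 0)
```

The point of the sketch is only the scheduling decision: as long as delivery work is available and strictly preferred, pending publishes make no progress at all.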
Before 2.8.0, this condition still existed, but since there was no flow
control, published messages would just back up in the channel and reader
processes. This is not good, since memory use balloons, and when the
queue is finally able to start accepting publishes again it has a huge
backlog of messages to work through - messages that clients believed
were published some time in the past.
So I don't think that flow control is the problem. But it is making the
problem rather more visible.
I am strongly tempted to just remove this prioritisation from the queue.
It would be easier to get into a state where queue lengths are growing
rather than shrinking, but behaviour would be more predictable, and I
think that's important.
Would you be interested in testing this for your use case?
In the meantime, I wonder whether 500 consumers and prefetch-count of 2
is what you really want. Normally I would expect a configuration like
that when you expect to take some time to process each message, but it
sounds like you're going fast enough that either 500 consumers are not
needed, or prefetch-count > 2 might lead to better performance.
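One common way to think about sizing prefetch (a general rule of thumb, not something from this thread) is that each consumer needs enough unacked messages in flight to cover the broker round trip while it processes each message. A rough sketch, with an invented helper name:

```python
import math

def min_prefetch(round_trip_ms, process_ms):
    """Rule-of-thumb lower bound on prefetch_count for one consumer:
    enough unacked messages in flight to cover the broker round trip
    while each message is being processed. Purely a sizing heuristic,
    with hypothetical parameter names."""
    return math.ceil(round_trip_ms / process_ms) + 1

# Fast handlers need a deeper prefetch than slow ones:
print(min_prefetch(round_trip_ms=5, process_ms=1))     # 6
print(min_prefetch(round_trip_ms=50, process_ms=200))  # 2
```

In a Python client such as pika, the value would then be applied with `channel.basic_qos(prefetch_count=...)` before starting the consumer. With prefetch-count of 2, the heuristic suggests the consumers are expected to spend far longer processing each message than the network round trip takes - which may not match a fast-moving workload.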
On 25/05/12 09:07, Chip Salzenberg wrote:
> The fundamental paradox of per-connection flow control is that it
> holds up the stop sign just when progress becomes possible. It is
> backwards. Consider:
> 1. A client is publishing 1.5K/sec to each of four exchanges, each of
> which has a queue.
> 2. There are no consumers. Therefore the queue is growing.
> 3. RMQ does not stop this.
> 4. The consumers appear and begin to tear down the backlog.
> 5. RMQ per-connection flow control suddenly decides that now there are
> some consumers, now it has a reason to throttle the sender.
> Therefore, it is only when the backlog can go DOWN that the broker
> decides to throttle the sender. Not when the backlog was GROWING, but
> when it could be SHRINKING, that's when RMQ decides to stop accepting.
> This is not acceptable.