[rabbitmq-discuss] Implementing Prioritised Queues
jamesdear at gmail.com
Thu Oct 13 16:25:58 BST 2011
I notice that RabbitMQ's QoS (prefetch) appears to do a kind of
probabilistic load-balancing between the queues consumed on a channel. If the client could
specify a priority factor (or even better a probabilistic priority
factor) for each queue it subscribes to, and the server would then
load-balance according to the supplied factors, then this would solve
the problem entirely.
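No such per-queue priority factor exists server-side today, but the weighted draw could be approximated client-side: keep a local buffer per subscribed queue and pick which buffer to serve next with a weighted random choice. A minimal sketch, using nothing beyond the Python standard library (the function and parameter names are illustrative, not anything pika or RabbitMQ provides):

```python
import random

def pick_queue(buffers, weights, rng=random):
    """Choose which local per-queue buffer to serve next, weighted by
    the (hypothetical) priority factor each queue was subscribed with.

    buffers -- list of lists, one local message buffer per queue
    weights -- matching list of positive priority factors
    Returns the index of a non-empty buffer, or None if all are empty.
    """
    candidates = [(i, w) for i, (buf, w) in enumerate(zip(buffers, weights)) if buf]
    if not candidates:
        return None
    indices, ws = zip(*candidates)
    # Weighted random draw: a queue with factor 9 is served roughly 9x
    # as often as a queue with factor 1, but a lone non-empty queue is
    # always served, so idle capacity is never wasted.
    return rng.choices(indices, weights=ws, k=1)[0]
```

With factors (9, 1), Q1 gets roughly 90% of the consumer's attention while both queues are busy, and Q2 gets all of it whenever Q1 is idle.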
It would also add great flexibility for running clustered computing.
For example, suppose Alice and Bob both work for BigCorp, and each
have a cluster of 1000 cores. The compute resources required are not
uniform over time - there's lots of head-scratching time when Alice is
figuring out what calculation to do next and same for Bob. Hence Alice
is fine with Bob using her cores when they are inactive (i.e. when
Alice's queue is empty) but wants immediate access when she happens to
want to run her own numbers. Likewise for Bob. So they both ensure
that their clients subscribe to both queues, but Alice's cores
prioritize Alice's queue and Bob's cores prioritize Bob's queue.
I used to work in a large bank where they had this problem big time.
There were literally dozens of 500+ core 'silos' that were idle most
of the time, but were required to be that large so that when they
*were* needed they could kick out an answer rapidly. The different
groups didn't trust each other enough to give each other symmetric
access (for good reason - in reality groups never cared about other
groups), and siloization was the only alternative.
On 13 October 2011 13:55, James Dear <jamesdear at gmail.com> wrote:
> But if the application buffer is of bounded size (say < N messages),
> and the buffer is fed at equal rate from Q1 & Q2, then over a
> time-horizon >> N the buffer will become saturated with lower-priority
> Q2 messages and the application will start receiving Q1 and Q2 at an
> equal rate again. To solve this you could do one of:
> 1. Have an unlimited buffer or
> 2. Ensure the buffer receives Q1 more rapidly than Q2 or
> 3. Allow the buffer to reject/requeue when it gets saturated with Q2 or
> 4. Something else?
> Option 1 is not really feasible; option 2 means there's no longer a need
> for a buffer at all; I'm not sure about option 3, but it seems inefficient.
> Is there some other way?
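For what option 3 might look like: a sketch (plain Python, no broker; the class and method names are made up for illustration) of a bounded buffer that refuses low-priority messages once full and evicts one to make room for a high-priority arrival. In a real pika consumer, a refusal or eviction would become a basic.nack with requeue=True so the broker keeps the message.

```python
from collections import deque

class BoundedBuffer:
    """Bounded client-side buffer that never lets low-priority (Q2)
    traffic crowd out high-priority (Q1) traffic."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.q1 = deque()  # high-priority messages
        self.q2 = deque()  # low-priority messages

    def offer(self, msg, priority):
        """Return True if accepted; False means the caller should
        reject/requeue the delivery (e.g. basic.nack, requeue=True)."""
        if priority == 1:
            if len(self.q1) + len(self.q2) >= self.capacity and self.q2:
                # Evict a low-priority message to make room; a real
                # consumer would nack it so the broker requeues it.
                self.q2.pop()
            if len(self.q1) < self.capacity:
                self.q1.append(msg)
                return True
            return False
        # Low priority: only fill spare capacity.
        if len(self.q1) + len(self.q2) < self.capacity:
            self.q2.append(msg)
            return True
        return False

    def take(self):
        """Drain high-priority messages first."""
        if self.q1:
            return self.q1.popleft()
        if self.q2:
            return self.q2.popleft()
        return None
```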
> On 13 October 2011 12:23, Eugene Kirpichov <ekirpichov at gmail.com> wrote:
>> Just consume both queues and internally, in your application,
>> implement "getting next message" as "next from your application's
>> buffer associated with Q1 if it's nonempty, otherwise next from Q2's buffer".
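Eugene's buffer-first rule might be sketched like this (plain Python; the deques stand in for local buffers that basic.consume callbacks would append to, and the names are illustrative):

```python
from collections import deque

# Local buffers that the on-message callbacks for Q1 and Q2 would fill.
q1_buffer, q2_buffer = deque(), deque()

def get_next_message():
    """Serve Q1's buffer first; fall back to Q2's only when Q1's is
    empty. Returns None when both are empty (a real consumer would
    then block waiting for more deliveries)."""
    if q1_buffer:
        return q1_buffer.popleft()
    if q2_buffer:
        return q2_buffer.popleft()
    return None
```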
>> On Thu, Oct 13, 2011 at 3:17 PM, James Dear <jamesdear at gmail.com> wrote:
>>> Referring back to a thread active in Feb, it seems the recommended way
>>> to implement a priority queue is to have multiple queues, one for each
>>> priority level:
>>>> The usual way of implementing priorities is to use separate queues for
>>>> each priority. So if you need two priority levels then declare queues Q1
>>>> and Q2. Route messages to those queues based on priority and have
>>>> clients consume from queues in that order. So Q2 messages won't be
>>>> consumed unless Q1 is empty.
>>> How do you actually implement the logic "consume Q2 iff Q1 empty else
>>> consume Q1"? I've read through much of the docs but can't see anything
>>> other than using GET, but I thought I read somewhere that GET was much
>>> slower than CONSUME.
>>> Can anyone help me with this? By the way, I'm writing in python/pika,
>>> although any language example would help me.
>>> rabbitmq-discuss mailing list
>>> rabbitmq-discuss at lists.rabbitmq.com
>> Eugene Kirpichov
>> Principal Engineer, Mirantis Inc. http://www.mirantis.com/
>> Editor, http://fprog.ru/