<div dir="ltr">Hello all,<br><br>I've searched and not seen this issue talked about on the mailing list so I wanted to generate some discussion regarding the round-robin nature of having multiple consumers on a single queue.<br>

The scenario I'm using RabbitMQ for is as follows:
 - Many producers send messages to a direct exchange, and those messages end up in a "job" queue
 - Many single-threaded consumers listen on a single job queue using basic_consume
 - When a consumer receives a job from the queue, it processes the job, puts the result on a reply queue and then acks the message (a rough sketch of this loop is below)

The round-robin nature of message deliveries is great for load balancing if all our jobs require equal processing time, but that isn't the case: jobs can range anywhere from a few seconds to a minute's worth of processing.
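
For reference, each consumer is basically a loop like the following. This is only a minimal sketch against amqplib, not the actual code: the connection details, queue names and the process() function are placeholders for my real setup.

from amqplib import client_0_8 as amqp

def process(body):
    # Placeholder for the real work; anywhere from a few seconds to a minute.
    return body

conn = amqp.Connection(host='localhost:5672', userid='guest', password='guest')
ch = conn.channel()
ch.queue_declare(queue='jobs')
ch.queue_declare(queue='results')

def handle_job(msg):
    result = process(msg.body)
    # Put the result on the reply queue (via the default exchange here).
    ch.basic_publish(amqp.Message(result), exchange='', routing_key='results')
    # Ack only once the work is done, so an unacked job can be redelivered.
    ch.basic_ack(msg.delivery_info['delivery_tag'])

ch.basic_consume(queue='jobs', callback=handle_job)
while True:
    ch.wait()    # blocks until the next delivery arrives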

I put together an artificial test where 3 consumers were using basic.consume on a single queue. Two consumers processed their jobs immediately, while the third slept for 30 seconds to simulate getting a "big job" before acking the message and consuming the next one. The result (which I believe is due to the round-robin delivery specified by AMQP) was that the two fast consumers pumped through their jobs and then sat idle, even though there were plenty of jobs remaining on the queue, while the third consumer was left busy "processing" all the "big jobs".
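
The gist of the slow consumer in that test (a sketch, not the actual test code) was just the same loop with a sleep before the ack:

import time
from amqplib import client_0_8 as amqp

conn = amqp.Connection(host='localhost:5672', userid='guest', password='guest')
ch = conn.channel()

def handle_big_job(msg):
    time.sleep(30)                                   # simulate a 30-second "big job"
    ch.basic_ack(msg.delivery_info['delivery_tag'])  # only then ack and take the next one

ch.basic_consume(queue='jobs', callback=handle_big_job)
while True:
    ch.wait()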

This was done using Barry Pederson's amqplib-0.3 library in Python.

I don't know Erlang well enough to check the source, but I assume RabbitMQ does something like assigning jobs to each consumer on an AMQP queue as they come in? If that's the case, I think the desired behaviour would be, rather than assigning a job to a particular consumer immediately, to assign it only when the consumer sends an ack for the last message it received, signifying that it is ready to process a new job. This would also work in the multi-threaded scenario where people want to ack a message immediately, before processing it, so that one thread can process the message contents while another downloads the next message.

As a work-around at the moment, I'm acking a message as soon as I receive it and cancelling the consume before processing the job, so that other jobs will be delivered to the other consumers. However, this means that if the job fails for whatever reason, it will not be delivered to another consumer for processing.
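
For the curious, a sketch of that work-around (again not my actual code, and it glosses over any deliveries that are already in flight when the cancel goes out):

from amqplib import client_0_8 as amqp

def process(body):
    return body          # placeholder for the real work

conn = amqp.Connection(host='localhost:5672', userid='guest', password='guest')
ch = conn.channel()

def handle_job(msg):
    # Ack and cancel straight away so that remaining jobs are delivered to
    # the other consumers while this one is busy.
    ch.basic_ack(msg.delivery_info['delivery_tag'])
    ch.basic_cancel(msg.delivery_info['consumer_tag'])
    try:
        result = process(msg.body)
        ch.basic_publish(amqp.Message(result), exchange='', routing_key='results')
    finally:
        # Re-register to pick up the next job. If process() failed, the job
        # itself is gone; it was already acked, so nothing gets redelivered.
        ch.basic_consume(queue='jobs', callback=handle_job)

ch.basic_consume(queue='jobs', callback=handle_job)
while True:
    ch.wait()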

I'm happy to put the test code up if anyone is interested in trying this for themselves.

Nathan.