That was the big thing I was missing - a single channel can consume from more than one queue! Thank you so much, that was a nice breakthrough.<div>Adam</div><div><br></div><div><div class="gmail_quote">On Tue, Oct 11, 2011 at 2:19 AM, Matthias Radestock <span dir="ltr"><<a href="mailto:matthias@rabbitmq.com">matthias@rabbitmq.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">Adam,<div class="im"><br>
<br>
On 11/10/11 02:48, Adam Rabung wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello,<br>
I am trying to implement a work queue in RabbitMQ. I have several<br>
machines that will act as worker pools for consuming my queues. I would<br>
like each job to be processed only once. Some jobs<br>
have environmental requirements, i.e., they must be executed on a machine with<br>
an SSD. Not all worker pools will meet these requirements. A first<br>
approach would be to have a queue for every permutation of requirements:<br>
"No Requirements", "Requires SSD", "Requires Certificate", etc and have<br>
each worker pool subscribe to all queues which it can handle.<br>
</blockquote>
<br></div>
That is a sound approach.<div class="im"><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
A majority of the jobs will have no requirements, so many worker pools<br>
will be underutilized.<br>
</blockquote>
<br></div>
As you say, the worker pools would subscribe to all the queues whose requirement combinations they can handle. That includes the "No Requirements" queue. The subscriptions can be active *simultaneously*, so work in the "No Requirements" queue would be round-robin routed across all workers, jobs in the "Requires SSD" queue would be routed only to workers that have SSDs, and so on.<br>
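To make the routing side concrete, here is a minimal sketch (not from the original thread) in Python using the pika client, assuming a broker on localhost. The queue names (jobs.no_requirements etc.) and the requirement labels are illustrative, not anything RabbitMQ prescribes; jobs are published directly to the queue matching their requirements via the default exchange.

```python
# Illustrative sketch: one queue per requirement combination, and a helper
# that maps a job's requirement set to the queue it should be published to.
# Queue names and requirement labels are hypothetical.

QUEUES = ["jobs.no_requirements", "jobs.requires_ssd", "jobs.requires_certificate"]

def queue_for(requirements):
    """Pick the queue matching a job's requirement set."""
    if not requirements:
        return "jobs.no_requirements"
    if requirements == {"ssd"}:
        return "jobs.requires_ssd"
    if requirements == {"certificate"}:
        return "jobs.requires_certificate"
    raise ValueError("no queue for requirements %r" % (requirements,))

def publish(job_body, requirements):
    """Publish a job to the appropriate queue via the default exchange."""
    # pika is imported here so queue_for() stays usable without a broker.
    import pika
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    for q in QUEUES:
        ch.queue_declare(queue=q, durable=True)
    # Empty exchange name = default exchange; routing key = queue name.
    ch.basic_publish(exchange="", routing_key=queue_for(requirements),
                     body=job_body)
    conn.close()
```

Publishing through the default exchange keeps the example simple; a direct exchange with one binding per queue would work equally well.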
<br>
If each worker uses a single channel with a basic.qos prefetch setting of 1 and subscribes to all the relevant queues on that channel, it will be fed work items one at a time from the subset of all these queues that have messages in them. There's some logic at the server that ensures this is reasonably fair, though you will most likely find that queues which can be handled by many workers drain faster than others.<br>
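A worker along the lines described might look like the following sketch (again assuming Python with pika and a broker on localhost; the queue names and capability labels are hypothetical). The key points from the paragraph above are all here: a single channel, basic.qos with prefetch_count=1, one basic_consume per queue the worker can handle, and an ack only after the job completes.

```python
# Illustrative worker sketch: consume from every queue this worker is
# capable of handling, one unacknowledged job at a time.

def queues_for_worker(capabilities):
    """Every worker handles the no-requirements queue, plus any queue
    whose requirement it can satisfy. Labels are hypothetical."""
    queues = ["jobs.no_requirements"]
    if "ssd" in capabilities:
        queues.append("jobs.requires_ssd")
    if "certificate" in capabilities:
        queues.append("jobs.requires_certificate")
    return queues

def do_work(body):
    """Stub job processor; replace with real work."""
    print("processing", body)

def handle_job(ch, method, properties, body):
    do_work(body)
    # Ack only after the job is done, so an unfinished job is redelivered
    # if this worker dies.
    ch.basic_ack(delivery_tag=method.delivery_tag)

def run_worker(capabilities):
    # pika is imported here so queues_for_worker() stays usable without a broker.
    import pika
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    # At most one unacked message at a time, across all consumers on this
    # channel -- the server feeds work items one at a time.
    ch.basic_qos(prefetch_count=1)
    for q in queues_for_worker(capabilities):
        ch.queue_declare(queue=q, durable=True)
        ch.basic_consume(queue=q, on_message_callback=handle_job)
    ch.start_consuming()
```

For example, `run_worker({"ssd"})` would consume from both the no-requirements queue and the SSD queue, receiving whichever has messages, one at a time.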
<br>
Regards,<br><font color="#888888">
<br>
Matthias.<br>
</font></blockquote></div><br></div>