[rabbitmq-discuss] Multiple rabbitmq servers, one or more consumers (.NET)

Alexandru Scvorţov alexandru at rabbitmq.com
Thu Feb 17 10:46:22 GMT 2011


Hi Bryan,

> I know there are other strategies to do this

In particular, RabbitMQ clustering (which has nothing to do with HA)
sounds a bit (but not much) like what you're doing.

> What I believe will happen is this (using a
> BasicQos of prefetchSize=0, prefetchCount=1):

> and consumers 2 (3, 4, 5, etc.) are sitting idle because message 2 has
> been allocated to consumer 1.

Sounds right.

> So, my basic question: is there a simple way to get message 2 not to
> block on message 1 and instead get delivered to a different consumer,
> or am I barking up the wrong tree here?

One option is simply not to set prefetchCount to 1.
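
For example, this is a per-model setting, and a higher prefetchCount
(or 0, meaning "no limit") lets the broker keep pushing deliveries
instead of stopping after the first unacknowledged one.  The value 10
below is just a placeholder:

    // prefetchSize=0: no byte limit; prefetchCount=10: up to ten
    // unacknowledged deliveries in flight to this model at once
    // (prefetchCount=0 would mean "unlimited").
    model.BasicQos(0, 10, false);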

Otherwise, you might have to send the same message multiple times and
then ensure that the copies don't get processed twice.  From the
broker's perspective, there's no difference between a pre-fetched
message and a normal message.
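
If you do go down that road, the usual trick is to stamp every copy of
a message with the same MessageId and have consumers skip ids they've
already handled.  A rough, untested sketch (the queue name "work" and
the in-memory HashSet are placeholders; spread-out consumers would need
a shared store such as a database table):

    using System;
    using System.Collections.Generic;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    static class DuplicateFiltering
    {
        static readonly HashSet<string> Seen = new HashSet<string>();

        // Publisher: every copy of the logical message carries the same id.
        public static void PublishCopy(IModel model, byte[] body, string messageId)
        {
            IBasicProperties props = model.CreateBasicProperties();
            props.MessageId = messageId;                 // e.g. Guid.NewGuid().ToString()
            model.BasicPublish("", "work", props, body); // default exchange -> queue "work"
        }

        // Consumer: only process ids we haven't seen before; ack either way.
        public static void Handle(IModel model, BasicDeliverEventArgs args,
                                  Action<BasicDeliverEventArgs> process)
        {
            if (Seen.Add(args.BasicProperties.MessageId))
                process(args);
            model.BasicAck(args.DeliveryTag, false);
        }
    }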

If you need a more intelligent prefetch (i.e. pre-fetch the message,
but basic.reject{requeue=true} it if it isn't processed within a
certain amount of time), you'll need to write your own consumer.
Starting from QueueingBasicConsumer sounds like a good idea.
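
For what it's worth, here's a rough, untested sketch of the shape such
a consumer could take (RequeueingConsumer, maxLocalWait and ProcessLoop
are names I've made up; I'm assuming the byte[]-based HandleBasicDeliver
signature of the current .NET client and .NET 4's BlockingCollection):

    using System;
    using System.Collections.Concurrent;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    // Deliveries are timestamped as they are pre-fetched; the worker loop
    // rejects (requeue=true) anything that has waited locally longer than
    // maxLocalWait, so the broker can hand it to a less busy consumer.
    public class RequeueingConsumer : DefaultBasicConsumer
    {
        private class Delivery
        {
            public BasicDeliverEventArgs Args;
            public DateTime ReceivedUtc;
        }

        private readonly BlockingCollection<Delivery> pending =
            new BlockingCollection<Delivery>();
        private readonly TimeSpan maxLocalWait;

        public RequeueingConsumer(IModel model, TimeSpan maxLocalWait)
            : base(model)
        {
            this.maxLocalWait = maxLocalWait;
        }

        public override void HandleBasicDeliver(string consumerTag,
            ulong deliveryTag, bool redelivered, string exchange,
            string routingKey, IBasicProperties properties, byte[] body)
        {
            var args = new BasicDeliverEventArgs(consumerTag, deliveryTag,
                redelivered, exchange, routingKey, properties, body);
            pending.Add(new Delivery { Args = args, ReceivedUtc = DateTime.UtcNow });
        }

        // Run this on the application's worker thread.
        public void ProcessLoop(Action<BasicDeliverEventArgs> handler)
        {
            foreach (Delivery d in pending.GetConsumingEnumerable())
            {
                if (DateTime.UtcNow - d.ReceivedUtc > maxLocalWait)
                {
                    // Sat here too long: give it back to the broker.
                    Model.BasicReject(d.Args.DeliveryTag, true);
                    continue;
                }
                handler(d.Args);
                Model.BasicAck(d.Args.DeliveryTag, false);
            }
        }
    }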

Does this answer your question?

Cheers,
Alex

On Mon, Feb 14, 2011 at 10:56:34AM -0600, Bryan Murphy wrote:
> I'm working on some advanced HA strategies for our infrastructure.
> One thing I'm experimenting with is setting up one or many consumers
> that can poll from queues on multiple RabbitMQ servers.
> 
> The idea is simple: set up multiple redundant servers configured the
> same.  Send messages round-robin to the various servers, and then poll
> the servers for messages.  If a server goes off the reservation, pull
> it from the list until it comes back online or is replaced with a new
> one, and keep processing messages from the remaining servers.
> 
> I know there are other strategies to do this, but I wanted to
> experiment with a few different techniques to see what works best for
> us and learn a little bit more about how the driver works.
> 
> I dug around in the .NET driver code and managed to pull together an
> implementation that uses a SharedQueue and multiple
> connections/models.  I had to extend QueueingBasicConsumer and
> BasicDeliverEventArgs to add a few properties that allow me to map the
> messages back to the appropriate connection/model for acking.
> 
> This appears to work great at first pass; however, I don't believe
> this will work well for long-running jobs in a load-balanced scenario
> (multiple consumers).  What I believe will happen is this (using a
> BasicQos of prefetchSize=0, prefetchCount=1):
> 
> message 1 --> rabbitmq server 1 --> consumer 1 shared local queue
> message 2 --> rabbitmq server 2 --> consumer 1 shared local queue
> 
> (BasicQos of 1 is per model, and there's a model per connection, which
> means there's a model per rabbitmq server)
> 
> locally:
> 
> consumer 1 shared local queue --> message 1 --> being processed
> consumer 1 shared local queue --> message 2 --> blocked on message 1
> 
> and consumers 2 (3, 4, 5, etc.) are sitting idle because message 2 has
> been allocated to consumer 1.
> 
> Instead of using model.BasicConsume, I could use model.BasicGet with a
> timeout to fetch messages, but I've found the performance of BasicGet
> to be dreadful in the past, so I wanted to try the BasicConsume
> approach first.
> 
> So, my basic question: is there a simple way to get message 2 not to
> block on message 1 and instead get delivered to a different consumer,
> or am I barking up the wrong tree here?
> 
> Thanks!
> Bryan
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss

