[rabbitmq-discuss] Load Balanced Consumers?
James Carr
james.r.carr at gmail.com
Tue Aug 16 17:15:45 BST 2011
We completed some experiments, and here are our findings.
We set up a three-node broker (RabbitMQ running on three separate boxes)
and clustered them together. The publisher publishes through a load
balancer (Crescendo) while the consumer declared a queue on one node
and subscribed to all three nodes. The behavior we saw is that the consumer
got unique messages from all three nodes. For example, it might get
messages 1, 2, and 4 from node one, 3 and 7 from node two, and 5 and 6 from
node three. The published messages reached the broker as fast as they were
published (1,000/s on average), and publishing performed well. Likewise, the
consumer performed well and got messages as fast as they came in.
We also tried just connecting a consumer to one broker and got similar
results.
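
For reference, the consumer side was essentially the sketch below, written
here against the plain RabbitMQ Java client rather than spring-amqp; the host
names and the "work" queue are just placeholders. It opens one connection per
node and consumes the same queue on each:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class MultiNodeConsumer {
    public static void main(String[] args) throws Exception {
        // Placeholder host names; one connection per cluster node.
        String[] nodes = {"rabbit-node1", "rabbit-node2", "rabbit-node3"};
        for (final String node : nodes) {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost(node);
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            // Declare the same durable queue on every connection; the cluster
            // delivers from whichever node actually hosts the queue.
            channel.queueDeclare("work", true, false, false, null);
            channel.basicConsume("work", true, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties, byte[] body) {
                    System.out.println(node + ": " + new String(body));
                }
            });
        }
    }
}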
Putting the consumer behind a load balancer, however, resulted in
slow-as-snot consuming. For now we've decided against that approach.
You're right, my aim is to be able to take any node in the cluster
down for maintenance without impacting application performance or
message delivery. All of our apps use spring-amqp, which
auto-reconnects if their connections get reset.
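
For what it's worth, the listener wiring is roughly the sketch below (the
queue name and addresses are placeholders, and it assumes a Spring AMQP
version whose connection factory takes a fallback address list via
setAddresses); the container just re-attempts the connection whenever it
gets reset:

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class WorkQueueListener {
    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory();
        // Fallback address list: if the current node goes away, the next
        // connection attempt tries the remaining nodes.
        cf.setAddresses("rabbit-node1:5672,rabbit-node2:5672,rabbit-node3:5672");

        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("work");
        container.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                System.out.println(new String(message.getBody()));
            }
        });
        // afterPropertiesSet() is normally invoked by the Spring context;
        // call it explicitly when wiring by hand, then start the listener.
        container.afterPropertiesSet();
        container.start();
    }
}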
So far it seems like having the consumer connect to each of the boxes
is the best approach, albeit slightly heavy. I'd definitely like to
know some of the approaches others have taken to achieve HA. :)
Thanks,
James
On Tue, Aug 16, 2011 at 9:36 AM, Michael Bridgen <mikeb at rabbitmq.com> wrote:
>> In all the material I've read on clustering and load balancing the
>> setup I always see is publishers publishing through a load balancer
>> while consumers consume from all the nodes on the cluster.
>
>> Is it possible to place the consumers behind a load balancer? I'm
>> just asking, we plan on trying it on Monday with our cluster at work.
>> The setup we're thinking of is one balancer for publishers, two for
>> consumers (the consumers will consume from both), and divide the
>> cluster into segments that the consumer balancers forward to.
>
> Load-balancing for publishers is for handling high throughput,
> the theory being that more connections can be opened and more messages
> processed. Routing is done by channel processes, so that will to some
> extent[1] scale along with the number of connections.
>
> The overall scaling behaviour is probably[2] mostly dependent on the
> routing topology; i.e., to how many queues each message is routed. If
> messages are being fanned out to many queues, that will of course
> mitigate the benefit of handling many publishers and incoming messages at
> once.
>
> You're right that no-one has really gone into load-balancing consumers (or
> at least, has not reported doing so here). Possibly this is because how
> queues and consumers are deployed is more tied in with application logic
> (e.g., routing) than publishing is -- i.e., it's different for everyone, and
> no one scheme works across the board. Publishing isn't stateful but
> consuming is.
>
> From a throughput point of view, taking a single use case -- let's say
> publishing to a single queue and consuming from it -- I guess the idea would
> be to consume on more than one connection, on the basis that the queue would
> load-balance across those connections and each connection would be on a
> different node. I'm not sure how much this would improve performance, since
> the queue (located on one node) still has to deliver messages across nodes.
> See [2] though.
>
> It sounds from your brief description that you are more interested in failover --
> e.g., if a node fails, another node will continue to deliver messages. Since
> queues are located at a particular node, this won't work in that mode,
> presently[3]; unless you are, somewhere, prepared to duplicate messages
> (e.g., with a fanout exchange and a queue bound to it from each segment) and
> deduplicate at the clients (and even then, it doesn't quite work, as the
> round-robin at each queue wouldn't be the same). But perhaps I have
> misunderstood what you're trying to achieve.
>
> [1] To some extent because it does involve database reads and other
> overheads.
>
> [2] Nothing beats actually measuring.
>
> [3] Replicated queues are coming soon.
>
>> Is this an ideal solution? The only problem comes when the nodes
>> that both connect to go down at the same time, but I am putting money
>> on that not happening. We'll see. :)
>>
>> Thanks, James
>