[rabbitmq-discuss] Mirrored queues behind a load balancer

Girish Gangadharan girish at giri.sh
Tue Jun 18 19:00:26 BST 2013


Thank you.

On Sun, Jun 16, 2013 at 11:24 PM, Jason McIntosh <mcintoshj at gmail.com> wrote:

> Queues exist (definition-wise) on all nodes in a cluster.  Unless a queue
> is mirrored, its messages exist only on a single node.  You can mirror a
> queue's messages to N of the cluster's nodes.  The admin console is an
> overall view of the cluster, so you'll see all the available
> nodes/queues/exchanges/etc.
>
> One reason you'd NOT have every queue's messages automatically replicated
> to all other nodes is scaling.  If you had a cluster of 10 machines and
> every queue mirrored to all of them, then each message would have to hit
> all 10 nodes before being acknowledged (in a publisher-confirms setup, as
> I recall).  You CAN instead say a queue must be mirrored on N nodes,
> giving you a failover capacity of N.
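>
> For example (a minimal sketch; the policy name "mirror-3" and the queue
> pattern "^work\." are hypothetical), a policy that keeps each matching
> queue on exactly 3 nodes would look something like:
>
>   rabbitmqctl set_policy mirror-3 "^work\." '{"ha-mode":"exactly","ha-params":3}'
>
> With '{"ha-mode":"all"}' every node carries a copy, which is where the
> scaling cost described above comes from.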
>
> Rabbit is also a master/slave system for writes: for any given queue, only
> ONE node in the cluster is the "master".  The other nodes are there for
> failover/scaling support, i.e. queues can be created on any of the nodes
> in a cluster to distribute load.  In the event of a failure on the master,
> a slave node takes over the master's position.  When the old master comes
> back online it becomes a slave, but unless you're on 3.1 with its automatic
> synchronisation features, it will only hold messages from the time it
> recovered onwards, leaving an indeterminate amount of time before it is
> back in sync.
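>
> A quick way to check whether a recovered mirror has caught up (assuming a
> 3.x rabbitmqctl) is:
>
>   rabbitmqctl list_queues name slave_pids synchronised_slave_pids
>
> A slave that shows up in slave_pids but not in synchronised_slave_pids is
> still out of sync; 3.1's '{"ha-sync-mode":"automatic"}' policy key tells
> Rabbit to synchronise it without manual intervention.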
>
> Further, if clients do auto queue creation, it can get even more
> complex/diverse.  RabbitMQ makes messaging pretty simple, but architecting
> it correctly and truly understanding it is a much bigger issue.
>
> Jason
>
>
> On Fri, Jun 14, 2013 at 10:37 AM, Girish Gangadharan <girish at giri.sh> wrote:
>
>> Thanks for your reply. Please see my responses inline.
>>
>> On Fri, Jun 14, 2013 at 4:03 AM, Emile Joubert <emile at rabbitmq.com> wrote:
>>
>>>
>>> Hi,
>>>
>>> On 13/06/13 20:41, Girish Gangadharan wrote:
>>> > For example, if a publisher writes a message to an exchange that ends
>>> > up in a specific queue bound to that exchange, it immediately shows
>>> > up in the same queue on the other node.
>>>
>>> There is only one queue on one node in this case. The cluster offers a
>>> consistent view of all resources in the cluster and allows producers and
>>> consumers to connect to any node. Messages are transparently routed
>>> across the cluster.
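>>>
>>> A minimal sketch of this with the Python pika client (the host names
>>> "rabbit1"/"rabbit2" and the queue name "orders" are hypothetical): the
>>> queue lives on one node, but either node will accept the connection and
>>> route to it.
>>>
>>>   import pika
>>>
>>>   # Publish via one cluster node...
>>>   pub_conn = pika.BlockingConnection(pika.ConnectionParameters(host='rabbit1'))
>>>   pub_ch = pub_conn.channel()
>>>   pub_ch.queue_declare(queue='orders', durable=True)
>>>   pub_ch.basic_publish(exchange='', routing_key='orders', body='hello')
>>>   pub_conn.close()
>>>
>>>   # ...and consume the same message via the other node.
>>>   con_conn = pika.BlockingConnection(pika.ConnectionParameters(host='rabbit2'))
>>>   con_ch = con_conn.channel()
>>>   method, properties, body = con_ch.basic_get(queue='orders')
>>>   if method:
>>>       print(body)
>>>       con_ch.basic_ack(delivery_tag=method.delivery_tag)
>>>   con_conn.close()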
>>>
>>
>> Does this mean that the queue actually exists on only one node, the one
>> whose exchange the message was originally written to? And the fact that I
>> can see it in the admin console of the other node is just to provide a
>> consistent view across the cluster? So any action I take on that queue
>> from another node's admin console actually changes the queue on the
>> original node, not a copy of it on the other nodes? In fact, if I
>> understand you correctly, there IS no copy of the queue on the other
>> nodes? Did I get that right?
>>
>> I'm not sure I understand the consistent-view explanation either. What are
>> the main benefits of having multiple nodes in a cluster if they can't
>> really share anything? I would have thought mirrored queues should be an
>> automatic by-product of clustering, offering HA out of the box. Can you
>> explain why somebody would create a cluster of nodes and NOT have mirrored
>> queues set up?
>>
>>
>>>
>>> > But then why does the article on Mirrored Queues say that *you have
>>> > to explicitly set up mirrored queues by policy*?
>>>
>>> If a queue exists on only one node and that node crashes then the queue
>>> becomes unavailable. Mirrored queues provide resilience by maintaining
>>> copies of the queue that can take over in case a node crashes. See
>>> http://www.rabbitmq.com/ha.html
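>>>
>>> The example policy from that page (the "ha-all" name and "^ha\." pattern
>>> are just the documentation's sample values) is:
>>>
>>>   rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'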
>>>
>>> > So if one node goes down, the load balancer would automatically
>>> > redirect all the traffic just to the node that is up and running. And
>>> > the clients will not be affected. Would that work? Is that the
>>> > recommended approach?
>>>
>>> That is a common approach. Another is to provide clients with reconnect
>>> logic that allows them to select the alternative node when one node
>>> crashes.
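>>>
>>> A rough sketch of such reconnect logic with pika (the node names are
>>> hypothetical); it simply tries each cluster node in turn until one
>>> accepts the connection:
>>>
>>>   import pika
>>>   import pika.exceptions
>>>
>>>   NODES = ['rabbit1', 'rabbit2']
>>>
>>>   def connect():
>>>       for host in NODES:
>>>           try:
>>>               return pika.BlockingConnection(pika.ConnectionParameters(host=host))
>>>           except pika.exceptions.AMQPConnectionError:
>>>               continue  # this node is down or unreachable, try the next
>>>       raise RuntimeError('no cluster node reachable')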
>>>
>>> > I was thinking about using the shovel technique to push messages from
>>> > a logical broker sitting on a cluster on our local HA servers to a
>>> > different logical broker on the DR server. In this case, a copy of the
>>> > messages will keep getting pumped into the DR broker with no clients
>>> > processing it. What will happen to those messages?
>>>
>>> Nothing will happen to them. They will accumulate in the DR server
>>> unless you take action to remove them, or unless they expire.
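>>>
>>> If letting them expire is acceptable, a message TTL policy on the DR
>>> broker can do the clean-up for you. For example (the policy name and
>>> pattern are hypothetical), this drops messages older than 24 hours:
>>>
>>>   rabbitmqctl set_policy dr-expire ".*" '{"message-ttl":86400000}'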
>>>
>>> > Should I have a workflow to purge them every day, either manually or
>>> > via a script (after I make sure those messages aren't needed anymore
>>> > since the clients have already processed them from the main HA cluster)?
>>>
>>> That depends on the specifics of your disaster scenario, your messaging
>>> pattern and how you plan to switch over to your DR environment. Purging
>>> queues after a day sounds reasonable.
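>>>
>>> A minimal sketch of such a scheduled purge script with pika (the DR host
>>> name and the queue list are hypothetical):
>>>
>>>   import pika
>>>
>>>   DR_HOST = 'rabbit-dr'
>>>   QUEUES = ['orders', 'invoices']
>>>
>>>   conn = pika.BlockingConnection(pika.ConnectionParameters(host=DR_HOST))
>>>   ch = conn.channel()
>>>   for q in QUEUES:
>>>       ch.queue_purge(queue=q)  # discard whatever the DR copy has accumulated
>>>   conn.close()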
>>>
>>> > If the main HA cluster goes down and I have to bring the DR server into
>>> > action, what happens to the messages that did get processed already by
>>> > the clients? Should the clients have logic built in to ignore
>>> > duplicates
>>> > to handle this specific scenario?
>>>
>>> Performing deduplication in clients is probably the simplest solution,
>>> or you could mirror the effects of consumers removing messages from
>>> queues in your DR environment.
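>>>
>>> One way to do that deduplication on the consumer side (a sketch; it
>>> assumes publishers set a unique message_id property on every message, and
>>> a real implementation would track seen IDs in a persistent store):
>>>
>>>   import pika
>>>
>>>   seen = set()  # in-memory only; use a durable store in production
>>>
>>>   # Register this function as the basic_consume callback.
>>>   def handle(ch, method, properties, body):
>>>       if properties.message_id in seen:
>>>           ch.basic_ack(delivery_tag=method.delivery_tag)  # duplicate, drop it
>>>           return
>>>       seen.add(properties.message_id)
>>>       print(body)  # stand-in for the real message handler
>>>       ch.basic_ack(delivery_tag=method.delivery_tag)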
>>>
>>>
>>>
>>> -Emile
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Jason McIntosh
> http://mcintosh.poetshome.com/blog/
> 573-424-7612
>