[rabbitmq-discuss] Mirrored queues behind a load balancer
girish at giri.sh
Fri Jun 14 16:37:07 BST 2013
Thanks for your reply. Please see my responses inline below.
On Fri, Jun 14, 2013 at 4:03 AM, Emile Joubert <emile at rabbitmq.com> wrote:
> On 13/06/13 20:41, Girish Gangadharan wrote:
> > For example, if a publisher writes a message to an exchange that ends
> > up in a specific queue bound to that exchange, it immediately shows
> > up in the same queue on the other node.
> There is only one queue on one node in this case. The cluster offers a
> consistent view of all resources in the cluster and allows producers and
> consumers to connect to any node. Messages are transparently routed
> across the cluster.
Does this mean that a queue actually exists on only one node, the node where
it was originally declared? And the fact that I can see it in the admin
console of the other nodes is just to provide a consistent view across the
cluster? So any action I take on that queue from another node's admin console
actually changes the queue on the original node, not a copy of that queue on
the other nodes? In fact, if I understand you correctly, there IS no copy of
the queue on the other nodes? Did I get that right?
I'm not sure I understand the consistent-view explanation either. What are
the main benefits of having multiple nodes in a cluster if they can't really
share anything? I would think mirrored queues should be an automatic
by-product of clustering, thus offering HA out of the box. Can you explain
to me why somebody would create a cluster of nodes and NOT have mirrored
queues set up?
> > But then why does the article on Mirrored Queues say that *you have
> > to explicitly set up mirrored queues by policy*?
> If a queue exists on only one node and that node crashes then the queue
> becomes unavailable. Mirrored queues provide resilience by maintaining
> copies of the queue that can take over in case a node crashes. See
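For reference, mirroring is enabled with a policy rather than a per-queue flag at declaration time. A minimal sketch with `rabbitmqctl` (the policy name and the `^ha\.` queue-name pattern are illustrative, not prescribed):

```shell
# Mirror every queue whose name starts with "ha." across all cluster nodes.
# Run on any node in the cluster; policies apply cluster-wide.
rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'
```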
> > So if one node goes down, the load balancer would automatically
> > redirect all the traffic just to the node that is up and running. And
> > the clients will not be affected. Would that work? Is that the
> > recommended approach?
> That is a common approach. Another is to provide clients with reconnect
> logic that allows them to select an alternative node when one node fails.
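The reconnect logic mentioned above can be sketched as a failover loop over the cluster's nodes. This is a minimal sketch, not RabbitMQ's own API: the `connect` callable is injected so the loop stays broker-agnostic; with a real client library such as pika it would be something like `lambda host: pika.BlockingConnection(pika.ConnectionParameters(host))`.

```python
# Failover sketch: try each cluster node in turn until one accepts the
# connection. `connect` is a placeholder for the real client's connect call.

def connect_first_available(hosts, connect):
    """Return a connection to the first reachable host, else raise."""
    last_error = None
    for host in hosts:
        try:
            return connect(host)
        except ConnectionError as exc:  # broaden to the client's own exceptions
            last_error = exc
    raise RuntimeError("no cluster node reachable") from last_error
```

A load balancer hides this loop from the application; client-side reconnect logic keeps it in the application instead, at the cost of the clients knowing the node list.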
> > I was thinking about using the Shovel plugin to push messages from
> > a logical broker sitting on a cluster in our local HA servers to a
> > different logical broker to the DR server. In this case, a copy of
> > the messages will keep getting pumped into the DR with no clients
> > processing it. What will happen to those messages?
> Nothing will happen to them. They will accumulate in the DR server
> unless you take action to remove them, or unless they expire.
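The expiry option can be made automatic on the DR broker with a message-TTL policy, so shovelled copies are dropped after a retention window. A sketch (the policy name, the `^dr\.` queue pattern, and the 24-hour window are assumptions for illustration):

```shell
# On the DR broker: expire messages after 24 hours (TTL is in milliseconds).
rabbitmqctl set_policy dr-expire "^dr\." '{"message-ttl":86400000}'
```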
> > Should I have a workflow to purge them every day, manually or via a
> > script (after making sure those messages aren't needed anymore, since
> > the clients have already processed them from the main HA cluster)?
> That depends on the specifics of your disaster scenario, your messaging
> pattern and how you plan to switch over to your DR environment. Purging
> queues after a day sounds reasonable.
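If purging on a schedule, the management plugin's HTTP API exposes a purge endpoint that is easy to call from cron. A sketch (the host, credentials, and queue name `dr.orders` are placeholders; `%2f` is the URL-encoded default vhost "/"):

```shell
# Purge all messages from one queue on the DR broker via the management API.
curl -u guest:guest -X DELETE \
  http://dr-host:15672/api/queues/%2f/dr.orders/contents
```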
> > If the main HA cluster goes down and I have to bring the DR server into
> > action, what happens to the messages that did get processed already by
> > the clients? Should the clients have logic built in to ignore duplicates
> > to handle this specific scenario?
> Performing deduplication in clients is probably the simplest solution,
> or you could mirror the effects of consumers removing messages from
> queues in your DR environment.
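Client-side deduplication can be as simple as skipping any message whose ID has already been handled. A minimal sketch, assuming the publisher sets a unique `message_id` property on each message; the in-memory `seen` set stands in for what would be a durable store shared by consumers in practice:

```python
# Idempotent consumer sketch: process each unique message ID exactly once,
# so replays from the DR broker are ignored.

class DedupConsumer:
    def __init__(self, handler):
        self.handler = handler  # real work, invoked once per unique message
        self.seen = set()       # processed message IDs (durable in production)

    def on_message(self, message_id, body):
        if message_id in self.seen:
            return False        # duplicate (e.g. DR replay): skip
        self.seen.add(message_id)
        self.handler(body)
        return True
```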