[rabbitmq-discuss] Mirrored queues behind a load balancer
Emile Joubert
emile at rabbitmq.com
Fri Jun 14 10:03:37 BST 2013
Hi,
On 13/06/13 20:41, Girish Gangadharan wrote:
> For example, if a publisher writes a message to an exchange that ends
> up in a specific queue bound to that exchange, it immediately shows
> up in the same queue on the other node.
There is only one queue on one node in this case. The cluster offers a
consistent view of all resources in the cluster and allows producers and
consumers to connect to any node. Messages are transparently routed
across the cluster.
> But then why does the article on Mirrored Queues say that *you have
> to explicitly set up mirrored queues by policy*?
If a queue exists on only one node and that node crashes then the queue
becomes unavailable. Mirrored queues provide resilience by maintaining
copies of the queue that can take over if a node crashes. See
http://www.rabbitmq.com/ha.html
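For reference, mirroring is enabled declaratively with a policy. A sketch
(the policy name and queue-name pattern below are illustrative):

```shell
# Mirror every queue whose name begins with "ha." across all cluster nodes.
# "ha-all" is an arbitrary policy name; adjust the pattern to your queues.
rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'
```

Any queue matching the pattern is then mirrored automatically; no client
code changes are needed.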
> So if one node goes down, the load balancer would automatically
> redirect all the traffic just to the node that is up and running. And
> the clients will not be affected. Would that work? Is that the
> recommended approach?
That is a common approach. Another is to give clients reconnect logic
that lets them select an alternative node when one crashes.
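A minimal sketch of that reconnect logic, assuming a pluggable `connect`
callable standing in for your client library's connect call (e.g.
pika.BlockingConnection); the node names are illustrative:

```python
import time

NODES = ["rabbit1.example.com", "rabbit2.example.com"]

def connect_with_failover(connect, nodes=NODES, retries=3, delay=1.0):
    """Return the first successful connection, rotating through nodes."""
    last_error = None
    for _ in range(retries):
        for node in nodes:
            try:
                return connect(node)
            except ConnectionError as exc:
                last_error = exc  # node unreachable; try the next one
        time.sleep(delay)  # back off before the next round
    raise last_error
```

The same loop works whether the failure is a crashed node or a load
balancer draining connections during maintenance.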
> I was thinking about using Shoveling technique to push messages from
> a logical broker sitting on a cluster in our local HA servers to a
> different logical broker to the DR server. In this case, a copy of
> the messages will keep getting pumped into the DR with no clients
> processing it. What will happen to those messages?
Nothing will happen to them. They will accumulate in the DR server
unless you take action to remove them, or unless they expire.
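Expiry can be automated with a message-TTL policy on the DR broker, so
unconsumed copies are dropped after a fixed age. A sketch (the policy
name, pattern and 24-hour TTL are illustrative):

```shell
# Expire messages in DR queues after 24 hours (86400000 ms) so they
# do not accumulate indefinitely.
rabbitmqctl set_policy dr-expiry ".*" '{"message-ttl":86400000}' --apply-to queues
```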
> Should I have a workflow to go purge them every day manually or via a
> script (after I make sure those messages aren't needed anymore since
> the clients have already processed them from the main HA cluster).
That depends on the specifics of your disaster scenario, your messaging
pattern and how you plan to switch over to your DR environment. Purging
queues after a day sounds reasonable.
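If you opt for a scheduled purge rather than TTLs, one way is a cron job
using the rabbitmqadmin tool that ships with the management plugin (the
queue name is illustrative):

```shell
# Empty the DR copy of the queue; run daily from cron once you are
# confident the live cluster's consumers have processed the messages.
rabbitmqadmin purge queue name=dr-orders
```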
> If the main HA cluster goes down and I have to bring the DR server into
> action, what happens to the messages that did get processed already by
> the clients? Should the clients have logic built in to ignore duplicates
> to handle this specific scenario?
Performing deduplication in clients is probably the simplest solution,
or you could mirror the effects of consumers removing messages from
queues in your DR environment.
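A minimal consumer-side deduplication sketch, assuming publishers stamp
each message with a unique message_id and consumers skip ids they have
already seen. A real deployment would persist seen ids (with expiry)
rather than keep them in memory:

```python
seen_ids = set()

def handle_message(message_id, body, process):
    """Process a message unless its id was seen before; return True if processed."""
    if message_id in seen_ids:
        return False  # duplicate delivered after failover; ignore it
    seen_ids.add(message_id)
    process(body)
    return True
```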
-Emile