[rabbitmq-discuss] Mirrored queues behind a load balancer

Girish Gangadharan togirish at gmail.com
Thu Jun 13 20:41:04 BST 2013


Hello,

I'm trying to set up HA infrastructure for our new project that uses 
RabbitMQ and MassTransit.

I've read and re-read the article on HA and Mirrored Queues 
<http://www.rabbitmq.com/ha.html> several times, and I understand the 
theory behind it. 

However, I do have a few questions. I hope somebody can kindly answer them 
for me.

1. I just set up a cluster with 2 nodes. When I have a client talk to 
either of these nodes, the other node reflects those changes almost 
instantly. For example, if a publisher writes a message to an exchange that 
ends up in a specific queue bound to that exchange, it immediately shows up 
in the same queue on the other node. Likewise, if I manually create a queue 
on one node, it immediately shows up on the other node as well. 


No special policies have been set up, by the way. This is all just out of 
the box. 


And this is expected, I'd assume, since both of these nodes are in a 
cluster. But then why does the article on Mirrored Queues say that *you 
have to explicitly set up mirrored queues by policy*?


"Queues have mirroring enabled via policy<http://www.rabbitmq.com/parameters.html#policies>. 
Policies can change at any time; it is valid to create a non-mirrored 
queue, and then make it mirrored at some later point (and vice versa)."


Am I missing something obvious here? 
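
If it helps make the question concrete, here is roughly what I assume I 
would have to run to actually turn mirroring on, using the management 
HTTP API from Python (the host, credentials, policy name and queue-name 
pattern are just placeholders I made up; rabbitmqctl set_policy should do 
the same thing):

    import json
    import requests

    # Declare an "ha-all" policy on the default vhost (%2f): any queue whose
    # name starts with "ha." would be mirrored across all nodes in the cluster.
    # Host, credentials and the pattern are placeholders.
    response = requests.put(
        "http://rabbit-host:15672/api/policies/%2f/ha-all",
        auth=("guest", "guest"),
        data=json.dumps({"pattern": "^ha\\.", "definition": {"ha-mode": "all"}}),
        headers={"content-type": "application/json"},
    )
    response.raise_for_status()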


2. Let's say I have an internal DNS entry that points to a load balancer, 
which in turn sends traffic to either of the two nodes mentioned above 
(round-robin or whatever algorithm), and I ask the clients (publishers 
and consumers) to talk to that DNS entry whenever they want to write to 
or read from the queues. If one node goes down, the load balancer would 
then automatically redirect all the traffic to the node that is still up 
and running, and the clients would not be affected. Would that work? Is 
that the recommended approach? 
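
To illustrate what I mean on the client side, this is the behaviour I'm 
assuming (sketched in Python with pika just because it's quick; our real 
clients use MassTransit, and "rabbit.internal" and the queue name are 
placeholders):

    import time
    import pika

    # Clients only ever know the load-balanced DNS name. If the node they were
    # connected to dies, they reconnect and the balancer hands them whichever
    # node is still up.
    def connect(host="rabbit.internal", retries=5, delay=2):
        for _ in range(retries):
            try:
                return pika.BlockingConnection(pika.ConnectionParameters(host=host))
            except pika.exceptions.AMQPConnectionError:
                time.sleep(delay)
        raise RuntimeError("no broker reachable behind %s" % host)

    connection = connect()
    channel = connection.channel()
    channel.queue_declare(queue="orders", durable=True)  # placeholder queue
    channel.basic_publish(exchange="", routing_key="orders", body="hello")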

3. Finally, I have a question about DR. Our DR server is on a separate WAN 
in a different state. I was thinking about using the Shovel plugin to 
push messages from a logical broker sitting on a cluster on our local HA 
servers to a different logical broker on the DR server. In that case, a 
copy of the messages will keep getting pumped into the DR broker with no 
clients processing them. What will happen to those messages? Should I 
have a workflow to purge them every day, manually or via a script (after 
I make sure those messages aren't needed anymore because the clients have 
already processed them from the main HA cluster)? 
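
The kind of cleanup script I have in mind for the DR side would be 
something like this, against the management HTTP API (host, credentials 
and queue names are made-up placeholders):

    import requests

    # Nightly job: purge the shovelled copies on the DR broker once we're sure
    # the primary cluster's consumers have already processed them.
    DR_API = "http://dr-rabbit-host:15672/api"
    QUEUES_TO_PURGE = ["orders", "notifications"]  # placeholders

    for queue in QUEUES_TO_PURGE:
        response = requests.delete(
            "%s/queues/%%2f/%s/contents" % (DR_API, queue),
            auth=("guest", "guest"),
        )
        response.raise_for_status()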

If the main HA cluster goes down and I have to bring the DR server into 
action, what happens to the messages that have already been processed by 
the clients? Should the clients have logic built in to ignore duplicates, 
to handle this specific scenario?
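
For the duplicate question, this is roughly the consumer-side check I'm 
imagining (a Python/pika sketch; in our MassTransit consumers the idea 
would be the same, and the in-memory set would really need to be a shared 
store):

    import pika

    # Track message IDs we've already handled and ack-and-drop anything seen
    # before. Host and queue name are placeholders; this assumes publishers
    # set a message_id on every message.
    seen_ids = set()

    def handle(channel, method, properties, body):
        message_id = properties.message_id
        if message_id is not None and message_id in seen_ids:
            channel.basic_ack(delivery_tag=method.delivery_tag)  # duplicate
            return
        # ... actual processing would go here ...
        if message_id is not None:
            seen_ids.add(message_id)
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="dr-rabbit-host"))
    channel = connection.channel()
    channel.basic_consume(queue="orders", on_message_callback=handle)
    channel.start_consuming()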

Thanks in advance for your help,
Girish