Thanks for your reply Simon.<div><br></div><div>Is it possible to use MySQL for persistence of messages in RabbitMQ and then use the Master-Slave replication that MySQL provides to replicate the data at the remote site? Are there any performance numbers for this kind of configuration?</div>
<div><br></div><div>Thanks,</div><div>Terance.<br><br><div class="gmail_quote">On Fri, Oct 19, 2012 at 6:30 PM, Simon MacMullen <span dir="ltr"><<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On 19/10/12 12:48, Terance wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
We want to set up RabbitMQ such that the current state of the broker is<br>
backed up to a remote disaster recovery site in real time. We also want our<br>
clients (producers and consumers) to fail over to this remote broker in case<br>
the main broker becomes unreachable for some reason. I was looking at the<br>
Distributed broker approaches supported by RabbitMQ at<br>
<a href="http://www.rabbitmq.com/distributed.html" target="_blank">http://www.rabbitmq.com/distributed.html</a> but I'm not sure if what we are<br>
looking for can be achieved. Please let me know if you know how to do this.<br>
</blockquote>
<br></div>
Hi. Interesting question. Some thoughts:<br>
<br>
The CAP theorem tells us you're not going to get a transparent solution to this problem. If it's remote you need to be partition-tolerant, and you almost certainly want to be available too. So consistency has to go. That rules out clustering.<br>
<br>
So you could use the shovel or federation to get messages published from your main site to your recovery site. That's fairly easy (assuming your broker definitions are not too dynamic); what is harder is ensuring that messages are consumed from the recovery site in some sort of correlation with them being consumed at the main site. There's nothing built into Rabbit that can do that.<br>
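[For illustration, a shovel to the recovery site could be set up along these lines on a broker recent enough to support dynamic shovels; the shovel name, queue name, and DR hostname below are placeholders, not anything from this thread:]<br>
<br>
```shell
# Hypothetical sketch: enable the shovel plugin, then declare a dynamic
# shovel that moves messages from a queue on the local (main) broker to
# the same-named queue on the DR broker. "orders" and "dr.example.com"
# are placeholder names.
rabbitmq-plugins enable rabbitmq_shovel

rabbitmqctl set_parameter shovel dr-shovel \
  '{"src-uri": "amqp://", "src-queue": "orders",
    "dest-uri": "amqp://dr.example.com", "dest-queue": "orders"}'
```
<br>
[Note that the shovel re-publishes on the destination, so messages consumed and acknowledged at the main site are not removed at the DR site, which is exactly the correlation problem described above.]<br>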
<br>
There are some possibilities which may or may not work for you. You could federate / shovel into queues with a message TTL at the remote site to bound the amount of data you hold - but at failover you could have a lot of duplicate messages to work through, and if your main site queues back up enough, messages could be expired at the remote site when they have not been processed at the main site.<br>
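[A per-queue TTL of the kind described can be applied with a policy on brokers that support them (older brokers would instead set the <code>x-message-ttl</code> argument when declaring each queue); the policy name, queue-name pattern, and one-hour TTL here are illustrative choices only:]<br>
<br>
```shell
# Hypothetical sketch: bound how long messages live at the recovery
# site. Applies a 1-hour message TTL (3600000 ms) to every queue whose
# name starts with "dr." -- both values are arbitrary placeholders.
rabbitmqctl set_policy --apply-to queues dr-ttl "^dr\." \
  '{"message-ttl": 3600000}'
```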
<br>
Possibly the most plausible solution is to synchronise the mnesia directory from main to remote and only boot the remote broker on failover. This stands a decent chance of recovering your persistent messages in the event of failure; but keeping the filesystems reliably in sync is its own challenge. And we don't guarantee to recover everything after (the equivalent of) an unclean shutdown.<br>
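[A crude sketch of that directory-sync approach, with placeholder paths and hostname; a consistent point-in-time copy (e.g. a filesystem snapshot) would be safer than rsync over a live mnesia directory, for the unclean-shutdown reasons given above:]<br>
<br>
```shell
# Hypothetical sketch: mirror the main broker's mnesia directory to the
# DR host, where the broker stays stopped until failover. Paths and
# "dr-host" are placeholders.
rsync -a --delete /var/lib/rabbitmq/mnesia/ \
  dr-host:/var/lib/rabbitmq/mnesia/

# At failover, on the DR host only:
rabbitmq-server -detached
```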
<br>
Hmm, you've got me thinking about replication now...<br>
<br>
Cheers, Simon<span class="HOEnZb"><font color="#888888"><br>
<br>
-- <br>
Simon MacMullen<br>
RabbitMQ, VMware<br>
</font></span></blockquote></div><br></div>