<div>Hi Matthew,</div>
<div> </div>
<div>Thank you very much. That was very informative and has given me plenty to think about for now.</div>
<div> </div>
<div>I will be doing some prototyping of this over the next few weeks.</div>
<div> </div>
<div>Thanks again,</div>
<div> </div>
<div>Matt.<br><br></div>
<div class="gmail_quote">On Tue, Mar 2, 2010 at 11:13 PM, Matthew Sackman <span dir="ltr"><<a href="mailto:matthew@lshift.net">matthew@lshift.net</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">Hi Matthew,<br>
<div class="im"><br>On Tue, Mar 02, 2010 at 08:57:45AM +1030, Matthew Dunn wrote:<br>> *It's OK for a few messages to be dropped when a broker goes down<br>> *I need to load balance these messages<br>> *Availability is more important than dropping the ocassional message<br>
> *I would prefer messages only be proccessed once.<br><br></div>Is this in fact rather similar to what the standard MySQL master/slave<br>HA setup is - i.e. async from the master to the slave, but instant<br>availability of the slave when the master fails, and the slave is<br>
(pretty much) unusable up until the master fails?<br>
<div class="im"><br>> If the broker hosting the public queue crashes or is unavailable. Each<br>> consumer will connect to a new available broker and recreate the queue there<br>> with the same name.<br><br></div>
Your entire plan can be implemented using the new rabbitmq-shovel<br>plugin. I would recommend you try this out as it was written for exactly<br>these situations.<br>
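<br>For illustration, a static shovel definition in the broker's configuration file might look roughly like this - the shovel name, URIs and queue name below are made-up examples, and the exact option names should be checked against the shovel plugin documentation for your version:<br>
<pre>
%% Illustrative static shovel definition (rabbitmq.config on the broker
%% running the shovel). All names and URIs here are examples; consult the
%% shovel plugin documentation for the exact option names and syntax.
[{rabbitmq_shovel,
  [{shovels,
    [{my_shovel,
      [{sources,      [{broker, "amqp://guest:guest@source-host"}]},
       {destinations, [{broker, "amqp://guest:guest@destination-host"}]},
       {queue, <<"public_queue">>},   %% queue to consume from and republish to
       {reconnect_delay, 5}           %% seconds to wait before reconnecting
      ]}
    ]}
  ]}
].
</pre>
Each shovel consumes from the named queue on its source broker, republishes the messages to its destination broker, and reconnects automatically if either side goes away.<br>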
<div class="im"><br>> The question I have is, if I have created a public queue on a different<br>> broker, when the crashed broker comes back online will there be a problem<br>> with conflicting queues?<br><br></div>
No, provided you don't use clustering.<br>
<div class="im"><br>> *Broker A with Queue A Crashes<br>> *Consumers failover to Broker B<br>> *A consumer recreates Queue A on Broker B<br>> *Broker A is restored -- Will Broker A have problems starting because Queue<br>
> A has been recreated elsewhere?<br><br></div>I would suggest a slight variation. Brokers A and B are both up. The<br>shovel connects from your local "leaf" broker to both brokers, creating<br>queues and consuming from both queues. The publishers can then publish<br>
to either broker and the messages will get through to the consumers, who<br>are consuming via their local leaf brokers.<br><br>In the event of failure, hopefully the queues on A and B will be pretty<br>short, if not empty, so you shouldn't lose much. The shovels will<br>
continue, and as soon as the failed node comes back up, will reconnect<br>to it, recreating the queues and consuming as necessary. The<br>publishers simply need some logic to switch to the other broker when<br>they find a node has failed.<br>
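<br>As a sketch of that publisher-side logic, something like the following would do - the Python pika client is used purely for illustration, and the host names, queue name and retry policy are assumptions rather than part of the plan above:<br>
<pre>
# Sketch of a publisher that fails over between two brokers.
# Broker host names, queue name and credentials are illustrative only.
import pika
from pika.exceptions import AMQPConnectionError

BROKERS = ["broker-a.example.com", "broker-b.example.com"]

def publish(body, queue="public_queue"):
    """Try each broker in turn; return the host that accepted the message."""
    last_error = None
    for host in BROKERS:
        try:
            connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
            channel = connection.channel()
            channel.queue_declare(queue=queue, durable=True)
            channel.basic_publish(exchange="", routing_key=queue, body=body)
            connection.close()
            return host
        except AMQPConnectionError as exc:
            last_error = exc  # broker unreachable - try the next one
    raise RuntimeError("no broker available") from last_error

publish("hello")
</pre>
A real publisher would normally keep one connection open and only fail over to the other broker when a connection or publish attempt fails, rather than reconnecting for every message.<br>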
<br>Is that sufficient for your needs?<br><br>Matthew<br></blockquote></div><br>