<br><div class="gmail_quote">On Sat, Jun 30, 2012 at 12:42 AM, Eric <span dir="ltr"><<a href="mailto:ejarendt@gmail.com" target="_blank">ejarendt@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I thought I sent along another message, but apparently it didn't go<br>
through. Viewing this group through the new Google Groups UI is<br>
confusing.<br>
<br></blockquote><div><br></div><div>I find it's better to avoid the Google Group, and just use the mailing list directly.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I misunderstood the federation configuration; I see now that you<br>
declare a backing type when you declare the federated exchange, which<br>
makes sense. I thought the backing type was based on the upstream<br>
exchange.<br>
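[Editorial aside: that declaration looks roughly like the sketch below with a Python-style AMQP client. All names are hypothetical, the `channel` object is generic (pika's channel has a matching `exchange_declare`), and the `x-federation` exchange type with `upstream-set`/`type` arguments reflects the 2.x-era federation plugin — worth checking against the federation docs for your version.]

```python
def declare_federated_exchange(channel, name, backing_type, upstream_set):
    """Declare a federated exchange (RabbitMQ 2.x-era federation plugin).

    The exchange itself is declared with type 'x-federation'; the real
    backing type ('topic', 'fanout', ...) is passed in the arguments
    table, which is why the downstream broker has to know it up front.
    """
    arguments = {
        "upstream-set": upstream_set,  # must match a set named in rabbitmq.config
        "type": backing_type,          # the actual routing type, e.g. "topic"
    }
    channel.exchange_declare(
        exchange=name,
        exchange_type="x-federation",
        durable=True,
        arguments=arguments,
    )
    return arguments
```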
<br>
Going back to shovel for a second... if I have two shovels running in<br>
a cluster, one on each broker, and they connect to their host broker<br>
and consume from the same mirrored queue (the queue is 'shared' across<br>
the cluster) will they simply behave like two consumers on the same<br>
queue, and basically receive messages round-robin? Because that would<br>
work fine for my scenario. They'll share the workload unless one of<br>
the brokers in the cluster dies, after which one shovel would be doing<br>
all the work.<br>
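[Editorial aside: the round-robin dispatch described here can be simulated without a broker. A toy sketch with hypothetical names — real AMQP delivery is only approximately round-robin and also depends on consumer prefetch:]

```python
from itertools import cycle

def dispatch(messages, consumers):
    """Deliver messages round-robin, as a broker does for
    multiple consumers attached to the same queue."""
    received = {c: [] for c in consumers}
    rotation = cycle(consumers)
    for msg in messages:
        received[next(rotation)].append(msg)
    return received

# Two shovels on one mirrored queue share the workload...
both = dispatch(["m1", "m2", "m3", "m4"], ["shovel-A", "shovel-B"])
# ...and if one host broker dies, the surviving shovel gets everything.
alone = dispatch(["m5", "m6"], ["shovel-A"])
```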
<div class="HOEnZb"><div class="h5"><br></div></div></blockquote><div><br></div><div>Correct, the shovels will behave as you outlined when consuming.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5">
On Jun 29, 9:00 am, Eric <<a href="mailto:ejare...@gmail.com">ejare...@gmail.com</a>> wrote:<br>
> Thanks for the reply.<br>
><br>
> The odd thing about the federation plugin is that the downstream broker,<br>
> which is the 'master' conceptually in my case, has to declare its exchange<br>
> type as 'federation', which means the actual type is based on the upstream<br>
> 'slave' exchange. That feels strange to me, because I don't want the<br>
> master to really care about the upstream exchange - it's sort of optional.<br>
> If the upstream broker is alive that's great, and I want it to forward<br>
> messages along to the master. If it's not, or isn't present when the<br>
> master starts, I don't want that to be a problem.<br>
><br>
> Maybe I'm not understanding federation correctly? It just struck me as odd<br>
> that the master has to go declaring all of the upstream sources it expects,<br>
> and the other way (with shovel) seemed more natural.<br>
><br>
> I understand that the sources and destination fields accept a list for the<br>
> purpose of failover. I could configure a single shovel that consumes from<br>
> either broker in the 'slave' cluster and publishes to either broker on the<br>
> master cluster, but I'm worried about the shovel's host broker failing and<br>
> taking the shovel down.<br>
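[Editorial aside: a single shovel with failover lists would look something like the rabbitmq.config fragment below. The URIs and names are hypothetical, and the exact schema should be checked against the shovel plugin docs for your release — this is only a sketch of the `sources`/`destinations` broker lists being used for failover:]

```erlang
%% Sketch of a static shovel with failover lists (2.x-era shovel plugin).
[{rabbitmq_shovel,
  [{shovels,
    [{slave_to_master,
      [{sources,      [{brokers, ["amqp://slave-a",  "amqp://slave-b"]}]},
       {destinations, [{brokers, ["amqp://master-a", "amqp://master-b"]}]},
       {queue, <<"work">>},
       {reconnect_delay, 5}]}    %% seconds before trying the next broker
    ]}]}].
```

As noted above, the shovel tries these in order rather than using them all at once; it does not, however, protect against the shovel's own host broker going down.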
><br>
> On Friday, June 29, 2012 12:31:41 AM UTC-7, Brendan Hay wrote:<br>
><br>
> > Hi Eric,<br>
><br>
> > FYI: The 'sources' and 'destinations' configuration fields both accept a<br>
> > list. The shovel plugin doesn't actually pull/push to/from all of these<br>
> > simultaneously, it uses them as a form of simple failover - if a connection<br>
> > fails, it uses the next one in the list.<br>
><br>
> > It sounds like for your scenario (and when clustering in general), it would be<br>
> > easier to use the federation plugin (<br>
> ><a href="http://www.rabbitmq.com/federation.html" target="_blank">http://www.rabbitmq.com/federation.html</a>). When you declare a federated<br>
> > exchange on the downstream/master cluster, the plugin auto-magically<br>
> > declares queues (mirrored if configured) on the upstream/AWS cluster .. you<br>
> > would then bind a mirrored queue to the federated exchange on the<br>
> > downstream/master cluster to cause messages to be routed across the link to<br>
> > that queue.<br>
><br>
> > The plugin will then stay connected even if one of your nodes on either<br>
> > cluster goes down .. if the whole downstream/master cluster goes away/down,<br>
> > then messages will pile up in the upstream/AWS queues, and be transferred<br>
> > once the link is re-established.<br>
><br>
> > Does that solve your use-case?<br>
><br>
> > Regards,<br>
> > Brendan<br>
</div></div><div class="HOEnZb"><div class="h5">_______________________________________________<br>
rabbitmq-discuss mailing list<br>
<a href="mailto:rabbitmq-discuss@lists.rabbitmq.com">rabbitmq-discuss@lists.rabbitmq.com</a><br>
<a href="https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
</div></div></blockquote></div><br>