[rabbitmq-discuss] Shovel configuration for a clustered broker with mirrored high-availability queues

Eric ejarendt at gmail.com
Fri Jun 29 17:00:54 BST 2012


Thanks for the reply.

The odd thing about the federation plugin is that the downstream broker, 
which is the 'master' conceptually in my case, has to declare its exchange 
type as 'federation', which means the actual type is based on the upstream 
'slave' exchange.  That feels strange to me, because I don't want the 
master to really care about the upstream exchange - it's sort of optional. 
 If the upstream broker is alive that's great, and I want it to forward 
messages along to the master.  If it's not, or isn't present when the 
master starts, I don't want that to be a problem.

Maybe I'm not understanding federation correctly?  It just struck me as odd 
that the master has to go declaring all of the upstream sources it expects, 
and the other way (with shovel) seemed more natural.
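For anyone following along, this is roughly what the federation setup I'm describing looks like in the 2.x-era static config (hostnames and exchange names are placeholders, and the exact field names may differ slightly between plugin releases -- treat this as a sketch, not a verified config):

```erlang
%% rabbitmq.config on the downstream/'master' cluster (sketch).
%% The downstream exchange is declared as federated and names its
%% upstream sources up front, which is the part that feels backwards to me.
{rabbitmq_federation,
 [{exchanges, [[{exchange,     "events"},
                {virtual_host, "/"},
                {type,         "topic"},        %% actual type taken from upstream
                {durable,      true},
                {upstream_set, "slave-cluster"}]]},
  {upstream_sets, [{"slave-cluster",
                    [[{connection, "slave-conn"}]]}]},
  {connections, [{"slave-conn",
                  [{host, "slave-broker.example.com"}]}]}]}
```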

I understand that the 'sources' and 'destinations' fields accept a list for the 
purpose of failover.  I could configure a single shovel that consumes from 
either broker in the 'slave' cluster and publishes to either broker in the 
master cluster, but I'm worried about the shovel's host broker failing and 
taking the shovel down with it.
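Concretely, the failover-list shovel I have in mind would look something like this static shovel config (URIs, queue and exchange names are placeholders for my setup -- a sketch of the shape, not a tested config):

```erlang
%% rabbitmq.config on whichever broker hosts the shovel (sketch).
%% 'brokers' lists are tried in order on connection failure, per the
%% failover behaviour Brendan describes below.
{rabbitmq_shovel,
 [{shovels,
   [{slave_to_master,
     [{sources,
       [{brokers, ["amqp://user:pass@slave-1.example.com/vhost",
                   "amqp://user:pass@slave-2.example.com/vhost"]},
        {declarations, [{'queue.declare',
                         [{queue, <<"from-slave">>}, durable]}]}]},
      {destinations,
       [{brokers, ["amqp://user:pass@master-1.example.com/vhost",
                   "amqp://user:pass@master-2.example.com/vhost"]}]},
      {queue, <<"from-slave">>},
      {ack_mode, on_confirm},
      {reconnect_delay, 5}]}]}]}
```

The worry is that this whole definition lives on one host broker: the lists protect against the *remote* ends failing, but not against the broker running the shovel going down.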

On Friday, June 29, 2012 12:31:41 AM UTC-7, Brendan Hay wrote:
>
> Hi Eric,
>
> FYI: The 'sources' and 'destinations' configuration fields both accept a 
> list. The shovel plugin doesn't actually pull/push to/from all of these 
> simultaneously, it uses them as a form of simple failover - if a connection 
> fails, it uses the next one in the list.
>
> It sounds like for your scenario (when clustering in general), it would be 
> easier to use the federation plugin (
> http://www.rabbitmq.com/federation.html). When you declare a federated 
> exchange on the downstream/master cluster, the plugin auto-magically 
> declares queues (mirrored if configured) on the upstream/AWS cluster .. you 
> would then bind a mirrored queue to the federated exchange on the 
> downstream/master cluster to cause messages to be routed across the link to 
> that queue.
>
> The plugin will then stay connected even if one of your nodes on either 
> cluster goes down .. if the whole downstream/master cluster goes away/down, 
> then messages will pile up in the upstream/AWS queues, and be transferred 
> once the link is re-established.
>
> Does that solve your use-case?
>
> Regards,
> Brendan
>
