[rabbitmq-discuss] RabbitMQ Clustering with Federation
Robert Parker
reparker23@gmail.com
Wed Jul 31 17:35:56 BST 2013
I'm having some trouble getting the right architecture configured to
support RabbitMQ federation in a clustered, highly available environment.
Our basic setup is four clusters of two nodes each, separated by a WAN.
"home" is the "home" cluster and "remote1", "remote2" and "remote3" are
remote clusters. There are a lot of queues being created and published to
via the API identically on all clusters but I only want to federate ONE of
the queues, which is called "EventListener" back to the "home" cluster from
all the "remote" clusters. I want messages published to this queue on each
remote cluster to be federated to the home cluster to be consumed by a
separate event listener service that only lives at the "home" site.
Each cluster is configured with:
# run on node 1
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@node2
rabbitmqctl start_app
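(For completeness: within each cluster I'd also expect to need a
mirrored-queue policy so queues survive a node failure. This is just a
sketch based on the 3.x HA docs -- the "ha-all" policy name is
illustrative, and the trailing 0 is the priority, as in the federation
policy below:)

```shell
# run once per cluster, on either node: mirror all queues across both nodes
# (assumption: RabbitMQ >= 3.0 policy-based HA; "ha-all" is an illustrative name)
rabbitmqctl set_policy ha-all "" '{"ha-mode":"all"}' 0
```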
We have configured three hardware load balancer VIPs that balance
connections on port 5672 across both nodes in each cluster. The
federation upstreams point at these VIPs, and our application also uses
them for queue/exchange creation and for publishing messages. Here's our
federation config script that's run on node1 of the home cluster only:
# run on home cluster node1
rabbitmq-plugins enable rabbitmq_federation
rabbitmq-plugins enable rabbitmq_federation_management
rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart
rabbitmqctl set_parameter federation-upstream remote1-upstream
'{"uri":"amqp://remote1-vip","expires":3600000}'
rabbitmqctl set_parameter federation-upstream remote2-upstream
'{"uri":"amqp://remote2-vip","expires":3600000}'
rabbitmqctl set_parameter federation-upstream remote3-upstream
'{"uri":"amqp://remote3-vip","expires":3600000}'
rabbitmqctl set_policy federate-EventListener "^EventListener$"
'{"federation-upstream-set":"all"}' 0
rabbitmqctl list_exchanges name policy | grep federate
rabbitmqctl eval 'rabbit_federation_status:status().'
However, I am noticing that the "rabbitmqctl eval
'rabbit_federation_status:status().'" command only shows the federation
links running on node2 of the home cluster, despite federation having been
configured on node1:
[root@node1 ~]# rabbitmqctl eval 'rabbit_federation_status:status().'
[]
...done.
[root@node2 ~]# rabbitmqctl eval 'rabbit_federation_status:status().'
[[{exchange,<<"EventListener">>},
{vhost,<<"/">>},
{connection,<<"remote3-upstream">>},
{upstream_exchange,<<"EventListener">>},
{status,normal},
{timestamp,{{2013,7,31},{11,16,19}}}],
[{exchange,<<"EventListener">>},
{vhost,<<"/">>},
{connection,<<"remote2-upstream">>},
{upstream_exchange,<<"EventListener">>},
{status,normal},
{timestamp,{{2013,7,31},{11,16,19}}}],
[{exchange,<<"EventListener">>},
{vhost,<<"/">>},
{connection,<<"remote1-upstream">>},
{upstream_exchange,<<"EventListener">>},
{status,normal},
{timestamp,{{2013,7,31},{11,16,19}}}]]
...done.
I'm also noticing that messages published via the load balancer VIPs to
the queues on the remote clusters are not showing up in the
"EventListener" queue on the home cluster.
Is this not the right way to set this up? Is it possible to configure this
without using an external load balancer and still achieve high
availability? If I federate to a single node of a cluster directly, what
happens if that node goes down? Do I have to configure redundant
federation from every node of the home cluster to every node of each
remote cluster? And if my application publishes to a single configured
node rather than a load balancer VIP, then when that node goes down it
won't help me that the other node is still available.
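One thing I came across in the federation docs is that the upstream "uri"
field can apparently take a list of URIs, with the link failing over
between them. I haven't verified this on our version -- would something
like this give HA without a load balancer?

```shell
# untested sketch: point the upstream at both nodes of remote1 directly,
# letting federation fail over between them instead of going through a VIP
# (assumes the "uri" parameter accepts a JSON array; node names are illustrative)
rabbitmqctl set_parameter federation-upstream remote1-upstream \
  '{"uri":["amqp://remote1-node1","amqp://remote1-node2"],"expires":3600000}'
```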
What's the best practice, if any, for this kind of config?