[rabbitmq-discuss] Fwd: Client-connection failover workarounds (ruby)

Matthew Sackman matthew at lshift.net
Mon Feb 8 15:49:07 GMT 2010


On Mon, Feb 08, 2010 at 09:22:32AM -0600, Peter Fitzgibbons wrote:
> > Nope. To use your terminology, if you have rabbitP, which is local to
> > producerP, then you then have the option to run the shovel inside
> > rabbitP. This is configured to know about rabbitA, rabbitB… and so if
> > any of those rabbits go down, the shovel will automatically try to
> > reconnect, trying all the rabbits it knows about until it finds one that
> > works. Whilst the shovel is trying to find a rabbit to connect to,
> > messages queue up inside rabbitP, from producerP, in the normal way -
> > the interruption is totally invisible to producerP.
> >
> How is this different from configuring a rabbit cluster that includes my
> "local" node?
> Matthias warned me that this is a degenerate case, where the cluster will
> struggle to support 10's (100's in a farm) of clustered servers.

Clustering Rabbit achieves scalability, not reliability. In a cluster,
queues are not replicated; only the metadata associated with queues,
exchanges and bindings is shared across all nodes of the cluster.

It's rather tricky, at the moment, to do any sort of high availability,
active/passive fail-over type stuff with clusters.

> To extend my understanding (and offer an opportunity to correct my
> understanding),
> Shovel is preferred because the local rabbitP is not clustered, does not
> have to be "aware" of the cluster, and shovel has the proper failover
> routines to handle it when times get tough.

Yes. In general, the lower the coupling and the higher the independence,
the better, from the pov of resilience and availability.
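For concreteness, a static shovel on rabbitP can be given several broker
URIs for its destination, and it will try them in turn whenever a
connection fails. The stanza below is only an illustrative sketch - the
shovel name, hosts and queue name are invented, and the exact keys should
be checked against the shovel documentation for your version:

```erlang
%% Illustrative sketch only -- my_shovel, the hosts and the queue are
%% invented names. In rabbitmq.config on rabbitP: a static shovel that
%% drains a local queue and republishes to whichever central broker it
%% can currently reach.
[{rabbitmq_shovel,
  [{shovels,
    [{my_shovel,
      [{sources,      [{broker,  "amqp://localhost"}]},
       {destinations, [{brokers, ["amqp://rabbitA",
                                  "amqp://rabbitB"]}]},  %% tried in turn on failure
       {queue, <<"outbound">>},
       {reconnect_delay, 5}                              %% seconds between attempts
      ]}]}]}].
```

While the shovel is cycling through rabbitA and rabbitB, messages simply
accumulate in the local "outbound" queue, which is what makes the outage
invisible to producerP.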

> What about consumerA, consumerB, etc?  Are these theoretically expected to
> be "protected" from harm?  I think this question is pointing toward my
> potential misunderstanding of how consumer push or pull is handled.

Err, well, I can't tell from your past emails how you expect the consumers
to be connected. Again, you could run a local rabbit next to each of the
consumers, again with a shovel, if you wanted to. Or wire the consumers
themselves with knowledge of which central servers to connect to. Or stick
a TCP load balancer between the consumers and the central servers. It all
depends on how you provision the central servers - if it's known that
certain messages will be sent to a particular server (or set of servers,
given an HA setup) then it's sufficient to wire the consumers to
continually try to connect to those servers. If you need more dynamic
control over where the consumers connect to find the messages they're
interested in, then dynamic DNS, or a TCP load balancer, or even things
like IP- or MAC-address stealing may be more appropriate.
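If you do wire the consumers directly to a list of central servers, the
reconnect logic amounts to cycling through the list until one attempt
succeeds, much as the shovel does internally. A minimal Ruby sketch - the
connect_with_failover name and the broker list are invented for
illustration; a real consumer would hand the yielded broker to its AMQP
client's connect call:

```ruby
# Illustrative sketch: try each broker in turn; on failure move to the
# next one, and sleep between full passes over the list.
# (connect_with_failover is a made-up helper, not part of any gem.)
def connect_with_failover(brokers, retry_delay: 1, max_passes: 3)
  max_passes.times do
    brokers.each do |broker|
      begin
        return yield(broker)   # attempt to connect; caller supplies the logic
      rescue StandardError
        next                   # this broker is down; try the next one
      end
    end
    sleep retry_delay          # a full pass failed; wait before retrying
  end
  raise "no broker reachable after #{max_passes} passes"
end
```

A consumer would use it along the lines of
`conn = connect_with_failover(%w[rabbitA rabbitB]) { |b| MyClient.connect(b) }`,
so the retry policy lives in one place rather than in every consumer.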

Matthew



