[rabbitmq-discuss] High Availability and Load Balancers
mcintoshj at gmail.com
Fri Apr 25 14:30:48 BST 2014
Ditto on Ron. Some things to be aware of:
Most load balancers run health monitors that check node status by half-opening
a connection to the rabbit ports, and rabbit logs each of those probes as an
info- or warning-level message, which tends to flood the logs. Also, watch your
heartbeats. We hit an issue initially with our F5's where a short TCP idle
timeout was shorter than rabbit's heartbeat interval, so we were getting
disconnects every 5 minutes. But this config has worked well for us for
at least two years now :) We front both clusters and individual non-clustered
nodes with it.
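The disconnect issue above comes down to one rule: keep the heartbeat interval comfortably below the load balancer's TCP idle timeout. A minimal sketch of that sizing rule, where `choose_heartbeat` is a hypothetical helper (not any RabbitMQ API):

```python
# Sketch: pick a heartbeat interval that keeps a load balancer's idle timeout
# from silently dropping AMQP connections. choose_heartbeat is illustrative,
# not part of any RabbitMQ or F5 API.

def choose_heartbeat(lb_idle_timeout_s, safety_factor=0.5):
    """Return a heartbeat interval comfortably below the LB idle timeout.

    RabbitMQ sends a heartbeat frame roughly every interval/2 seconds, so
    any interval below the idle timeout keeps traffic flowing on an
    otherwise idle connection.
    """
    if lb_idle_timeout_s <= 0:
        raise ValueError("idle timeout must be positive")
    return int(lb_idle_timeout_s * safety_factor)

# Example: an F5 VIP with a 300 s (5 min) TCP idle timeout
heartbeat = choose_heartbeat(300)  # 150 s: frames flow well before cutoff
```

The same arithmetic applies whichever client library sets the heartbeat; the point is only that the two timeouts must be chosen together.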
On Thu, Apr 24, 2014 at 2:48 PM, Ron Cordell <ron.cordell at gmail.com> wrote:
> We have a similar setup with our F5 LB - exposing a VIP to the application
> and checking the health of the nodes from the F5 by querying the api.
> On Thu, Apr 24, 2014 at 11:52 AM, Arun Rao <arunrao.seattle at gmail.com>wrote:
>> I am using Load Balancer (F5). I am using Producer VIP separately from
>> Consumer VIP. This gives me a lot of flexibility when doing maintenance.
>> For the Producer VIP, I am using an API health check against http://<hostname>:15672/api/aliveness-test/<vhost>
>> and looking for a "status":"ok" result.
>> For consumer VIP, I am just using TCP Half-open.
>> On Thu, Apr 24, 2014 at 10:01 AM, vish.ramachandran <
>> vish.ramachandran at gmail.com> wrote:
>>> Hello RabbitMq Team,
>>> We are caught in a decision point on whether to choose a load balancer in
>>> front of cluster members or to choose a setup where the list of cluster
>>> members is baked into client configuration.
>>> Data points:
>>> 1. We are using clustered RMQs mainly for high availability. Our queues
>>> are set up for HA in this setup.
>>> 2. Scalability is not a concern yet. We don't expect to add new members to
>>> the cluster dynamically. Dynamic DNS is a possibility for recovering any
>>> failed nodes.
>>> 3. We are using libraries like Spring AMQP and SStone that provide for
>>> automatic reconnect/failover. This takes care of consumption. We also plan
>>> to design clients to retry publishing upon failure.
>>> Questions:
>>> 1. We would like to detect failed connections quickly on the client side.
>>> We wonder whether TCP load balancers do a good job of detecting failed
>>> connections, or sit on a bad connection until a real problem is seen.
>>> 2. If clients deal with the cluster members directly, is it any better? Does
>>> the RMQ client library (like sstone or Spring AMQP) do a better job of
>>> detecting failures quickly than a load balancer does?
>>> 3. Can the actual consumers and publishers (clients of the RMQ libraries)
>>> take any special action to detect and recover from failures quickly?
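On point 3, the retry-on-publish-failure idea mentioned in the data points can be sketched as a retry loop with exponential backoff. Here `publish` stands in for whatever the client library (Spring AMQP, SStone, etc.) exposes; the helper and its signature are hypothetical:

```python
# Sketch of client-side publish retry: attempt the publish a few times with
# exponential backoff before giving up. `publish` is a placeholder for the
# real library's publish call; this is not any specific client's API.
import time

def publish_with_retry(publish, message, attempts=3, base_delay=0.5):
    """Call publish(message); on failure, back off and retry."""
    last_error = None
    for attempt in range(attempts):
        try:
            publish(message)
            return True
        except Exception as exc:  # a real client would catch connection errors only
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # 0.5 s, 1 s, 2 s, ...
    raise last_error
```

Pairing a loop like this with the library's automatic reconnect covers the publish side the same way reconnect/failover already covers consumption.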
>>> rabbitmq-discuss mailing list
>>> rabbitmq-discuss at lists.rabbitmq.com