[rabbitmq-discuss] High Availability and Load Balancers

Jason McIntosh mcintoshj at gmail.com
Fri Apr 25 14:30:48 BST 2014


Ditto on Ron.  Some things to be aware of:

Most load balancers have monitors that check node status.  These typically
half-open a connection to the rabbit ports, so rabbit will log each probe as
an info- or warning-level message, which tends to flood the logs.  Also,
watch your heartbeats.  We hit an issue initially with our F5s where the
load balancer had a short TCP/IP idle timeout and rabbit's heartbeat interval
was longer, so we were getting disconnects every 5 minutes.  But this config
has worked well for us for at least two years now :)  We back clusters with
this, and individual non-clustered nodes as well.
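
(For anyone wanting the client-side half of that spelled out, here is a
minimal sketch using the stock RabbitMQ Java client.  The VIP hostname and
the 30-second value are made-up examples; the only real point is that the
requested heartbeat needs to be shorter than the load balancer's idle
timeout, not longer.)

    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class HeartbeatExample {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            // Hypothetical VIP the load balancer exposes for the cluster.
            factory.setHost("rabbit-vip.example.com");
            factory.setPort(5672);
            // Request a 30s heartbeat.  With traffic flowing at least every
            // 30s, a load-balancer idle timeout of, say, 300s never fires on
            // a healthy connection.  The failure described above is the
            // reverse: an LB timeout shorter than the heartbeat interval.
            factory.setRequestedHeartbeat(30);
            Connection conn = factory.newConnection();
            System.out.println("Negotiated heartbeat: " + conn.getHeartbeat() + "s");
            conn.close();
        }
    }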

Jason


On Thu, Apr 24, 2014 at 2:48 PM, Ron Cordell <ron.cordell at gmail.com> wrote:

> We have a similar setup with our F5 LB - exposing a VIP to the application
> and checking the health of the nodes from the F5 by querying the API.
>
> Cheers,
>
> -ronc
>
>
> On Thu, Apr 24, 2014 at 11:52 AM, Arun Rao <arunrao.seattle at gmail.com> wrote:
>
>> I am using a load balancer (F5), with a Producer VIP separate from the
>> Consumer VIP. This gives me a lot of flexibility when doing maintenance.
>> For the Producer VIP, I am using an API health check against
>> http://<hostname>:15672/api/aliveness-test/<vhost> and looking for a
>> "status":"ok" result.
>>
>> For the Consumer VIP, I am just using a TCP half-open check.
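
(A rough sketch of what that producer-VIP aliveness probe boils down to, done
from plain Java rather than from the F5 monitor config.  The hostname, the
guest/guest credentials and the %2F-encoded default vhost below are
assumptions for illustration, not necessarily what the monitor above sends.)

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class AlivenessCheck {
        public static void main(String[] args) throws Exception {
            // Default management-plugin port; "%2F" is the URL-encoded default vhost "/".
            URL url = new URL(
                "http://rabbit-node1.example.com:15672/api/aliveness-test/%2F");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            String auth = Base64.getEncoder()
                .encodeToString("guest:guest".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);

            // The aliveness test declares a test queue on the node, then
            // publishes and consumes a message through it; a healthy node
            // answers 200 with {"status":"ok"}.  Anything else should mark
            // the node down in the pool.
            boolean healthy = false;
            if (conn.getResponseCode() == 200) {
                StringBuilder body = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                }
                healthy = body.toString().contains("\"status\":\"ok\"");
            }
            System.out.println(healthy ? "node healthy" : "node unhealthy");
        }
    }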
>>
>> HTH,
>> -arun.
>>
>>
>> On Thu, Apr 24, 2014 at 10:01 AM, vish.ramachandran <
>> vish.ramachandran at gmail.com> wrote:
>>
>>> Hello RabbitMQ Team,
>>>
>>> We are at a decision point on whether to put a load balancer in front of
>>> the cluster members or to bake the list of cluster members into the
>>> client configuration.
>>>
>>> Data points:
>>>
>>> 1. We are using clustered RabbitMQ mainly for high availability. Our
>>> queues are set up for HA in this setup.
>>>
>>> 2. Scalability is not a concern yet. We don't expect to add new members
>>> to the cluster dynamically. Dynamic DNS is a possibility for recovering
>>> any failed nodes.
>>>
>>> 3. We are using libraries like Spring AMQP and SStone that provide
>>> automatic reconnect/failover. This takes care of consumption. We also
>>> plan to design clients to retry publishing upon failure.
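
(On point 3 above, a rough sketch of what publish-retry can look like with
Spring AMQP's RabbitTemplate plus spring-retry.  The hostnames, exchange and
routing key are invented for illustration, and exact setter names can vary
between versions, so treat this as the shape of the idea rather than a recipe.)

    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.retry.backoff.ExponentialBackOffPolicy;
    import org.springframework.retry.policy.SimpleRetryPolicy;
    import org.springframework.retry.support.RetryTemplate;

    public class RetryingPublisher {
        public static void main(String[] args) {
            // List the cluster members directly (hypothetical hostnames); the
            // client tries the next address when the current node is unreachable.
            CachingConnectionFactory cf = new CachingConnectionFactory();
            cf.setAddresses("rabbit1.example.com:5672,rabbit2.example.com:5672");

            // Retry failed publishes a few times with exponential backoff
            // instead of surfacing the first connection hiccup to the caller.
            RetryTemplate retry = new RetryTemplate();
            SimpleRetryPolicy policy = new SimpleRetryPolicy();
            policy.setMaxAttempts(3);
            retry.setRetryPolicy(policy);
            ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
            backOff.setInitialInterval(500);
            backOff.setMultiplier(2.0);
            backOff.setMaxInterval(5000);
            retry.setBackOffPolicy(backOff);

            RabbitTemplate template = new RabbitTemplate(cf);
            template.setRetryTemplate(retry);

            // Hypothetical exchange and routing key, only to show the call shape.
            template.convertAndSend("orders.exchange", "orders.created", "hello");
            cf.destroy();
        }
    }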
>>>
>>>
>>> Questions:
>>>
>>> 1. We would like to detect failed connections quickly on the client side.
>>> We wonder whether TCP load balancers do a good job of detecting failed
>>> connections, or whether they sit on a bad connection until a real problem
>>> is seen.
>>>
>>> 2. If clients deal with the cluster members directly, is that any better?
>>> Can an RMQ client library (like sstone or Spring AMQP) detect failures
>>> more quickly than a load balancer can?
>>>
>>> 3. Can the actual consumers and publishers (clients of the RMQ libraries)
>>> take any special action to detect and recover from failures quickly?
>>>
>>>
>>> Thanks
>>> Vish
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://rabbitmq.1065348.n5.nabble.com/High-Availability-and-Load-Balancers-tp35058.html
>>> Sent from the RabbitMQ mailing list archive at Nabble.com.
>>>
>>
>>
>>
>>
>
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>
>


-- 
Jason McIntosh
https://github.com/jasonmcintosh/
573-424-7612

