[rabbitmq-discuss] Performance Observations and Interesting Behavior
Laing, Michael
michael.laing at nytimes.com
Wed Feb 12 22:48:11 GMT 2014
I think that's essentially the same - we use Route 53 DNS for the health checks etc.
On Wed, Feb 12, 2014 at 5:42 PM, Ron Cordell <ron.cordell at gmail.com> wrote:
> Thanks for the detail! Very interesting.
>
> I'm even more curious about the client connection via the DNS name. Isn't
> that essentially the same as exposing a virtual IP to the client and load
> balancing across the nodes in the cluster? Or did I misunderstand and the
> client is routed to different clusters...
>
> Cheers,
>
> -ronc
>
>
> On Wed, Feb 12, 2014 at 11:12 AM, Laing, Michael <
> michael.laing at nytimes.com> wrote:
>
>> All of our inter-cluster connections use shovels, both within and between
>> regions.
>>
>> A cluster picks one of its nodes to run the shovel on. That node takes
>> the configured list of nodes in a remote cluster and picks one to connect
>> to. When local or remote nodes go down, things adjust. Mostly we see this
>> during rolling restarts. We have found it very rugged in production.
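>>
>> For reference, a static shovel stanza along these lines in rabbitmq.config
>> gives that pick-a-node-from-a-list behavior. This is a minimal sketch -
>> the shovel name, URIs, and queue are placeholders, and real configs carry
>> more settings (ack mode, prefetch, declarations):
>>
>>     {rabbitmq_shovel,
>>      [{shovels,
>>        [{example_shovel,
>>          [{sources,
>>            %% one of these brokers is picked for each connection attempt
>>            [{brokers, ["amqp://remote-node-1.example.com",
>>                        "amqp://remote-node-2.example.com"]}]},
>>           {destinations, [{broker, "amqp://"}]},  %% local node
>>           {queue, <<"example_queue">>},           %% assumed to exist
>>           {reconnect_delay, 5}]}]}]}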
>>
>> External clients connect via a DNS name that round-robins across the
>> cluster nodes. We use Route 53 health checks to ensure nodes are in
>> service.
>>
>> Our external clients use PHP, Java, node.js, and whatever else to connect
>> - some of them may be using client libraries smart enough to fail over by
>> themselves... so we also expose the DNS name of each node in the cluster
>> as an option.
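>>
>> As a rough illustration of that last option, the fallback loop a client
>> can run itself looks something like this (a minimal Python/pika sketch;
>> the host names are hypothetical):
>>
>>     import pika
>>
>>     # Round-robin DNS name first, then per-node names as fallbacks.
>>     HOSTS = ["rabbit.example.com",
>>              "rabbit-1.example.com",
>>              "rabbit-2.example.com",
>>              "rabbit-3.example.com"]
>>
>>     def connect(hosts):
>>         """Return the first connection that opens, trying hosts in order."""
>>         for host in hosts:
>>             try:
>>                 return pika.BlockingConnection(
>>                     pika.ConnectionParameters(host=host))
>>             except pika.exceptions.AMQPConnectionError:
>>                 continue  # node down or out of rotation; try the next
>>         raise RuntimeError("no RabbitMQ node reachable")
>>
>>     connection = connect(HOSTS)
>>     channel = connection.channel()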
>>
>> Best,
>>
>> Michael
>>
>>
>> On Wed, Feb 12, 2014 at 1:20 PM, Ron Cordell <ron.cordell at gmail.com> wrote:
>>
>>> Michael,
>>>
>>> Thanks for the response - that's very interesting. We were quite
>>> intrigued when you posted to the rabbit list about the setup for the
>>> NYT :)
>>>
>>> How exactly do you distribute the connections? Does the rabbit driver do
>>> that for you by choosing from a list, or do you use some other method?
>>>
>>> Cheers,
>>>
>>> Ron
>>>
>>>
>>> On Tue, Feb 11, 2014 at 4:05 PM, Laing, Michael <
>>> michael.laing at nytimes.com> wrote:
>>>
>>>> That's interesting!
>>>>
>>>> We have removed all the load balancers from our core configurations in
>>>> Amazon EC2 because we found they added no value and, in fact, provided
>>>> troublesome additional points of failure. (We do use ELBs to find
>>>> websocket endpoints in the client-facing retail layer.)
>>>>
>>>> Our core clusters in Oregon and Dublin each have 50-100 non-local
>>>> connections, randomly distributed, and are very stable.
>>>>
>>>> We use DNS with health checks for internal client connections in lieu
>>>> of load balancers. Simple and rugged.
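>>>>
>>>> Roughly: each node gets a TCP health check on its AMQP port, plus an
>>>> equally weighted A record under one shared name, so unhealthy nodes
>>>> drop out of the rotation. A sketch with the AWS CLI - the IPs, names,
>>>> and identifiers are hypothetical:
>>>>
>>>>     aws route53 create-health-check --caller-reference rabbit-node-1 \
>>>>         --health-check-config IPAddress=203.0.113.10,Port=5672,Type=TCP
>>>>
>>>>     # then one weighted record per node in the change batch for
>>>>     # change-resource-record-sets:
>>>>     { "Action": "CREATE",
>>>>       "ResourceRecordSet": {
>>>>         "Name": "rabbit.internal.example.com.", "Type": "A",
>>>>         "SetIdentifier": "node-1", "Weight": 1, "TTL": 30,
>>>>         "ResourceRecords": [ { "Value": "203.0.113.10" } ],
>>>>         "HealthCheckId": "<id from create-health-check>" } }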
>>>>
>>>> Michael Laing
>>>> NYTimes
>>>>
>>>>
>>>> On Tue, Feb 11, 2014 at 6:42 PM, Ron Cordell <ron.cordell at gmail.com> wrote:
>>>>
>>>>> Hi all --
>>>>>
>>>>> We've been performance testing RabbitMQ on Linux, as we're about to
>>>>> move our RabbitMQ infrastructure from Windows to Linux (among other
>>>>> things). I wanted to share some of what we observed and see if people
>>>>> have any feedback. All tests were done using a 3-node cluster where
>>>>> most queues are HA, with an F5 configured to provide a virtual IP to
>>>>> the application. There is a single vHost.
>>>>>
>>>>> 1. On the same hardware the Linux installation easily outperforms the
>>>>> Windows installation. It also uses fewer resources for the same throughput.
>>>>>
>>>>> 2. The Windows cluster becomes unstable, with nodes dropping out or
>>>>> partitioning, at around one third of the maximum tested volume. The
>>>>> Linux cluster showed no instability whatsoever up to maximum throughput.
>>>>>
>>>>> 3. Creating a cluster with 2 RAM nodes and 1 disc node has the same
>>>>> disk I/O requirements as 3 disc nodes. (This makes sense because, as I
>>>>> understand it, the RAM nodes still persist HA queue contents to disk.
>>>>> The commands we used to set this up are sketched after #4 below.)
>>>>>
>>>>> 4. (Here is the interesting one.) When the F5 is configured to
>>>>> round-robin connections across the 3 nodes, maximum throughput is
>>>>> significantly less than when the F5 sends all traffic to a single node.
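>>>>>
>>>>> (Re #3: roughly how we built the mixed cluster - host names are
>>>>> hypothetical; node2 and node3 join node1 as RAM nodes:)
>>>>>
>>>>>     # run on node2 and node3
>>>>>     rabbitmqctl stop_app
>>>>>     rabbitmqctl join_cluster --ram rabbit@node1
>>>>>     rabbitmqctl start_app
>>>>>     rabbitmqctl cluster_status   # lists {disc,[...]} and {ram,[...]}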
>>>>>
>>>>> I'd love any feedback, especially on #4.
>>>>>
>>>>> Cheers!
>>>>>
>>>>> -ronc
>>>>>
>>>>
>>>
>>
>