[rabbitmq-discuss] Amazon EC2 spurious cluster timeouts
Laing, Michael P.
Michael.Laing at nytimes.com
Sat May 18 16:45:26 BST 2013
We run several clusters at any one time and have not had problems such as you report – yet :)
* 3 nodes in a cluster / RabbitMQ 3.0.4 or 3.1.0 / instance sizes vary
* Deployed within a VPC and with each node in a different availability zone.
* Amazon Linux and vFabric Erlang/OTP R15B02.
* CloudFormation for automated deployment/autoscaling/DNS (Route53)/etc
Like you, we do not use persistent messages. We persist in Cassandra and S3.
Things I have learned over the years re EC2 that may help:
* Avoid us-east-1: crowded, older infrastructure, bigger swings in capacity, meltdowns. My current favorite: us-west-2.
* Watch IO Wait on your instances: It seems to reflect the current network environment in which you are operating – neighbor instance activity, snapshot activity, and your own IO. The partitions we have had have correlated with high IO Wait.
* If you have a problem with an instance, start a new one to replace it, then diagnose the old.
* Go multi-region. When a zone has big problems usually the regional control plane becomes compromised so resource changes fail. We typically run multi-headed in 3 regions to improve both availability and end user latency.
We also have a 'backup' deployment architecture that uses federation/shovels across zones similar to our multi-region architecture. So far we haven't needed it.
In general, our approach is to ensure that messages are delivered at least once, and that operations are idempotent. Resolvers de-duplicate messages and report message history. History patterns tell us in near real time where problems (missing messages, increased latencies) are occurring.
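The at-least-once-plus-idempotency approach above can be sketched in a few lines. This is an illustrative toy, not the poster's actual resolver: it assumes each message carries a unique message id, keeps an in-memory delivery count per id, and uses that count both to drop duplicates and to report delivery history (a count above one indicates redelivery; zero indicates a missing message).

```python
# Minimal sketch of a de-duplicating "resolver" for at-least-once delivery.
# Class and method names are hypothetical, for illustration only.

class Resolver:
    def __init__(self):
        self.seen = {}  # message_id -> number of times delivered

    def accept(self, message_id):
        """Return True the first time an id is seen (process the message),
        False for duplicate redeliveries (safe to drop, since ops are
        idempotent anyway)."""
        count = self.seen.get(message_id, 0)
        self.seen[message_id] = count + 1
        return count == 0

    def history(self, message_id):
        """Delivery count for an id: >1 suggests redelivery/duplication,
        0 suggests a missing message."""
        return self.seen.get(message_id, 0)

resolver = Resolver()
print(resolver.accept("msg-1"))  # True: first delivery, process it
print(resolver.accept("msg-1"))  # False: duplicate, drop it
```

In production this state would live in a shared store (the post mentions Cassandra) rather than process memory, so resolvers in different regions can compare histories.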
From: <Maslinski>, Ray <MaslinskiR at valassis.com>
Reply-To: rabbitmq <rabbitmq-discuss at lists.rabbitmq.com>
Date: Friday, May 17, 2013 4:03 PM
To: rabbitmq <rabbitmq-discuss at lists.rabbitmq.com>
Subject: [rabbitmq-discuss] Amazon EC2 spurious cluster timeouts
I’ve been working with several two node clusters running various versions of 3.0.x, hosted on m1.small instances on Ubuntu 12.04.2 LTS in EC2. The setup is essentially as described here http://karlgrz.com/rabbitmq-highly-available-queues-and-clustering-using-amazon-ec2/ with the main exception being that both of the RabbitMQ servers are in the same availability zone. A while back I observed a half dozen or so occurrences over the course of a week where the clusters would become partitioned, accompanied by messages on each server such as:
=ERROR REPORT==== 17-May-2013::01:56:45 ===
** Node 'rabbit at oemsg-new-29b15241' not responding **
** Removing (timedout) connection **
=INFO REPORT==== 17-May-2013::01:56:45 ===
rabbit on node 'rabbit at oemsg-new-29b15241' down
Looking over the logs and EC2 metrics, I wasn’t able to identify any other anomalies that coincided with these failures. In particular, the load balancers in front of the cluster nodes did not report any health check failures connecting to the amqp port (on a 30 second interval), suggesting that network connectivity was otherwise healthy, and there didn’t appear to be any unexpected spikes in resource consumption (such as excessive cpu/disk/network activity). The rabbit servers were fairly lightly loaded with messaging traffic at the time, and running some load tests against the same servers afterwards didn’t induce any further failures over the course of several days. I tried increasing the net_ticktime to something like 5 or 10 minutes, but still observed a failure with the new value.
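For reference, net_ticktime is an Erlang kernel setting, so on 3.0.x it goes in the kernel section of rabbitmq.config (the value below is illustrative, not a recommendation; the default is 60 seconds and the change must be applied to every node):

```erlang
%% rabbitmq.config -- example only; 120 is an arbitrary illustrative value
[
  {kernel, [{net_ticktime, 120}]}
].
```

Note that raising net_ticktime lengthens the window before a genuinely failed node is detected, so it trades partition sensitivity for failover latency.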
I left several clusters running over an extended period, most with little or no load, with one cluster running under an extended load test. Several of the clusters experienced no failures over the course of a couple of months, while others became partitioned after a while (though they seemed to survive for at least a few weeks before partition).
Anyone experience anything similar in EC2, or have any ideas what else might be done to diagnose what’s going on?
Senior Software Developer, Engineering
Valassis / Digital Media
maslinskir at valassis.com
Creating the future of intelligent media delivery to drive your greatest success