[rabbitmq-discuss] Uneven file descriptor use on HA Cluster
dseltzer at tveyes.com
Fri Jan 27 16:30:31 GMT 2012
Thanks for the reply.
We have about 20 queues currently holding a total of 1.5 million messages.
New messages come in at around 400 messages/second and stale messages are
removed by TTL. All of the queues are set to x-ha-policy: all.
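In case it helps to see it concretely, that per-queue setup can be sketched as a pika-style declare. The queue name and the TTL value below are placeholder assumptions (not from this thread); in the 2.6/2.7 era, mirroring is requested with the x-ha-policy declare argument:

```python
# Sketch: a durable, fully mirrored, TTL-bounded queue declaration.
# Queue name and TTL are hypothetical; adjust to your setup.
QUEUE_ARGS = {
    "x-ha-policy": "all",      # mirror this queue on every node in the cluster
    "x-message-ttl": 600000,   # ms; the broker expires (removes) stale messages
}

def declare_queue(channel, name="example-queue"):
    """Declare the queue on an open channel (e.g. a pika channel)."""
    return channel.queue_declare(queue=name, durable=True, arguments=QUEUE_ARGS)
```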
Given that, I'm not surprised by the volume of file handles, and I've never
once run out of them. So Rabbit is definitely doing something right!
I was just a bit concerned about the discrepancy.
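For anyone watching the same numbers: on Linux a process's open descriptors can be counted straight from /proc (the same view lsof reads). A minimal stdlib sketch, checking the current process against its own soft limit:

```python
import os
import resource

def fd_headroom():
    """Return (open_fds, soft_limit) for the current process on Linux.

    /proc/self/fd has one entry per open descriptor; the soft NOFILE
    limit is what RabbitMQ's file-descriptor counter is measured against.
    """
    open_fds = len(os.listdir("/proc/self/fd"))
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return open_fds, soft
```

To inspect another process (such as beam), point the same listing at /proc/&lt;pid&gt;/fd as root, or use `lsof -p <pid>`.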
I think I will upgrade to rule out any of the bugs fixed between 2.6.1 and
2.7.1. Is there a way to do a live upgrade of an HA cluster? The
documentation I've read says to stop Server1, then stop Server2; upgrade
Server1 and start it, then upgrade Server2 and start it.
I'm currently running RabbitMQ on CentOS, installed from RPMs. Should I
just do an RPM upgrade?
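For my own notes, the stop-everything/upgrade/restart sequence for an RPM install boils down to something like the following per node. The package and service names are assumptions for a CentOS RPM install, and this is only a sketch of the documented sequence, not an official procedure:

```python
import subprocess

# Assumed per-node steps: stop the whole cluster first, then upgrade and
# restart each node. Names ("rabbitmq-server") are placeholders for the
# actual RPM/service on the system.
UPGRADE_STEPS = [
    ["service", "rabbitmq-server", "stop"],
    ["yum", "-y", "upgrade", "rabbitmq-server"],
    ["service", "rabbitmq-server", "start"],
]

def run_steps(dry_run=True):
    """Print each command; with dry_run=False, actually execute it."""
    for cmd in UPGRADE_STEPS:
        print(" ".join(cmd))
        if not dry_run:
            subprocess.check_call(cmd)
```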
I'm running on Erlang OTP R14B03. Is it important that I update the version
of Erlang I'm running on?
Thanks for your time!
On Fri, Jan 27, 2012 at 7:01 AM, Simon MacMullen <simon at rabbitmq.com> wrote:
> First of all, a few points: RabbitMQ will try to be smart about its use of
> file descriptors for actual files - while one descriptor will be used for
> each *connection*, it will keep file descriptors open until it starts to
> run out, at which point it will start to swap them.
> So seeing high numbers of file descriptors used with low numbers of
> connections open is not a cause for panic - especially if you have lots of queues.
> Having said that, if you don't have lots of queues then that number of FDs
> used is odd. And the asymmetry is odd too, unless you have lots of
> non-mirrored queues on one node.
> What does lsof say?
> Finally, lots of bugs got fixed around HA between 2.6.1 and 2.7.1 so it
> might be worth upgrading in case that helps.
> Cheers, Simon
> On 26/01/12 16:04, Dave Seltzer wrote:
>> Hello Everyone,
>> My google-fu is failing me, so maybe someone can help me out with an
>> issue I’ve been having.
>> I have two RabbitMQ 2.6.1 servers running on CentOS 5.7. The two servers
>> (QueuePool01 and QueuePool02) are running as a cluster as we have a
>> number of durable HA queues. I’m load-balancing client-interactions
>> using HAproxy on a third server.
>> I started noticing that QueuePool02 was using ~800 out of 1024 file
>> descriptors (while QueuePool01 was only using ~50). So to decrease the
>> risk of running out of descriptors I increased the limit to 4096 and
>> restarted the node.
>> Now, again I see the number of file descriptors on QueuePool02
>> increasing even though QueuePool01 is remaining steady. Is it possible
>> there is some sort of leak? Should the two servers tend to use the same
>> number of file descriptors?
>> Here is what the current load looks like
>> Dave S
>> rabbitmq-discuss mailing list
>> rabbitmq-discuss at lists.rabbitmq.com
> Simon MacMullen
> RabbitMQ, VMware
Dave Seltzer <dseltzer at tveyes.com>
Chief Systems Architect
(203) 254-3600 x222