[rabbitmq-discuss] beam process using 100% cpu and lots of memory

Matthew Sackman matthew at lshift.net
Sat Feb 20 21:00:21 GMT 2010

Hi Joel,

On Tue, Feb 16, 2010 at 10:51:31AM +1100, Joel Heenan wrote:
> We are having a situation where when we put lots of data through a
> RabbitMQ queue, we sometimes see a beam process using 100%
> cpu and up to 1.5GB (RSS) of memory. The only way to resolve this
> situation is to restart RabbitMQ.

Nah, if you leave it alone for a while, it'll probably recover. ;)

You may just be seeing a badly overwhelmed queue, which would probably
recover eventually. Or it could be that you're running out of RAM and
it's starting to thrash swap. You may wish to test using the new
persister branch as that is much better at dealing with memory pressure.
However, before you even try that, how long are the queues getting, what
is the average size of the messages being published, and at what rate
are the messages being published?
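Those three numbers matter because they determine how fast a backlog
eats RAM. A back-of-the-envelope sketch (all the rates and sizes below
are invented placeholders, not measurements from your system):

```python
# Rough estimate of RAM consumed by a message backlog when all
# messages are held in memory. Every figure here is a hypothetical
# placeholder - plug in your own measured values.

publish_rate = 500        # messages published per second (assumed)
consume_rate = 200        # messages consumed per second (assumed)
avg_msg_size = 4 * 1024   # average message body in bytes (assumed)
overhead = 1024           # rough per-message broker overhead in bytes

backlog_growth = publish_rate - consume_rate   # messages/second
seconds = 3600                                 # one hour of imbalance

backlog = backlog_growth * seconds
ram_bytes = backlog * (avg_msg_size + overhead)

print(backlog)                        # 1080000 messages after an hour
print(ram_bytes // (1024 ** 2))       # about 5273 MiB
```

Even a modest imbalance between publish and consume rates, sustained
for an hour, can plausibly account for the memory growth you're seeing.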

> We are talking a very large number of connections here, 1807334
> connections in 17 hours, or 29/second.

That's potentially a problem. We've recently discovered that the error
logger, which logs a couple of messages for every connection, isn't
very fast, and its mailbox can grow rapidly. We only noticed this
because of what looked like a DDoS against a public installation: that
was thousands of connections a second, and the error logger was
effectively becoming a memory leak.
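The failure mode is simple to illustrate: when messages arrive in a
process's mailbox faster than the process can drain it, the mailbox
grows without bound. A toy model (plain arithmetic, not RabbitMQ or
Erlang code; the rates are invented for illustration):

```python
# Toy model of a slow logger whose mailbox outgrows memory.
# All rates are invented for illustration only.

conn_rate = 1000      # new connections per second (DDoS-like load)
msgs_per_conn = 2     # the logger gets a couple of messages per connection
logger_rate = 500     # messages the logger can process per second (assumed)

def mailbox_after(seconds):
    """Messages still queued in the logger's mailbox after `seconds`."""
    arrived = conn_rate * msgs_per_conn * seconds
    handled = logger_rate * seconds
    return max(0, arrived - handled)

print(mailbox_after(60))    # 90000 messages backed up after one minute
```

The backlog grows linearly for as long as the load persists, which is
why it looks like a memory leak rather than a spike.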

Could you not do some sort of connection pooling? Making a connection +
channel just to publish a couple of messages is spectacularly wasteful.
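The pooling idea amounts to opening a few long-lived connections up
front and reusing them for every publish. A minimal sketch, where
`connect` and the connection's `send` method are hypothetical stand-ins
for whatever AMQP/STOMP client library you're using:

```python
# Minimal connection-reuse sketch: open a few connections once and
# publish many messages over them, instead of one connection per
# message. `connect` and the connection object are hypothetical
# stand-ins, not a real client library API.

class PooledPublisher:
    def __init__(self, connect, size=4):
        # `connect` is a zero-argument factory returning a connection.
        self._pool = [connect() for _ in range(size)]
        self._next = 0

    def publish(self, message):
        # Round-robin over the long-lived connections.
        conn = self._pool[self._next]
        self._next = (self._next + 1) % len(self._pool)
        conn.send(message)

# Demo with a fake connection so the sketch runs stand-alone.
class FakeConnection:
    def __init__(self):
        self.sent = []
    def send(self, message):
        self.sent.append(message)

opened = []
def connect():
    conn = FakeConnection()
    opened.append(conn)
    return conn

pub = PooledPublisher(connect, size=2)
for i in range(10):
    pub.publish("msg %d" % i)

print(len(opened))                         # 2 connections, not 10
print(sum(len(c.sent) for c in opened))    # 10 messages published
```

At 29 connections a second, reuse like this would also spare the error
logger the per-connection log traffic mentioned above.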

> We started using RabbitMQ Stomp Server 1.6.0-5 then moved to version
> the latest public umbrella (as of 24 hours ago). OS is Centos 5.4 x86_64
> Xen domU with 4GB of RAM and 1VCPU.
> We are using persistent mode on the queues, currently testing with
> non-persistent.

If you're not using the new persister (branch bug21673), that likely
won't make much difference, as all messages are always held in RAM.

If you could provide a few more details then we may have a better idea
of what to suggest.
