[rabbitmq-discuss] RabbitMQ crashes when messages are being pumped into a queue under win32 envir

Jerry Kuch jerryk at vmware.com
Mon Apr 4 01:35:56 BST 2011

Sorry, Ming...  I hit 'Send' before finishing my response...

Have you tried using the QoS mechanism, as Emile suggested a bit earlier in the thread?  That might be worth a try.
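To make the QoS suggestion concrete: a basic.qos prefetch limit caps how many unacknowledged messages the broker will have outstanding to a consumer at once, so a slow consumer can't accumulate an unbounded backlog of unacked deliveries. Here is a toy Python model of that throttling behavior (a sketch of the idea only, not RabbitMQ or any client library's API; all names are invented for illustration):

```python
# Toy model of how a basic.qos prefetch limit throttles deliveries.
# Illustrative sketch, not RabbitMQ code: the "broker" only pushes a
# message when the consumer's unacked count is below the limit.

from collections import deque

class TinyBroker:
    def __init__(self, prefetch_limit):
        self.prefetch_limit = prefetch_limit
        self.queue = deque()
        self.unacked = 0

    def publish(self, msg):
        self.queue.append(msg)

    def deliver(self):
        """Deliver at most one message, respecting the prefetch limit."""
        if self.queue and self.unacked < self.prefetch_limit:
            self.unacked += 1
            return self.queue.popleft()
        return None  # consumer must ack before more is delivered

    def ack(self):
        if self.unacked:
            self.unacked -= 1

broker = TinyBroker(prefetch_limit=2)
for i in range(5):
    broker.publish(f"msg-{i}")

in_flight = [broker.deliver(), broker.deliver()]
stalled = broker.deliver()   # None: 2 messages are unacked, at the limit
broker.ack()                 # consumer acknowledges one message
resumed = broker.deliver()   # delivery resumes
```

In a real client you would set the prefetch count on the channel before starting the consumer; the point of the sketch is just that deliveries stall at the limit until acks come back.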

Also, if you can produce some totally self-contained code that we can easily compile and run, we can give things a try and see if we see the same misbehavior...

Best regards,

From: rabbitmq-discuss-bounces at lists.rabbitmq.com [rabbitmq-discuss-bounces at lists.rabbitmq.com] On Behalf Of Jerry Kuch [jerryk at vmware.com]
Sent: Sunday, April 03, 2011 5:26 PM
To: Ming Li; rabbitmq-discuss at lists.rabbitmq.com; Matthias Radestock (matthias at rabbitmq.com)
Subject: Re: [rabbitmq-discuss] RabbitMQ crashes when messages are being pumped into a queue under win32 envir

> 1.  Using RabbitMQ 2.4.0 (Windows bundle) on Windows XP, create an
> exchange and a queue.
> 2.  Use multiple senders to publish messages into the created queue,
> with only a single receiver consuming from that queue.
> 3.  While this runs, what I discovered is that the Erlang VM's RAM
> usage continues to increase, and the broker crashes when it reaches
> a certain percentage.
> Is there a way to prevent it from crashing?

Hi, Ming...

We actually have a bug on file, with a fix checked in but not yet
released, that could account for some of the trouble you've had.

In the past, when a message had been delivered to a client but an ACK
was still pending, the broker kept the entire message in a pending-ACK
table so that if another queue wanted to read the message, it would
get the same Erlang binary data rather than a freshly created
duplicate, which would have used more memory.

Currently, the broker holds on to the full message in this pending-ACK
table only if the message hasn't already been written to disk;
otherwise the binary is thrown away and the de-duplication mechanism
serves no purpose.  Because the de-duplication scheme was a bit buggy,
under certain conditions the de-duplication cache could end up holding
the entire contents of a queue.  This could eat a lot of memory and
cause trouble for the broker's Erlang VM.  Worse still, because this
could happen when only consumption was occurring, the TCP back
pressure and memory-based flow control mechanisms (which can pause an
overly vigorous publisher) could do nothing about it.
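To make that failure mode concrete, here is a toy Python model of the pending-ACK bookkeeping described above (a sketch of the idea only, not the broker's Erlang code; all names and sizes are invented for illustration):

```python
# Toy model of the pending-ACK table described above.  Illustrative
# only: the real broker is written in Erlang, and these names are made up.

def deliver_all(messages, persisted, buggy):
    """Deliver every message, then report how many payload bytes the
    pending-ACK table retains while acks are outstanding."""
    pending = {}
    for msg_id, payload in messages:
        if msg_id in persisted and not buggy:
            # Payload is already safely on disk: keep only a stub; there
            # is no need to hold the binary for de-duplication.
            pending[msg_id] = None
        else:
            # Buggy path: the full payload stays cached even though the
            # de-duplication it was meant to enable serves no purpose.
            pending[msg_id] = payload
    return sum(len(p) for p in pending.values() if p is not None)

msgs = [(i, b"x" * 1024) for i in range(100)]  # 100 KiB of payloads
on_disk = set(range(100))                      # all already persisted

fixed_usage = deliver_all(msgs, on_disk, buggy=False)  # nothing retained
buggy_usage = deliver_all(msgs, on_disk, buggy=True)   # whole queue retained
```

With the fix, the table holds nothing for already-persisted messages; with the bug, memory grows with the full contents of the queue, which matches the symptom of consumption-only workloads ballooning the Erlang VM's RAM.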

The fix will appear in an upcoming release.  You may want to keep an
eye out for mention of it in the release notes, and see if your
symptoms relent after you've upgraded to that version.

Best regards,