[rabbitmq-discuss] server crashes with very fast consumers

Matthew Sackman matthew at rabbitmq.com
Fri Mar 18 17:11:21 GMT 2011

On Fri, Mar 18, 2011 at 09:04:03AM +0000, Matthew Sackman wrote:
> On Thu, Mar 17, 2011 at 08:48:58PM -0700, alex chen wrote:
> > memory limit is 40% * 4GB = 1.6 GB.  the broker would crash if there are more 
> > than 50 GB of messages in 1000 queues.  When this happened, the  memory usage 
> > reached 4 GB.
> > If I run the broker using 64 bit,  it does not crash because it could get more 
> > than 6 GB of memory.   It was able to consume all the messages at rate of 
> > 100-200 MB/sec.
> Good to hear, and that must be one fast running Rabbit! I'll check the
> memory limits code - we can detect whether we're in a 32-bit VM or not I
> think, so if so, it seems likely to make sense we take
> 0.4 * min[4GB, RAM]. Certainly, in general, Rabbit just shouldn't crash.

Gah, clearly, if it's limiting you to 1.6GB then it's already doing the
right thing - it's detecting the 4GB limit already. The crash/malloc
fail is not the right thing however.
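The limit being described can be sketched as follows. This is a minimal illustration in Python, not RabbitMQ's actual Erlang code; the function name and parameters are invented for this example:

```python
def memory_limit_gb(total_ram_gb, watermark=0.4, is_32bit=True):
    """Broker memory high watermark, in GB.

    On a 32-bit Erlang VM the usable address space tops out around
    4 GB, so the watermark fraction is taken of min(4 GB, RAM)."""
    addressable = min(4.0, total_ram_gb) if is_32bit else total_ram_gb
    return watermark * addressable

# The reported setup: 4 GB of RAM under a 32-bit VM -> 0.4 * 4 GB
print(memory_limit_gb(4))                  # -> 1.6
# A 64-bit VM with 8 GB would get 0.4 * 8 GB instead
print(memory_limit_gb(8, is_32bit=False))  # -> 3.2
```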

Over lunch we did manage to come up with a hypothesis as to what's going
wrong, and a quick subsequent test showed this to be accurate. I've now
fixed the bug and whilst it's pending QA, I'd be interested in whether
it fixes your crash even under a 32-bit Erlang VM. If you have the time
and are comfortable compiling Rabbit from scratch, try the branch called
"bug23968" and repeat you're test. It should not crash, even under a
32-bit VM, and it should use substantially less memory too.
