[rabbitmq-discuss] Node reaches RAM High Watermark quickly

Chris Larsen clarsen at euphoriaaudio.com
Thu Sep 22 23:05:11 BST 2011


Thanks Simon!

> There was a memory leak bug in 2.3.1 when using confirms with the
> immediate flag - but "immediate" is pretty rarely used and it was
> probably a slow leak.
> Still, if you are using that combination, that's suspicious.

We weren't using immediate but we were using mandatory.
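For context, our publishes looked roughly like this (a sketch using
librabbitmq; the exchange and routing key names are placeholders, not
our real ones):

    /* Publish with mandatory=1, immediate=0. In librabbitmq the two
       flags are the 5th and 6th arguments to amqp_basic_publish(). */
    amqp_basic_publish(conn, 1,
                       amqp_cstring_bytes("my.exchange"),
                       amqp_cstring_bytes("my.key"),
                       1 /* mandatory */, 0 /* immediate */,
                       NULL, body);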

> What does mgmt say about the memory use for each queue? Unfortunately
> there's also a bug in 2.3.1 which makes mgmt report inflated memory use
> for idle queues but it should be accurate enough to see any gross
> problems.

Looking at each queue showed around 100K to 2MB or so of memory usage.
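For reference, the same per-queue figures can also be pulled on the
command line with rabbitmqctl ("memory" is one of the built-in queue
info items):

    rabbitmqctl list_queues name memory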

> Other than that, the standard advice is to upgrade to 2.6.1 and see if
> the problem goes away.

We likely will later on; we just want to give it some time for other
folks to catch any issues first :o)

I may have found the issue with some further debugging. I inspected the
incoming message stream to check message sizes, and while most were
below 1MB, a few ranged from 20MB all the way up to 60MB. Using the C
library, I had added code that set the "frame_max" value to 10MB
whenever my producers connected to a broker, but I wasn't checking on
the producer side whether a message was larger than that frame_max. I
left the max at 10MB, added code to prevent sending messages over that
size, and the memory leak stopped.
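The fix was roughly the following (a minimal sketch with librabbitmq;
the constant name, connection setup, and error handling are
illustrative rather than the exact code):

    #include <amqp.h>
    #include <stdio.h>

    #define FRAME_MAX (10 * 1024 * 1024)   /* the 10MB limit */

    /* frame_max is requested at connection time via amqp_login(),
       e.g.: amqp_login(conn, "/", 0, FRAME_MAX, 0,
                        AMQP_SASL_METHOD_PLAIN, "guest", "guest"); */

    static int safe_publish(amqp_connection_state_t conn,
                            amqp_bytes_t body)
    {
        /* Producer-side guard: refuse anything larger than the frame
           size we negotiated, before it ever reaches the broker. */
        if (body.len > FRAME_MAX) {
            fprintf(stderr, "dropping %zu-byte message (> frame_max)\n",
                    body.len);
            return -1;
        }
        return amqp_basic_publish(conn, 1,
                                  amqp_cstring_bytes("my.exchange"),
                                  amqp_cstring_bytes("my.key"),
                                  1 /* mandatory */, 0 /* immediate */,
                                  NULL, body);
    }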

So should messages always be smaller than the declared frame size, or
could there still be an issue if I bump the frame size up to 60MB? Thanks!
