[rabbitmq-discuss] Possible memory leak in the management plugin

Pavel pmaisenovich at blizzard.com
Fri Apr 11 00:17:34 BST 2014


> 2) Because the GC-ing process has a constant rate (100 or 1% of rows
> every 5 seconds), there is always a possibility that aggregated_stats
> table growth will outpace the cleanup efforts.

> No, there isn't. 

Well, something is happening here and I'm just trying to get to the bottom
of it. That theory seemed plausible given my test results.
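
To sanity-check the rates, here's a toy model (plain Python, not the
plugin's actual code) of a sweep that visits max(100, 1% of rows) every
5 seconds:

    def ticks_for_full_pass(rows, batch_pct=0.01, batch_min=100):
        """How many 5-second ticks until every row has been visited once."""
        visited, ticks = 0, 0
        while visited < rows:
            visited += max(batch_min, int(rows * batch_pct))
            ticks += 1
        return ticks

    for rows in (10_000, 100_000, 1_000_000, 2_000_000):
        t = ticks_for_full_pass(rows)
        print(f"{rows:>9} rows: full pass in {t} ticks (~{t * 5} s)")

Because the batch scales with the table, a full pass always finishes in
~100 ticks (~500 seconds) no matter how big the table gets. So if the
sweep really works that way, row count alone can't outpace it, and the
steady growth I'm seeing would have to come from the set of live keys
itself growing, which matches the permutation math further down.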

I've been running a throttled test (RabbitSmasher 1000 1 1000000 1000 false
true) that published at a 500 msg/s rate, rotating through 1000 channels and
1000 exchanges on every message, for ~10 hours. The aggregated_stats table
grew steadily for almost 4 hours straight until it reached 2.5GB, at which
point Rabbit's RAM hit the 4GB high watermark and publisher channels started
getting throttled. From there Rabbit kept flipping between "publisher runs,
memory grows" and "publisher blocked, memory gets GCed" states for the next
6 hours.

One thing I noticed is that during the "publisher blocked" phases the memory
used by aggregated_stats would go down, but its size (the record count)
wouldn't. The number of items in that table kept growing for the entire
duration of the test (albeit much more slowly after hitting the watermark
threshold), up to 1006782 records when I stopped the script. If I understood
it right, that table holds at least one record for each channel x exchange
and channel x queue permutation, which is 2M in my case.
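
For what it's worth, the arithmetic behind that 2M (the 1000-queue figure
is my assumption about the setup; the command line above doesn't show it):

    channels, exchanges, queues = 1000, 1000, 1000  # queue count assumed
    keys = channels * exchanges + channels * queues # ch x exch + ch x queue
    print(keys)  # 2000000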

Perhaps, if I had enough memory below the high watermark to keep a (short)
history for each of the 2M permutations of stats data points, the table
would eventually stabilize at that size? I'll run more tests to check,
though a crude extrapolation below suggests it wouldn't fit:
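
    # Very hand-wavy: divides the 2.5GB peak by the final record count,
    # two numbers that were actually taken hours apart.
    peak_bytes = 2.5e9                       # aggregated_stats at its peak
    records    = 1_006_782                   # count when I stopped the script
    per_record = peak_bytes / records        # ~2.4 KB/record
    projected  = 2_000_000 * per_record      # all 2M permutations resident
    print(f"~{projected / 1e9:.1f} GB")      # ~5.0 GB, above the 4GB watermark

If that estimate is anywhere near right, holding a short history for all 2M
permutations would want ~5GB, so with a 4GB watermark this setup would keep
flipping between throttled and unthrottled forever rather than level off.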

I have ETS memory snapshot logs from this 10-hour test if you'd like to see
them for yourselves.






