[rabbitmq-discuss] abnormally large memory use in "binaries"

Brian Hammond brianin3d at yahoo.com
Fri Oct 18 19:20:50 BST 2013

I ran the first command every 10 minutes for 24 hours, but the second command blew up rabbit most of the time:

Error: unable to connect to node rabbit at project2: nodedown


nodes in question: [rabbit at project2]

hosts, their running nodes and ports:
- project2: [{rabbitmqctl2916,46327}]

current node details:
- node name: rabbitmqctl2916 at project2
- home dir: /var/lib/rabbitmq
- cookie hash: g9gANsnT4AWb9xKsznR+nA==

I stopped the collection script after 11 iterations, but only 5 of the files have anything in them.

They range in size from 109 KB to over 1 GB.

Compressed, they come to about 280 MB (https://www.dropbox.com/s/jvaaohg59cl6gn3/rabbit-memory-details.tgz); I'm not sure whether that's useful, since it covers such a short and irregular time interval.
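For reference, the collection loop described above might look like the sketch below. The exact commands aren't quoted in this thread, so `rabbitmqctl status` and `rabbitmqctl report` are assumptions standing in for the "first" and "second" commands; the interval and iteration count follow the 10-minute / 24-hour schedule mentioned earlier.

```python
#!/usr/bin/env python3
"""Sketch of a periodic rabbitmqctl sampling loop (assumed commands)."""
import pathlib
import subprocess
import time


def collect(cmd, outfile):
    """Run one sampling command, capturing stdout+stderr to a file.

    Returns True if the command exited 0; rabbitmqctl exits nonzero
    on errors such as the "nodedown" failure shown above.
    """
    with open(outfile, "wb") as f:
        result = subprocess.run(cmd, stdout=f, stderr=subprocess.STDOUT)
    return result.returncode == 0


def collection_loop(outdir, interval_s=600, iterations=144):
    """Sample every 10 minutes for 24 hours (144 iterations)."""
    out = pathlib.Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    for _ in range(iterations):
        ts = time.strftime("%Y%m%d-%H%M%S")
        # Lightweight sample: status includes the memory breakdown
        # (assumed to be the "first command" in the thread).
        collect(["rabbitmqctl", "status"], out / f"status-{ts}.txt")
        # Heavyweight sample: the full report is the call assumed to
        # fail with nodedown when the broker is under memory pressure.
        collect(["rabbitmqctl", "report"], out / f"report-{ts}.txt")
        time.sleep(interval_s)


if __name__ == "__main__":
    collection_loop("/tmp/rabbit-memory-details")
```

Capturing stderr into the same file means the nodedown error text ends up in the sample file rather than being lost, which would explain output files that exist but contain only the error message.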

Additionally, a colleague pointed out an interesting failure case where we ran out of memory with only 4 messages in queues.
