[rabbitmq-discuss] abnormally large memory use in "binaries"

Brian Hammond brianin3d at yahoo.com
Tue Oct 22 17:20:16 BST 2013

Running this now:  netstat -t -n | grep :5672

As far as prefetch goes, we are using a value of 500. When we used a value like 1, we saw pretty bad performance.

My understanding of prefetch is that the consumer receives up to that number of messages and then has to send an acknowledgement before more messages are sent to it. If we have memory issues even when there are almost no messages queued, then our prefetch value seems an unlikely cause.

If we set prefetch really low, I could see us having a lot of messages in the queue. Setting it high or unlimited should actually be better for the server.
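For anyone following along, the prefetch semantics being discussed (a cap on unacknowledged deliveries per consumer) can be modeled in a few lines of plain Python. This is a toy model for illustration only, not RabbitMQ code; the class and method names are invented:

```python
# Toy model of AMQP basic.qos prefetch: the broker keeps at most
# `prefetch` unacknowledged messages in flight to a consumer.
# Illustrative only -- names are invented, this is not RabbitMQ code.
from collections import deque

class Broker:
    def __init__(self, prefetch):
        self.prefetch = prefetch
        self.queue = deque()
        self.unacked = 0

    def publish(self, msg):
        self.queue.append(msg)

    def deliver(self):
        """Push messages until the prefetch window is full."""
        delivered = []
        while self.queue and self.unacked < self.prefetch:
            delivered.append(self.queue.popleft())
            self.unacked += 1
        return delivered

    def ack(self, n=1):
        self.unacked -= n

broker = Broker(prefetch=500)
for i in range(1000):
    broker.publish(i)

batch = broker.deliver()
print(len(batch))             # 500 in flight at once
broker.ack(len(batch))
print(len(broker.deliver()))  # the remaining 500
```

With prefetch=1 every delivery waits for an ack round-trip before the next one goes out, which would match the poor throughput we saw at low values.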

Nonetheless, I'll try it after this current batch with netstat runs to explosion.

Still no luck reproducing the issue with a simple test program...

On Tuesday, October 22, 2013 7:31 AM, Emile Joubert <emile at rabbitmq.com> wrote:

Hi Brian,

On 21/10/13 19:50, Brian Hammond wrote:
> Do you have a specific set of arguments in mind for netstat? If not I'll
> go look at the man page.

netstat shows this information by default.

> I'll provide more information as my tests progress.

From the reports it appears that all consumers are either using
unlimited prefetch or a value of 500. It is likely that the broker
memory use will be reduced if consumers use a much lower prefetch count,
say 1. Would it be possible for you to try that? The reason it might
help in your particular case is that most of the memory is being
consumed by the Erlang processes that write to the network.
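As a rough back-of-the-envelope check (my own arithmetic, not a RabbitMQ formula, and the figures below are invented for illustration), the memory that can be tied up in outbound deliveries scales with prefetch count, average message size, and consumer count:

```python
# Rough upper-bound estimate of broker memory held for in-flight
# (delivered but unacknowledged) messages. Assumed figures are
# illustrative, not measured RabbitMQ numbers.
def inflight_bytes(prefetch, avg_msg_bytes, consumers):
    return prefetch * avg_msg_bytes * consumers

high = inflight_bytes(prefetch=500, avg_msg_bytes=10_000, consumers=50)
low = inflight_bytes(prefetch=1, avg_msg_bytes=10_000, consumers=50)
print(f"prefetch=500: {high / 1e6:.0f} MB potentially buffered")
print(f"prefetch=1:   {low / 1e6:.2f} MB potentially buffered")
```

Under these assumptions a prefetch of 500 allows hundreds of megabytes to sit in the broker's per-connection writer buffers, while a prefetch of 1 caps it well under a megabyte.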

I realise that it might be difficult for you to make this change to all
consumers. Let me know if this is too onerous so we can discuss.

