[rabbitmq-discuss] why rabbit eating so much memory?

Emile Joubert emile at rabbitmq.com
Wed Jul 17 12:01:12 BST 2013


Hi,

On 17/07/13 03:18, Rubo Liang wrote:
> Meet this problem again today.
> 
> see 'rabbitmqctl report' output
> at: https://gist.github.com/liangrubo/6017129

Thanks for sending this information. The report shows that queues use
less than 3MB of RAM. It also shows ~500 connections and channels. If
your application needs that many channels then that is nothing to worry
about. Nothing in the report explains why 23GB of RAM is being used,
though.
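As a quick first check it may also be worth looking at the broker's own
memory breakdown on each node. The exact categories shown vary by
release, and rabbit_vm:memory/0 is an internal helper, so treat the
second command as a best guess rather than a supported interface:

rabbitmqctl status
rabbitmqctl eval 'rabbit_vm:memory().'

The first includes a memory section; the second, where available, prints
a per-category breakdown of the same numbers.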

The memory appears to grow only on the slave nodes. I see that
rabbit@in16-011 is the master node for all queues and does not suffer
from excessive memory use. In order to pinpoint the source of the leak,
could you please execute another command on both slave nodes, one after
the other? For each slave node do the following:

If you are able to bring more swap space online then please do so. If
there are any other non-critical processes on the machine then please
stop them, as the diagnostic command below can impose a heavy load.
Please be aware that it could cause the node to crash. If you are not
comfortable with that prospect then don't proceed.

Otherwise please run this command:

rabbitmqctl eval 'io:put_chars(standard_error, "this should appear in startup_err\n").'

and check that the expected text appeared in startup_err (normally in
/var/log/rabbitmq). If the text did not appear then do not proceed.
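If you want to keep an eye on the file while running these commands,
something like the following should work, assuming the default log
location:

tail -f /var/log/rabbitmq/startup_err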

Otherwise please run this command:

rabbitmqctl eval 'erlang:process_display(element(2, hd(lists:reverse(lists:sort([{process_info(P, memory), P} || P <- erlang:processes()])))), backtrace).'

The command will return, and the startup_err file should start growing
soon afterwards. It could take a long time to stop growing. Once it has
stopped growing, repeat the process on the other slave node. Please
compress and send both startup_err files to me.
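
Incidentally, if you would like a rough picture before dumping a full
backtrace, a lighter-weight variant of the same query, which just lists
the five largest processes by memory (no backtrace, so far less output),
would be:

rabbitmqctl eval 'lists:sublist(lists:reverse(lists:sort([{process_info(P, memory), P, process_info(P, registered_name)} || P <- erlang:processes()])), 5).'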



-Emile




