[rabbitmq-discuss] RabbitMQ Queues memory leak?
Dmitry Saprykin
saprykin.dmitry at gmail.com
Fri Apr 19 10:23:55 BST 2013
Hello, Emile
Thank you very much for your help. I will try to provide anything that can
help to resolve this issue.
> The difference between 1Mb and 180Mb is relatively large, even after
> taking expected differences due to garbage collection into account. We
> can't rule out a memory leak, but need some assistance from you to confirm.
>
> Do you see the same asymmetry if the master node for the queues switches
> from one node to the other? So if you shut down the cdaemon2 node, let
> cdaemon4 become the master for all the queues, and turn cdaemon2 back on (it
> will now be a slave node), does the memory on cdaemon2 now grow?
>
Yes, after the current master is stopped and restarted it becomes a slave and
its memory starts to grow. Meanwhile, memory on the newly elected master is
not released: its memory stops growing but does not fall back to normal. I
have attached memory graphs of our nodes to this message.
>
> Have you been able to add a third node to the cluster for testing
> purposes to see if memory grows on more than one slave node?
>
We have not tried this yet, but if it would help we can allocate one more
node. Is it OK to create the test node on the same physical host as one of
the existing nodes?
>
> How long does it take for the memory use to reach the VM memory high
> watermark?
>
The critical point for our cluster comes much earlier than the VM memory high
watermark. As its memory grows, the slave node also uses more and more CPU.
In our case, when memory consumption reaches ~1 GB the broker stops
responding. After a slave restart, memory grows linearly for some time; after
that the growth changes its pattern and at certain moments increases by a
constant step (~20 MB). I have marked these steps on the attached graphs.
> Can you describe your messaging pattern in a bit more detail for us to
> reproduce the problem - how often are new channels created when publishing?
>
Queues and exchanges:
1 queue and 1 direct exchange (durable, ha-mode: all, no auto-delete).
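For reference, the declarations look roughly like this (a minimal sketch using
the Python pika client; 'rabbit-host' and the 'events' names are placeholders,
not our real ones):

    import pika

    # Minimal sketch of our declarations (names are illustrative).
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbit-host'))
    channel = connection.channel()

    # Durable direct exchange and a durable, non-auto-delete queue.
    channel.exchange_declare(exchange='events', exchange_type='direct', durable=True)
    channel.queue_declare(queue='events', durable=True, auto_delete=False)
    channel.queue_bind(queue='events', exchange='events', routing_key='events')

    connection.close()

The ha-mode: all mirroring is not part of the declaration itself; on RabbitMQ
3.x it is applied with a policy, something like:
rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'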
Publishers:
We have about 100 web servers, each of which publishes messages using the
following steps (a rough sketch follows below the list):
1) Open a connection
2) Create a channel
3) Publish the message to a direct exchange
4) Close the connection
The publishing rate is between 50 and 600 messages per second at peak hours.
I have attached a graph of the publishing rate.
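A rough sketch of a single publish, using the Python pika client as an example
(our actual publishers are not necessarily written in Python; 'rabbit-host'
and 'events' are placeholder names):

    import pika

    # One connection and one channel per published message, as in steps 1-4 above.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbit-host'))
    channel = connection.channel()
    channel.basic_publish(exchange='events',      # the direct exchange
                          routing_key='events',
                          body=b'message payload')
    connection.close()

So for every message the broker sees a fresh connection and channel, which is
probably relevant to your question about how often new channels are created.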
Consumers:
We have 3 servers consuming messages in 30 threads each.
Each consuming thread does the following (a sketch follows after this list):
1) The thread starts
2) Opens a connection to the broker
3) Creates a channel
4) Gets a message from the queue with basic.get and prefetch=1
5) Processes the message and acknowledges it
6) If there are no new messages, sleeps for some time
7) Repeats steps 4-6 until a message limit is reached
8) Closes the connection
9) The thread stops
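A sketch of one consuming thread, again with pika and placeholder names;
process_message() and LIMIT stand in for our application logic and per-thread
message limit:

    import time
    import pika

    LIMIT = 10000        # illustrative per-thread message limit
    POLL_SLEEP = 1.0     # seconds to sleep when the queue is empty

    def process_message(body):
        pass             # application-specific processing (placeholder)

    connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbit-host'))
    channel = connection.channel()
    channel.basic_qos(prefetch_count=1)

    processed = 0
    while processed < LIMIT:
        method, properties, body = channel.basic_get(queue='events', auto_ack=False)
        if method is None:
            time.sleep(POLL_SLEEP)   # step 6: no message, back off for a while
            continue
        process_message(body)                                 # step 5
        channel.basic_ack(delivery_tag=method.delivery_tag)
        processed += 1

    connection.close()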
>
> In order to investigate further it might be helpful to execute some
> diagnostic commands on the broker. Are you able to replicate the problem
> in a staging or QA environment where it is safe to do this?
>
I will execute the diagnostic commands on the broker. If something goes wrong,
our messaging falls back to a version without RabbitMQ involved :).
> -Emile
>
>
Kind regards,
Dmitry Saprykin
-------------- next part --------------
Attachments (scrubbed by the list archive): cdaemon2.memory.png,
cdaemon4.memory.png, message.publishing.png