[rabbitmq-discuss] Memory leak in STOMP connector or federation?

Matthias Radestock matthias at rabbitmq.com
Sat May 17 04:58:18 BST 2014


On 16/05/14 23:53, Tomas Doran wrote:
> The connections are (or at least can be) very long lived. I can/will
> try resetting all TCP connections to the broker though (tomorrow) and
> see if that causes the memory use to decrease...
>
> However I’m guessing not, as there are no re-queues - it’s all
> mcollective, so there is a persistent server process that listens to
> client messages:
>
> all messages to a ‘broadcast’ queue,
> messages with a specific routing key to a ‘direct’ queue,
> and replies to a specific ‘replies’ queue, on which the client uses its
> own name as the routing key.
>
> Clients are short-lived.
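
For concreteness, that topology might look roughly like the following with 
a Python STOMP client such as stomp.py (the exchange, server and client 
names are illustrative guesses on my part, not taken from your setup). Note 
that with RabbitMQ's STOMP adapter, subscribing to an /exchange/... 
destination creates an exclusive, auto-delete queue bound with the given 
routing key, which would match the queues in your report:

    # untested sketch; older stomp.py versions also need conn.start()
    # before connect()
    import stomp

    conn = stomp.Connection([("localhost", 61613)])
    conn.connect("guest", "guest", wait=True)

    # a long-lived server gets every broadcast message ...
    conn.subscribe(destination="/exchange/mcollective_broadcast",
                   id=1, ack="auto")
    # ... plus messages addressed to it by routing key
    conn.subscribe(destination="/exchange/mcollective_directed/server-01",
                   id=2, ack="auto")

    # a short-lived client listens on a reply destination keyed by its name
    conn.subscribe(destination="/exchange/mcollective_reply/client-1234",
                   id=3, ack="auto")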

Hmm. Your report shows more than 3000 exclusive, auto-delete queues. 
Does that correspond to the number of concurrent client connections?

I see the memory alarm has gone off, so connections are blocked, which 
may skew the result somewhat since such connections won't be closable. 
It would be useful to get a report from a time when rabbit is getting 
close to the memory alarm threshold but hasn't reached it yet.
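
Something along these lines could capture such a report automatically just 
before the threshold is reached (an untested sketch; it assumes the 
management plugin on its default port with default credentials, and 
rabbitmqctl on the PATH):

    import subprocess
    import time

    import requests

    API = "http://localhost:15672/api/nodes"
    AUTH = ("guest", "guest")

    while True:
        node = requests.get(API, auth=AUTH).json()[0]
        # mem_limit is the absolute value of vm_memory_high_watermark
        if node["mem_used"] > 0.8 * node["mem_limit"]:
            with open("rabbitmq-report.txt", "w") as out:
                subprocess.check_call(["rabbitmqctl", "report"], stdout=out)
            break
        time.sleep(30)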

The reply queues you mention above... Does a new reply queue get created 
for every connecting client, and will the server end up publishing 
(reply) messages to that queue? If so then you have exactly the scenario 
I described, i.e. a STOMP connection that over its lifetime publishes 
to an increasing number of queues.
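
In code terms, the problematic pattern would look something like this 
(again an untested sketch with made-up names, continuing the example 
above): a single long-lived connection whose set of publish destinations 
keeps growing with every new client name it replies to.

    import stomp

    server = stomp.Connection([("localhost", 61613)])
    server.connect("guest", "guest", wait=True)

    def send_reply(client_name, payload):
        # each distinct client name is a distinct routing key, i.e. a
        # destination (and reply queue) this connection has not published
        # to before
        server.send(destination="/exchange/mcollective_reply/" + client_name,
                    body=payload)

    # over days of uptime the set of client names only grows, so this one
    # STOMP connection ends up having published to thousands of queues
    for client_name in ("client-0001", "client-0002", "client-0003"):
        send_reply(client_name, "reply payload")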

Matthias.

