[rabbitmq-discuss] Blocked/blocking connections & Memory fluctuations
shridharan muthu
shridharan.m at gmail.com
Thu Dec 19 19:38:52 GMT 2013
Hello Simon,
I tried restarting the rabbitmq_management application and got the
following error. I am not able to use port 15672 at all.
sudo rabbitmqctl -n rabbit eval 'application:stop(rabbitmq_management),application:start(rabbitmq_management).'
{error,
    {bad_return,
        {{rabbit_mgmt_app,start,[normal,[]]},
         {'EXIT',
             {{{case_clause,
                   {error,
                       {no_record_for_listener,
                           [{port,55672},{ignore_in_use,true}]}}},
               [{rabbit_mochiweb_registry,handle_call,3},
                {gen_server,handle_msg,5},
                {proc_lib,init_p_do_apply,3}]},
              {gen_server,call,
                  [rabbit_mochiweb_registry,
                   {add,rabbit_mgmt_redirect,
                       [{port,55672},{ignore_in_use,true}],
                       #Fun<rabbit_mochiweb.1.105980535>,
                       #Fun<rabbit_mochiweb.0.34137573>,
                       {[],"Redirect to port 15672"}},
                   infinity]}}}}}}
...done.
So, is there any way to bring it back up without restarting the app?
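One thing I am tempted to try (just a guess, assuming the stale listener
record is held by the rabbit_mochiweb registry rather than by the
management plugin itself) is bouncing rabbit_mochiweb as well:

sudo rabbitmqctl -n rabbit eval 'application:stop(rabbitmq_management), application:stop(rabbit_mochiweb), application:start(rabbit_mochiweb), application:start(rabbitmq_management).'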
Shri
On Thu, Dec 19, 2013 at 9:27 AM, Simon MacMullen <simon at rabbitmq.com> wrote:
> On 19/12/13 16:35, shridharan muthu wrote:
>
>> You are spot on, Simon. We are using version 3.0.2 in our
>> cluster. In this case, restarting the node will not help, I guess. Can I
>> delete these blocked/blocking connections from the Management UI?
>>
>
> The only way to do that is to restart the management stats database:
>
> # rabbitmqctl -n <node with the stats db> eval 'application:stop(rabbitmq_management),application:start(rabbitmq_management).'
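> 
> (If you are not sure which node hosts the stats db: as far as I recall it
> is registered globally as rabbit_mgmt_db, so something like the following
> should name its node.)
> 
> # rabbitmqctl eval 'node(global:whereis_name(rabbit_mgmt_db)).'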
>
>
>> Could it be because of the blocked connections? I will check the
>> memory usage after cleaning up the connections.
>>
>
> I doubt it; those connections don't really exist except as rows in the
> management stats db. But note that we have fixed a few memory leaks since
> 3.0.2.
>
>> You could use a higher priority "nodes" policy to move the master
>> to node 2, then delete it again (albeit the queue would be
>> unmirrored while this was taking place). Oh, but you need to be on
>> at least RabbitMQ 3.1.x for that to work; 3.0.x does not let a
>> mirroring policy move the master.
>>
>>
>> Ah! But that would leave us with queues tightly coupled to nodes.
>>
>
> Only until you delete the "nodes" policy: you can use it to move the
> master, then revert to an "all" policy, at which point the master will
> stay where it is.
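> 
> For example (queue pattern and node name here are made up, and the
> --priority flag is the 3.2.x syntax; on 3.1.x the priority is given
> differently, so check your rabbitmqctl man page). Assuming an existing
> "all" policy, something like:
> 
> # rabbitmqctl set_policy --priority 10 move-master "^myqueue$" '{"ha-mode":"nodes","ha-params":["rabbit@node2"]}'
> 
> and then, once the master has moved, delete it so the "all" policy
> applies again:
> 
> # rabbitmqctl clear_policy move-master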
>
>> OTOH if the queues are mirrored to all nodes anyway, why worry? A
>> slave takes about as many resources as a master anyway, so all your
>> masters being on one node and all the slaves on the other should be
>> no big deal.
>>
>> My understanding is that all operations (publish, consume) on a
>> queue are forwarded from slaves to the master (where they are
>> executed and broadcast to all mirrors), and the messages in the
>> mirrors are used only when the master goes down.
>>
>
> Yes.
>
>
>> This sounds like
>> slaves are just forwarding messages all the time (a one-hop penalty),
>> generating more traffic between the slaves and the master.
>>
>
> Sure, but how will distributing the masters around the cluster help that?
> Or are you saying your clients know that some queues are on node 2, and
> expect to connect to that node to use that queue? If so, that makes sense...
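> 
> (If that is the plan: the node hosting each queue's master is visible in
> the queue's pid, and the mirrors in slave_pids - for example:
> 
> # rabbitmqctl list_queues name pid slave_pids
> 
> - though clients would more likely get the same information from the
> management API.)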
>
>
> Cheers, Simon
>
> --
> Simon MacMullen
> RabbitMQ, Pivotal
>