[rabbitmq-discuss] TypeError: Cannot read property 'ram_msg_count' of undefined
Miller Andrew
Andrew.Miller at sentry.com
Tue Jun 17 22:01:34 BST 2014
Hello,
We have a two-node RabbitMQ cluster running RabbitMQ 3.2.2 and Erlang R16B03-1 on Windows servers. We've had multiple issues over the past few days:
Sunday, 6/15/2014 - Network partition caused by a planned network upgrade; restarted both nodes in sequence
Monday, 6/16/2014 - Rogue client created 5000+ concurrent connections to Rabbit; restarted both nodes in sequence and eventually tracked down and killed the client
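(In case it's useful to anyone hitting the same thing: one way to find and drop such a client is to list connections with rabbitmqctl and close the offending one by pid. The columns below are from memory, so treat them as illustrative rather than exactly what we ran:

    rabbitmqctl.bat list_connections pid name peer_host peer_port user channels
    rabbitmqctl.bat close_connection "<pid from the listing>" "closing rogue client"

where "<pid from the listing>" is a placeholder for the connection pid reported by list_connections.)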
This question is primarily about the problem that occurred on Monday. After the first node was restarted, the following messages were logged:
=INFO REPORT==== 16-Jun-2014::17:00:47 ===
Starting RabbitMQ 3.2.2 on Erlang R16B03-1
Copyright (C) 2007-2013 GoPivotal, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/
...
=ERROR REPORT==== 16-Jun-2014::17:00:49 ===
Discarding message {'$gen_call',{<0.254.0>,#Ref<0.0.0.1304>},{add_on_right,{56,<0.254.0>}}} from <0.254.0> to <0.16165.62> in an old incarnation (1) of this node (2)
=ERROR REPORT==== 16-Jun-2014::17:00:49 ===
Discarding message {'$gen_call',{<0.256.0>,#Ref<0.0.0.1311>},{add_on_right,{56,<0.256.0>}}} from <0.256.0> to <0.16138.62> in an old incarnation (1) of this node (2)
=ERROR REPORT==== 16-Jun-2014::17:00:49 ===
Discarding message {'$gen_call',{<0.258.0>,#Ref<0.0.0.1511>},{add_on_right,{56,<0.258.0>}}} from <0.258.0> to <0.16190.62> in an old incarnation (1) of this node (2)
=ERROR REPORT==== 16-Jun-2014::17:00:49 ===
Discarding message {'$gen_call',{<0.260.0>,#Ref<0.0.0.1536>},{add_on_right,{29,<0.260.0>}}} from <0.260.0> to <0.16200.62> in an old incarnation (1) of this node (2)
=INFO REPORT==== 16-Jun-2014::17:00:49 ===
Adding mirror of queue 'SalesCenter.BPBilling_AccountDelinquent' in vhost 'eventing' on node 'rabbit at SHO-P-EVGAPP-02': <5578.16037.65>
=ERROR REPORT==== 16-Jun-2014::17:00:49 ===
** Generic server <0.250.0> terminating
** Last message in was {init,<0.185.0>}
** When Server state == {q,{amqqueue,
{resource,<<"eventing">>,queue,
<<"SalesCenter.AMS_ErrorsAndOmissions30DayExpiration">>},
true,false,none,
[{<<"x-dead-letter-exchange">>,longstr,
<<"EnterpriseEventDeadLetter">>},
{<<"x-dead-letter-routing-key">>,longstr,
<<"SalesCenter.AMS_ErrorsAndOmissions30DayExpiration">>}],
<0.250.0>,[],[],
[{vhost,<<"eventing">>},
{name,<<"ha-all">>},
{pattern,<<"^.*">>},
{'apply-to',<<"queues">>},
{definition,
[{<<"ha-mode">>,<<"all">>},
{<<"ha-sync-mode">>,<<"automatic">>}]},
{priority,0}],
[],[]},
none,false,undefined,undefined,
{queue,[],[],0},
undefined,undefined,undefined,undefined,
{state,fine,5000,undefined},
{0,nil},
undefined,undefined,undefined,
{state,
{dict,0,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]}}},
delegate},
undefined,undefined,undefined,0,running}
** Reason for termination ==
** {{function_clause,
[{rabbit_mirror_queue_slave,terminate,
[{{badmatch,true},
[{rabbit_queue_index,init,2,[]},
{rabbit_variable_queue,init,5,[]},
{rabbit_mirror_queue_slave,handle_go,1,[]},
{rabbit_mirror_queue_slave,handle_call,3,[]},
{gen_server2,handle_msg,2,[]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]},
{not_started,
{amqqueue,
{resource,<<"eventing">>,queue,
<<"SalesCenter.AMS_ErrorsAndOmissions30DayExpiration">>},
true,false,none,
[{<<"x-dead-letter-exchange">>,longstr,
<<"EnterpriseEventDeadLetter">>},
{<<"x-dead-letter-routing-key">>,longstr,
<<"SalesCenter.AMS_ErrorsAndOmissions30DayExpiration">>}],
<0.250.0>,[],[],
[{vhost,<<"eventing">>},
{name,<<"ha-all">>},
{pattern,<<"^.*">>},
{'apply-to',<<"queues">>},
{definition,
[{<<"ha-mode">>,<<"all">>},
{<<"ha-sync-mode">>,<<"automatic">>}]},
{priority,0}],
[{<0.256.0>,<0.250.0>}],
[]}}],
[]},
{gen_server2,terminate,3,[]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]},
{gen_server2,call,[<5578.15838.65>,go,infinity]}}
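For reference, the ha-all policy that appears in the queue state above is the policy applied to the 'eventing' vhost. Reconstructed from the dump, it would have been declared with something close to the following (the JSON quoting may need adjusting for the Windows shell):

    rabbitmqctl.bat set_policy -p eventing ha-all "^.*" "{""ha-mode"":""all"",""ha-sync-mode"":""automatic""}" --apply-to queues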
After the crash above, going to the queue's page (/#/queues/eventing/SalesCenter.AMS_ErrorsAndOmissions30DayExpiration) in the management console gives "TypeError: Cannot read property 'ram_msg_count' of undefined". On the list of queues, this queue is listed as "Active" (most queues are listed as "Idle"), and its Ready/Unacked/Total message counts show question marks. The only way we were able to resolve this was to stop the cluster entirely and start it back up.
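In case it helps with diagnosis, my assumption is that the same counts can also be checked from the command line, independent of the management plugin, with something like:

    rabbitmqctl.bat list_queues -p eventing name pid slave_pids messages_ready messages_unacknowledged messages

though I haven't verified what that reports while the queue's processes are in this broken state.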
Any thoughts on what went wrong, or how to avoid it in the future?
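On the partition side: would setting a partition handling mode in rabbitmq.config have helped with Sunday's issue? I'm thinking of something like the fragment below, with the caveat that, as I understand it, pause_minority can't work with only two nodes (each side sees itself as a minority), so autoheal seems like the only workable choice here:

    [
      {rabbit, [
        %% autoheal rather than pause_minority: with two nodes, a partition
        %% would leave both sides in a "minority" and pause them both.
        {cluster_partition_handling, autoheal}
      ]}
    ].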
Thanks,
Andrew Miller