[rabbitmq-discuss] Diagnosing errors from SASL log
Patrick Long
pat at munkiisoft.com
Tue Apr 8 14:06:57 BST 2014
One of our clusters had a problem over the weekend. Ops were upgrading the
load balancer and restarted the RabbitMQ service on both nodes at 15:00.
Everything seemed to come back up OK, but the following errors started
appearing later in the day, at 17:15.
I have copied some of the SASL log below. Any ideas why this would have
started happening?
=CRASH REPORT==== 5-Apr-2014::19:14:34 ===
crasher:
initial call: gen:init_it/6
pid: <0.9454.1>
registered_name: []
exception exit: {function_clause,
[{rabbit_channel,handle_info,
[{{#Ref<0.0.1.206248>,rabbit at NWPAPPRMQA01},
[{ok,<8069.506.0>,{ok,0,0}}]},
{ch,running,rabbit_framing_amqp_0_9_1,1,<0.9451.1>,
<0.9451.1>,<0.9445.1>,
<<"<rabbit at NWPAPPRMQA02.1.9445.1>">>,
{lstate,<0.9453.1>,false,false},
none,1,
{[],[]},
{user,<<"guest">>,
[administrator],
rabbit_auth_backend_internal,
{internal_user,<<"guest">>,
<<54,19,230,202,176,82,197,60,61,40,94,249,70,83,81,
243,160,53,79,216>>,
[administrator]}},
<<"/">>,<<"aliveness-test">>,.....
=SUPERVISOR REPORT==== 5-Apr-2014::19:14:34 ===
Supervisor: {<0.9446.1>,amqp_channel_sup_sup}
Context: child_terminated
Reason: {function_clause,
[{rabbit_channel,handle_info,
[{{#Ref<0.0.1.206248>,rabbit at NWPAPPRMQA01},
[{ok,<8069.506.0>,{ok,0,0}}]},
{ch,running,rabbit_framing_amqp_0_9_1,1,<0.9451.1>,
<0.9451.1>,<0.9445.1>,
<<"<rabbit at NWPAPPRMQA02.1.9445.1>">>,
{lstate,<0.9453.1>,false,false},
none,1,
{[],[]},
{user,<<"guest">>,
[administrator],
rabbit_auth_backend_internal,
{internal_user,<<"guest">>,
<<54,19,230,202,176,82,197,60,61,40,94,
249,70,83,81,243,160,53,79,216>>,
[administrator]}},
<<"/">>,<<"aliveness-test">>,
=SUPERVISOR REPORT==== 5-Apr-2014::19:14:34 ===
Supervisor: {<0.9452.1>,rabbit_channel_sup}
Context: shutdown
Reason: reached_max_restart_intensity
Offender: [{pid,<0.9454.1>},
{name,channel},
{mfargs,
{rabbit_channel,start_link,
[1,<0.9451.1>,<0.9451.1>,<0.9445.1>,
<<"<rabbit at NWPAPPRMQA02.1.9445.1>">>,
rabbit_framing_amqp_0_9_1,
{user,<<"guest">>,
[administrator],
rabbit_auth_backend_internal,
{internal_user,<<"guest">>,
<<54,19,230,202,176,82,197,60,61,40,94,249,
70,83,81,243,160,53,79,216>>,
[administrator]}},
<<"/">>,
[{<<"publisher_confirms">>,bool,true},
{<<"exchange_exchange_bindings">>,bool,true},
{<<"basic.nack">>,bool,true},
{<<"consumer_cancel_notify">>,bool,true},
{<<"connection.blocked">>,bool,true},
{<<"authentication_failure_close">>,bool,true}],
<0.9448.1>,<0.9453.1>]}},
{restart_type,intrinsic},
{shutdown,4294967295},
{child_type,worker}]
=SUPERVISOR REPORT==== 5-Apr-2014::19:14:34 ===
Supervisor: {<0.362.0>,mirrored_supervisor}
Context: child_terminated
Reason: killed
Offender: [{pid,<0.9479.1>},
{name,rabbit_mgmt_db},
{mfargs,{rabbit_mgmt_db,start_link,[]}},
{restart_type,permanent},
{shutdown,4294967295},
{child_type,worker}]
=SUPERVISOR REPORT==== 5-Apr-2014::19:14:35 ===
Supervisor: {<0.362.0>,mirrored_supervisor}
Context: start_error
Reason: {already_started,<8069.464.0>}
Offender: [{pid,<0.9479.1>},
{name,rabbit_mgmt_db},
{mfargs,{rabbit_mgmt_db,start_link,[]}},
{restart_type,permanent},
{shutdown,4294967295},
{child_type,worker}]
Thanks
--
Patrick Long - Munkiisoft Ltd