[rabbitmq-discuss] Unclean shutdown followed by upgrade causes cluster to no longer come up.

Simon MacMullen simon at rabbitmq.com
Mon Mar 26 13:52:13 BST 2012


On 26/03/12 13:11, Adam Pollock wrote:
> Hi,

Hi.

> Node1 started up correctly, but doesn't show any of the
> other nodes in the cluster status, even though the rabbitmq.conf file
> has them auto-configured.

This is normal in a cluster upgrade - the upgrade works by breaking the 
cluster on the first node to start, and then having other nodes rejoin.
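In practice that just means the start order matters. A rough sketch, assuming the stock init scripts and guessing at your hostnames (node1 being the last disc node to have shut down):

  # node1 starts first and performs the upgrade
  [root@mq1 ~]# service rabbitmq-server start

  # node2 and node3 (the RAM nodes) start afterwards and rejoin
  [root@mq2 ~]# service rabbitmq-server start
  [root@mq3 ~]# service rabbitmq-server start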

Do you actually mean rabbitmq.conf? The file is usually called 
rabbitmq.config.
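For reference, with 2.8.x an auto-configured cluster normally lives in /etc/rabbitmq/rabbitmq.config as an Erlang term. A sketch, with node names guessed from your hostnames - as far as I remember a node becomes a disc node if its own name is in the list and a RAM node otherwise, so on mq2 and mq3 it would look something like:

  [{rabbit, [{cluster_nodes, ['rabbit@mq1']}]}].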

> Then, when I try to start up node2 and node3,
> they both give out the following message in /var/log/rabbitmq/startup_log:
>
> [root at mq2 /etc/rabbitmq]# cat /var/log/rabbitmq/startup_log
> Activating RabbitMQ plugins ...
> 6 plugins activated:
> * amqp_client-2.8.1
> * mochiweb-1.3-rmq2.8.1-git
> * rabbitmq_management-2.8.1
> * rabbitmq_management_agent-2.8.1
> * rabbitmq_mochiweb-2.8.1
> * webmachine-1.7.0-rmq2.8.1-hg
>
> ****
> Cluster upgrade needed but this is a ram node.
> Please first start the last disc node to shut down.
> ****

So this implies that they think no other cluster nodes are running.

Assuming the config file is really /etc/rabbitmq/rabbitmq.config, the 
only way I can think of for this to happen is if the cluster config file 
itself is corrupt / empty. What does 
/var/lib/rabbitmq/mnesia/cluster_nodes.config (I think that's the path) 
contain on the RAM nodes? It should contain the same list of nodes as 
are in rabbitmq.config.
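Something like this on mq2 or mq3 would tell us (the path is my best guess, and the node names are based on your hostnames):

  [root@mq2 ~]# cat /var/lib/rabbitmq/mnesia/cluster_nodes.config
  ['rabbit@mq1'].

If that file turns out to be empty, an empty list, or garbage, that would explain why the RAM nodes think there is no cluster to rejoin.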

Cheers, Simon

-- 
Simon MacMullen
RabbitMQ, VMware

