[rabbitmq-discuss] Clustering
Tim Watson
tim at rabbitmq.com
Tue Oct 23 10:31:32 BST 2012
Post me the *exact* commands you're using on each node, in sequence,
please. If you follow the 'Breaking up a cluster' transcript from
http://www.rabbitmq.com/clustering.html then you should not have *any*
problems. In particular, note the use of 'force_reset' on the surviving
node, so that it does not attempt to connect to nodes that are no longer
part of the cluster - whose residual configuration is still present on
the extant node we're now resetting.
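To make that concrete, here is a minimal sketch of that transcript
(assuming the 2.8.x rabbitmqctl commands used elsewhere in this thread,
with hypothetical node names rabbit@A and rabbit@B standing in for
yours):

```shell
# Case 1: the leaving node (rabbit@B) is still reachable.
# Run these against rabbit@B while rabbit@A stays up:
rabbitmqctl -n rabbit@B stop_app
rabbitmqctl -n rabbit@B reset        # rabbit@A observes the departure
rabbitmqctl -n rabbit@B start_app

# Case 2: the other nodes are already gone, but this node still
# carries their residual cluster configuration. force_reset skips
# the attempt to contact them before wiping the local state:
rabbitmqctl -n rabbit@A stop_app
rabbitmqctl -n rabbit@A force_reset
rabbitmqctl -n rabbit@A start_app
```

The difference is that reset coordinates with the rest of the cluster,
whereas force_reset unconditionally discards the local node's state.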
On 10/23/2012 10:21 AM, chetan dev wrote:
> Yes, but now the node is failing to start because it cannot find
> the other node for clustering.
> So how do I reset this node to run it normally?
>
> Thanks
>
> On Tue, Oct 23, 2012 at 2:38 PM, Tim Watson <tim at rabbitmq.com> wrote:
>
> Don't do that - take the second node out of the cluster whilst the
> other node(s) are still running, so that they can see it leaving.
> See the clustering transcripts in
> http://www.rabbitmq.com/clustering.html for details.
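> As a sketch (assuming the 2.8.x rabbitmqctl syntax this thread uses;
> node names are taken from the thread), taking the second node out
> cleanly while the other node is still running looks like:
>
> ```shell
> # Run on the node that is leaving (rabbit@CHETANDEV), while
> # rabbit@SHIKHARM is still up so it can observe the departure:
> rabbitmqctl stop_app    # stop the rabbit application only
> rabbitmqctl reset       # leave the cluster; running peers see it go
> rabbitmqctl start_app   # come back as a standalone node
>
> # Then, from either node, confirm the membership:
> rabbitmqctl cluster_status
> ```
>
> Because the other node was running, it records the departure and will
> not try to reconnect to the departed node later.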
>
>
> On 10/23/2012 09:46 AM, chetan dev wrote:
>> Hi,
>>
>> Thank you very much for this information.
>> Another problem I am facing: I have two clustered nodes, say
>> rabbit at SHIKHARM (ram) and rabbit at CHETANDEV (disc).
>> Now if I stop rabbit at SHIKHARM using the rabbitmqctl stop command,
>> then stop rabbit at CHETANDEV using stop_app, reset it (removing it
>> from the cluster this way) and restart the node, rabbit at CHETANDEV
>> starts and works fine. But when I try to start rabbit at SHIKHARM it
>> fails.
>> I think it is trying to cluster with rabbit at CHETANDEV, but that
>> node is no longer in the cluster. Is there a way to reset
>> rabbit at SHIKHARM and start it normally?
>> Here is the error that I got:
>> node : rabbit at SHIKHARM
>> app descriptor : c:/Program Files/RabbitMQ Server/rabbitmq_server-2.8.6/sbin/../ebin/rabbit.app
>> home dir       : C:\Users\Acer
>> config file(s) : (none)
>> cookie hash    : +3xbT32/GKScN3yhCcE0Ag==
>> log            : C:/Users/Acer/AppData/Roaming/RabbitMQ/log/rabbit at SHIKHARM.log
>> sasl log       : C:/Users/Acer/AppData/Roaming/RabbitMQ/log/rabbit at SHIKHARM-sasl.log
>> database dir   : c:/Users/Acer/AppData/Roaming/RabbitMQ/db/rabbit at SHIKHARM-mnesia
>> erlang version : 5.9.1
>>
>> -- rabbit boot start
>> starting file handle cache server
>> ...done
>> starting worker pool
>> ...done
>> starting database
>> ...
>>
>> BOOT FAILED
>> ===========
>>
>> Error description:
>> {error,{failed_to_cluster_with,[rabbit at CHETANDEV],
>> "Mnesia could not connect to
>> any disc nodes."}
>> }
>>
>> Log files (may contain more information):
>> C:/Users/Acer/AppData/Roaming/RabbitMQ/log/rabbit at SHIKHARM.log
>> C:/Users/Acer/AppData/Roaming/RabbitMQ/log/rabbit at SHIKHARM-sasl.log
>>
>> Stack trace:
>> [{rabbit_mnesia,init_db,3,[]},
>> {rabbit_mnesia,init,0,[]},
>> {rabbit,'-run_boot_step/1-lc$^1/1-1-',1,[]},
>> {rabbit,run_boot_step,1,[]},
>> {rabbit,'-start/2-lc$^0/1-0-',1,[]},
>> {rabbit,start,2,[]},
>> {application_master,start_it_old,4,
>> [{file,"application_master.erl"},{line,274}]}]
>>
>> {"Kernel pid terminated",application_controller,
>> "{application_start_failure,rabbit,{bad_return,
>> {{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}}"}
>>
>> Crash dump was written to: erl_crash.dump
>> Kernel pid terminated (application_controller)
>> ({application_start_failure,rabbit,{bad_return,
>> {{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}})
>>
>>
>>
>> Thanks
>>
>>
>>
>>
>>
>> On Tue, Oct 23, 2012 at 1:50 PM, Tim Watson
>> <watson.timothy at gmail.com> wrote:
>>
>> Ah of course - sorry I missed that, guys. After so recently
>> looking at the 3.0 cluster commands I'd forgotten that the
>> choice of disk/ram was done that way in 2.8.x.
>>
>> On 22 Oct 2012, at 21:26, Matthias Radestock
>> <matthias at rabbitmq.com> wrote:
>>
>> > On 22/10/12 20:28, chetan dev wrote:
>> >>> Here is what I am doing:
>> >>> 1. two nodes rabbit at SHIKHARM and rabbit at CHETANDEV are
>> >>> clustered
>> >
>> > yes, but...
>> >
>> >>> I clustered them as described in the RabbitMQ clustering
>> >>> documentation. For example, at first both nodes were running.
>> >>> I stopped the app using rabbitmqctl stop_app, then reset that
>> >>> node, then clustered it with the other using rabbitmqctl
>> >>> cluster rabbit at SHIKHARM, then started the node using
>> >>> rabbitmqctl start_app.
>> >>> When I checked the cluster status it looked fine; then I did
>> >>> as I mentioned earlier.
>> >
>> > That will make rabbit at CHETANDEV a *ram* node. To make both
>> > nodes disk nodes, the node itself would have to be mentioned
>> > in the cluster command too, i.e. 'rabbitmqctl cluster
>> > rabbit at SHIKHARM rabbit at CHETANDEV'.
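>> > To illustrate (2.8.x syntax, node names as in this thread; run
>> > on rabbit at CHETANDEV after stop_app and reset):
>> >
>> > ```shell
>> > # Omitting the local node makes it a RAM member:
>> > rabbitmqctl cluster rabbit@SHIKHARM
>> >
>> > # Listing the local node as well makes it a disc member:
>> > rabbitmqctl cluster rabbit@SHIKHARM rabbit@CHETANDEV
>> >
>> > rabbitmqctl start_app
>> > rabbitmqctl cluster_status   # shows disc vs ram membership
>> > ```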
>> >
>> >>> 2. now I stop node rabbit at SHIKHARM using the rabbitmqctl
>> >>> stop command
>> >>> 3. now I start the node rabbit at SHIKHARM using the
>> >>> rabbitmq-server start command
>> >
>> > You have just stopped and restarted the cluster's only disk
>> > node. Rabbit clusters won't function properly if they have
>> > only ram nodes.
>> >
>> >>> 4. and then I check the cluster status; it shows only one
>> >>> running node
>> >>> Cluster status of node rabbit at CHETANDEV ...
>> >>> [{nodes,[{disc,[rabbit at SHIKHARM]},{ram,[rabbit at CHETANDEV]}]},
>> >>> {running_nodes,[rabbit at CHETANDEV]}]
>> >>> ...done.
>> >
>> > Try running that command against rabbit at SHIKHARM. I suspect
>> > it will tell you that it only knows about itself.
>> >
>> > Anyway, I suggest you change your cluster configuration
>> > such that both nodes are disk nodes.
>> >
>> > Regards,
>> >
>> > Matthias.
>> > _______________________________________________
>> > rabbitmq-discuss mailing list
>> > rabbitmq-discuss at lists.rabbitmq.com
>> >
>> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>>
>>
>>
>>
>> --
>> Cheten Dev
>>
>> B.Tech Final Year
>> Dept. of Electrical Engg.
>> IIT Delhi, New Delhi
>> ph 8527333215
>>
>>
>>
>
>
>
>
>