[rabbitmq-discuss] Plz give me help about EPMD: Non-local peer connected
Tim Watson
tim at rabbitmq.com
Mon Jul 9 10:17:54 BST 2012
Ah ok great. I'm on CentOS 6, and I found that setting up the hostname
so that Erlang was *happy* with it took a bit of fiddling around.
My configuration looks like this:
t4 at iske $ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
127.0.0.1 iske
::1 localhost
t4 at iske $ cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=iske
You can check this with system-config-network as well. Are you seeing
something similar?
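
If it helps, you can also ask the emulator directly how it resolves the
short hostname. A rough sketch from my box (the node name "test" is
arbitrary; with the /etc/hosts above, "iske" resolves to the loopback):

$ erl -sname test
...
(test at iske)1> inet:gethostname().
{ok,"iske"}
(test at iske)2> inet:getaddr("iske", inet).
{ok,{127,0,0,1}}
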
Tim
On 07/09/2012 10:13 AM, 何斌 wrote:
> Hi,
>
> My environment info:
> OS: CentOS 5.6
> Erlang: Compiled from otp_src_R15B01.tar.gz
> RabbitMQ: Compiled from rabbitmq-server-2.8.4.tar.gz
>
> ifconfig:
> eth0 Link encap:Ethernet HWaddr 84:2B:2B:73:88:28
> inet addr:183.*.*.* Bcast:183.60.44.127 Mask:255.255.255.192
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:77904379 errors:0 dropped:0 overruns:0 frame:0
> TX packets:65643095 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:11615134852 (10.8 GiB) TX bytes:61820973373 (57.5 GiB)
> Interrupt:66 Memory:da000000-da012800
>
> eth0:0 Link encap:Ethernet HWaddr 84:2B:2B:73:88:28
> inet addr:112.*.*.* Bcast:112.90.57.191 Mask:255.255.255.192
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> Interrupt:66 Memory:da000000-da012800
>
> eth1 Link encap:Ethernet HWaddr 84:2B:2B:73:88:29
> inet addr:10.20.30.1 Bcast:10.20.30.255 Mask:255.255.255.0
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:94328707 errors:0 dropped:0 overruns:0 frame:0
> TX packets:78961028 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:77337167993 (72.0 GiB) TX bytes:15705231055 (14.6 GiB)
> Interrupt:74 Memory:dc000000-dc012800
>
> eth1:1 Link encap:Ethernet HWaddr 84:2B:2B:73:88:29
> inet addr:10.20.30.251 Bcast:10.20.30.255 Mask:255.255.255.0
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> Interrupt:74 Memory:dc000000-dc012800
>
> lo Link encap:Local Loopback
> inet addr:127.0.0.1 Mask:255.0.0.0
> UP LOOPBACK RUNNING MTU:16436 Metric:1
> RX packets:2530738 errors:0 dropped:0 overruns:0 frame:0
> TX packets:2530738 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:155346660 (148.1 MiB) TX bytes:155346660 (148.1 MiB)
>
>
> Thank you again.
>
>
> He Bin
>
>
> ------------------------------------------------------------------------
> From: watson.timothy at gmail.com
> Date: Mon, 9 Jul 2012 10:00:33 +0100
> CC: rabbitmq-discuss at lists.rabbitmq.com
> To: rabbitmq-discuss at lists.rabbitmq.com
> Subject: Re: [rabbitmq-discuss] Plz give me help about EPMD: Non-local peer connected
>
> What OS and rabbit version are you running? I've not seen this happen
> before but I'll investigate.
>
> On 9 Jul 2012, at 07:32, 何斌 <hebin7611 at hotmail.com> wrote:
>
>
> Hi Tim,
>
> Thanks for your reply.
>
> I tried "erl -sname rabbit", and it's OK.
>
> My /etc/hosts looks like the following:
> 127.0.0.1 game-01 ZSWY76 localhost.localdomain localhost
> ::1 localhost6.localdomain6 localhost6
>
> EPMD can be started successfully, but it always reports "Non-local
> peer connected" and then forcibly disconnects rabbitmq-server.
>
> Did I forget any necessary configuration for RabbitMQ to use the
> loopback interface when connecting to epmd?
>
> Thanks a lot.
>
> He Bin
>
>
> > Date: Fri, 6 Jul 2012 18:34:07 +0100
> > From: tim at rabbitmq.com
> > To: rabbitmq-discuss at lists.rabbitmq.com
> > CC: hebin7611 at hotmail.com
> > Subject: Re: [rabbitmq-discuss] Plz give me help about EPMD: Non-local peer connected
> >
> > Hi there,
> >
> > On 06/07/2012 06:53, 何斌 wrote:
> > > Hi all,
> > >
> > > I installed RabbitMQ & tried to start it.
> > >
> > > But I always get the following error:
> > >
> >
> > Ok so first of all, let's see if we can get you to start a
> > standalone distributed Erlang node successfully. Normally stack
> > traces like that occur when the host environment isn't set up quite
> > right (from Erlang's perspective).
> >
> > We need to be able to run `erl -sname rabbit` on the command line
> > and see the Erlang emulator start successfully. It should look
> > something like this:
> >
> > ##############
> >
> > t4 at malachi:systest $ erl -sname rabbit
> > Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:2:2]
> > [async-threads:0] [hipe] [kernel-poll:false]
> >
> > Eshell V5.9.1 (abort with ^G)
> > (rabbit at malachi)1>
> >
> > ##############
> >
> > Can you start Erlang like that successfully? I'm assuming not, but
> > please let us know.
> >
> > I'm also interested in understanding what your hosts configuration
> > (e.g., /etc/hosts) looks like. On some operating systems (CentOS,
> > for example), failing to set an explicit host name prevents you
> > from starting a distributed Erlang node.
> >
> > > {error_logger,{{2012,7,6},{13,32,21}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,epmd_close}},[{inet_tcp_dist,listen,1,[{file,"inet_tcp_dist.erl"},{line,70}]},{net_kernel,start_protos,4,[{file,"net_kernel.erl"},{line,1314}]},{net_kernel,start_protos,3,[{file,"net_kernel.erl"},{line,1307}]},{net_kernel,init_node,2,[{file,"net_kernel.erl"},{line,1197}]},{net_kernel,init,1,[{file,"net_kernel.erl"},{line,357}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}]}
> > > {error_logger,{{2012,7,6},{13,32,21}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,320}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.90>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,551}],[]]}
> > > {error_logger,{{2012,7,6},{13,32,21}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[[rabbitmqprelaunch1077,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
> > > {error_logger,{{2012,7,6},{13,32,21}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
> > > {error_logger,{{2012,7,6},{13,32,21}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
> > > {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}
> > >
> > > Crash dump was written to: erl_crash.dump
> > > Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
> > >
> > >
> > > I ran it on a server with public IP 183.*.*.* .
> > >
> > > In the Erlang source, I found that epmd checks the connection source address:
> > >
> > > /* Determine if connection is from localhost */
> > > if (getpeername(s->fd,(struct sockaddr*) &si,&st) ||
> > > st < sizeof(si)) {
> > > /* Failure to get peername is regarded as non local host */
> > > s->local_peer = EPMD_FALSE;
> > > } else {
> > > /* Only 127.x.x.x and connections from the host's IP address
> > > allowed, no false positives */
> > > s->local_peer =
> > > (((((unsigned) ntohl(si.sin_addr.s_addr)) & 0xFF000000U) ==
> > > 0x7F000000U) ||
> > > (getsockname(s->fd,(struct sockaddr*) &di,&st) ?
> > > EPMD_FALSE : si.sin_addr.s_addr == di.sin_addr.s_addr));
> > > }
> > > dbg_tty_printf(g,2,(s->local_peer) ? "Local peer connected" :
> > > "Non-local peer connected");
> > >
> > >
> > > But unfortunately, si.sin_addr.s_addr was 183.*.*.*, while
> > > di.sin_addr.s_addr was 127.0.0.1
> > >
> > > My log: Checking peer address, getsockname ret: 0, si_addr=0xb7??????, di_addr=0x7f000001
> > >
> > >
> >
> > I could be wrong, but I suspect this is a red herring. You can
> > restart epmd with -d to get debugging information as well, but I
> > suspect this isn't relevant.
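> >
> > If you want to see for yourself which addresses are involved, you can
> > open a throwaway connection to epmd from an Erlang shell and inspect
> > the local end. A rough sketch (assuming epmd is on its default port 4369):
> >
> > {ok, S} = gen_tcp:connect("127.0.0.1", 4369, []),
> > inet:sockname(S).  %% the local end, i.e. what epmd's getpeername() sees
> >
> > If that local address is neither 127.x.x.x nor the address epmd sees
> > on its own end of the socket, you get exactly the "Non-local peer
> > connected" line from the epmd code you quoted.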
> >
> > > Is there any way to force the RabbitMQ server to connect to epmd
> > > via a specified address?
> > >
> >
> > I'm not really sure what you mean by this, but I'm fairly confident
> > that it is not necessary to even attempt to do something like that.
> > Erlang should be able to start up nodes with `-sname <name>` or
> > `-name <name>@<host>` and if either doesn't work, a little tweaking
> > of the host configuration should solve it.
> >
> > Based on your original comment (starting rabbitmq but always getting
> > an error) my understanding is that you're trying to start rabbit on
> > this machine and it fails. AFAIK when a distributed Erlang node
> > connects to EPMD on the localhost it should be treated as such. The
> > rabbitmq-server script starts rabbit up with `-sname rabbit` which
> > implies that the node name will be rabbit@<hostname>, so you should
> > make sure that `erl -sname rabbit` works first of all.
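> >
> > Once that shell starts, you can also confirm from inside it that the
> > node really did register with epmd. A minimal sketch (Port is whatever
> > the distribution listener happened to be given):
> >
> > node().            %% should come back as rabbit@<your short hostname>
> > net_adm:names().   %% asks the local epmd what it has registered;
> >                    %% expect something like {ok,[{"rabbit",Port}]}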
>
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>
>