[rabbitmq-discuss] HAProxy does not work with Rabbitmq?
sameekmishra
sameek at arosys.com
Fri May 18 13:42:33 BST 2012
Hi
I have configured HAProxy and keepalived for the RabbitMQ server.
I have installed Debian Squeeze 6.0.4 with RabbitMQ 2.5.0
and Erlang 14.b.2.1, with a cluster and mirrored queues.
I have checked the cluster status on both ends and got:
rabbitmqctl cluster_status
Cluster status of node rabbit@server03 ...
[{nodes,[{disc,[rabbit@server03,rabbit@server05]}]},
 {running_nodes,[rabbit@server05,rabbit@server03]}]
...done.
which means that the cluster is correctly configured. I have also checked the
mirroring by publishing a message on one node and consuming it from the other
node, and it worked (a rough sketch of that check follows below). I have read
the following document on Setting Up A HAProxy Failover
Cluster <http://www.zimbio.com/Linux/articles/YgBDA7b_kmR/Setting+Up+HAProxy+Failover+Cluster>
and also took the haproxy configuration from this post
<http://www.joshdevins.net/2010/04/16/rabbitmq-ha-testing-with-haproxy/>.
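For reference, the mirror check is roughly like the following Python (pika)
sketch. The queue name and the x-ha-policy queue argument are just how I am
illustrating it here (it connects as the default guest user), so treat it as
a sketch rather than the exact code:

    import pika

    # declare a mirrored queue on node1 (x-ha-policy=all mirrors it across the cluster)
    conn = pika.BlockingConnection(pika.ConnectionParameters(host='192.168.1.153'))
    ch = conn.channel()
    ch.queue_declare(queue='ha.test', durable=True,
                     arguments={'x-ha-policy': 'all'})
    ch.basic_publish(exchange='', routing_key='ha.test', body='hello from node1')
    conn.close()

    # fetch the same message from node2 to confirm the mirror has it
    conn = pika.BlockingConnection(pika.ConnectionParameters(host='192.168.1.155'))
    ch = conn.channel()
    method, header, body = ch.basic_get(queue='ha.test')
    print(body)   # expect: hello from node1
    conn.close()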
Combining both documents, I have tried the failover with the following
configuration:
node-1  server03  rabbit@server03  192.168.1.153
node-2  server05  rabbit@server05  192.168.1.155
virtual IP: 192.168.1.199
/etc/haproxy/haproxy.conf
-------------------------
global
    log 127.0.0.1 alert
    log 127.0.0.1 alert debug

defaults
    log global
    mode http
    option dontlognull
    option redispatch
    retries 3
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen rabbitmq *:5672
    mode tcp
    stats enable
    balance roundrobin
    option forwardfor
    option tcpka
    server server03 192.168.1.153:5672 check inter 5000 downinter 500
    server server05 192.168.1.155:5672 check inter 5000 backup

listen webfarm 192.168.1.199:80
    mode http
    stats enable
    stats auth someuser:sam
    balance roundrobin
    cookie JSESSIONID prefix
    option httpclose
    option forwardfor
    option httpchk HEAD /check.txt HTTP/1.0
    server server03 192.168.1.153:5672 cookie A check
    server server05 192.168.1.155:5672 cookie B check
    option httpclose      # disable keep-alive
    option checkcache     # block response if set-cookie & cacheable
    rspidel ^Set-cookie:\ IP=   # do not let this cookie tell our internal IP address
    #errorloc 502 http://192.168.114.58/error502.html
    #errorfile 503 /etc/haproxy/errors/503.http
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
The same haproxy configuration is on node2 as well.
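(For what it is worth, the file can be syntax-checked on each node with
HAProxy's own check mode, i.e. "haproxy -c -f /etc/haproxy/haproxy.conf",
which just reports whether the configuration parses.)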
/etc/keepalived/keepalived.conf
---------------------------------
vrrp_script chk_haproxy {          # Requires keepalived-1.1.13
    script "killall -0 haproxy"    # cheaper than pidof
    interval 2                     # check every 2 seconds
    weight 2                       # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101                   # 101 on master, 100 on backup
    virtual_ipaddress {
        192.168.1.199
    }
    track_script {
        chk_haproxy
    }
}
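(As I understand the VRRP part: whichever node keepalived currently elects as
master should hold 192.168.1.199, which can be verified with
"ip addr show eth0" on that node. The chk_haproxy script above only adds
priority while an haproxy process is alive; it does not check RabbitMQ itself.)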
The only difference on node2 is priority 100. After the configuration I
started haproxy and keepalived, and both started without any error. I tried
to ping the virtual IP from another node and it was successful; I also
published a message using 192.168.1.199, it was published, and I consumed the
message from the other node. But when I run rabbitmqctl stop_app on node1
(stopping the RabbitMQ server that is running on node1) and then try to
publish via the virtual IP, I get a connection refused exception, which means
the failover does not work for me: it does not switch over to node2.
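For completeness, the publish test through the virtual IP is roughly like
this (same pika sketch style as above; the queue name is a placeholder):

    import pika

    # publish through the HAProxy/keepalived virtual IP instead of a node directly
    params = pika.ConnectionParameters(host='192.168.1.199', port=5672)
    try:
        conn = pika.BlockingConnection(params)
    except pika.exceptions.AMQPConnectionError as err:
        # this is where I see the connection refused error after stop_app on node1
        print('connection via 192.168.1.199 failed:', err)
    else:
        ch = conn.channel()
        ch.basic_publish(exchange='', routing_key='ha.test', body='via the VIP')
        conn.close()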
Please help me, I can't trace the problem.
Thanks
--