<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Hi Matt,<div><br><div><div>On 6 Nov 2013, at 16:02, Matt Wise wrote:</div><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0.8ex; border-left-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; padding-left: 1ex; position: static; z-index: auto; ">
"Note that the cluster configuration is applied only to fresh nodes. A fresh node is a node which has just been reset or is being started for the first time. Thus, the automatic clustering won't take place after restarts of nodes. This means that any change to the clustering via rabbitmqctl will take precedence over the automatic clustering configuration."<br>
<div class="im"><br></div></blockquote><div><br></div><div>So far we've taken the approach that clustering configuration should be hard-coded into the rabbitmq.config files. This works well for explicitly defining all of the hosts in a cluster on every machine, but it also means that adding a 4th node to a 3-node cluster will cause the 3 running live nodes to do a full service restart, which is bad.</div></div></div></div></blockquote><div><br></div><div>That's not strictly necessary - you can add nodes to a cluster without restarting the nodes that are already running.</div><br><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div> Our rabbitmq.config, though, is identical on all of the machines (other than the server-list, which may have been in flux when Puppet was restarting these services)</div>
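</div></div></div></blockquote><div><br></div><div>For example, a fresh node can be joined to a running cluster with rabbitmqctl alone, run only on the joining node - a sketch, where rabbit@i-23cf477b stands in for any existing cluster member:</div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>rabbitmqctl stop_app<br>rabbitmqctl join_cluster rabbit@i-23cf477b<br>rabbitmqctl start_app</div></blockquote><div><br></div><div>The nodes that are already clustered keep running throughout; only the joining node's rabbit application is stopped and started.</div><br><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">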
<div><br></div></div></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
[<br> {rabbit, [<br> {log_levels, [{connection, warning}]},<br> {cluster_partition_handling,pause_minority},<br> {tcp_listeners, [ 5672 ] },<br> {ssl_listeners, [ 5673 ] },<br>
{ssl_options, [{cacertfile,"/etc/rabbitmq/ssl/cacert.pem"},<br> {certfile,"/etc/rabbitmq/ssl/cert.pem"},<br> {keyfile,"/etc/rabbitmq/ssl/key.pem"},<br>
{verify,verify_peer},<br> {fail_if_no_peer_cert,true}<br> ]},<br> {cluster_nodes,['rabbit@i-23cf477b', 'rabbit@i-07d8bc5f', 'rabbit@i-a3291cf8']}<br>
]}<br>].</blockquote></div></div></blockquote><div class="gmail_extra"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="im">
> Questions:<br>
> 1. We only had ~2500 messages in the queues (they are HA'd and durable). The policy is { 'ha-mode': 'all' }. When serverA and serverB restarted, why did they never come up? Unfortunately in the restart process, they blew away their log files as well, which makes this really tough to troubleshoot.<br>
<br>
</div>It's nigh on impossible to guess what might've gone wrong without any log files to verify against. We could sit and stare at all the relevant code for weeks and not spot a bug that's been triggered here, since if it were obvious we would've fixed it already.<br>
<br>
If you can give us a very precise set of steps (and timings) that led to this situation, I can try and replicate what you've seen, but I don't fancy my chances to be honest.<br></blockquote><div><br></div><div>It's a tough one for us to reproduce, but I think the closest steps would be:</div>
<div><br></div><div> 1. Create a 3-node cluster, configured similarly to the config I pasted above.</div><div> 2. Create enough publishers and subscribers that you have a few hundred messages/sec going through the three machines.</div>
<div> 3. On MachineA and MachineC, remove MachineB from the config file.</div><div> 4. Restart MachineA's rabbitmq daemon using its init script.</div><div> 5. Wait 3 minutes (theoretically #4 is still in progress), then issue the same restart to MachineC.</div>
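<div><br></div><div>In shell terms, steps 4 and 5 amount to roughly the following (the service name and the exact gap are approximate):</div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div># on MachineA<br>service rabbitmq-server restart<br># ~3 minutes later, on MachineC, while MachineA may still be starting up<br>service rabbitmq-server restart</div></blockquote>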
<div><br></div><div> Fail.</div><div><br></div></div></div></div></blockquote><div><br></div><div>We will take a look at that.</div><br><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>That's our best guess right now, but agreed, the logs are a problem. Can we configure RabbitMQ to log through syslog for the future?</div><div><br></div></div></div></div></blockquote><div><br></div><div>Yes, by replacing the standard OTP logging mechanism with lager, an open source logging framework from Basho Technologies. I have developed a simple plugin (see <a href="https://github.com/hyperthunk/rabbitmq-lager">https://github.com/hyperthunk/rabbitmq-lager</a>) that does this for you. See the README for that repository for further details, and the README at <a href="https://github.com/basho/lager">https://github.com/basho/lager</a> for information on routing to syslog. You can get a binary of the plugin, compiled against R14B03, from <a href="https://raw.github.com/hyperthunk/rabbitmq-lager/binary-dist/lager-2.0.0.ez">https://raw.github.com/hyperthunk/rabbitmq-lager/binary-dist/lager-2.0.0.ez</a>, though you'll need to compile from source if you're running a newer Erlang release than that.</div><br><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0.8ex; border-left-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; padding-left: 1ex; position: static; z-index: auto; ">
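</blockquote></div></div></div></blockquote><div><br></div><div>For illustration, the lager section of rabbitmq.config would look something like the following. This is a sketch only: it sits alongside the {rabbit, [...]} section, and routing to syslog additionally assumes the separate lager_syslog backend from Basho is installed (argument order per its README):</div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>{lager, [<br> {handlers, [<br>  %% keep console logging at info level<br>  {lager_console_backend, info},<br>  %% identity, syslog facility, level<br>  {lager_syslog_backend, ["rabbit", local0, info]}<br> ]}<br>]}</div></blockquote><br><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0.8ex; border-left-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; padding-left: 1ex; position: static; z-index: auto; ">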
<div class="im"><br>
><br>
> 2. I know that restarting serverA and serverB at nearly the same time is obviously a bad idea -- we'll be implementing some changes so this doesn't happen again -- but could this have led to data corruption?<br>
<br>
</div>It's possible, though obviously that shouldn't really happen. How close were the restarts to one another? How many HA queues were mirrored across these nodes, and were they all very busy (as your previous comment about load seems to suggest)? We could try replicating that scenario in our tests, though it's not easy to get the timing right, and obviously the network infrastructure on which the nodes are running won't be the same (and that can make a surprisingly big difference IME).<br>
</blockquote><div><br></div><div>The restarts were within a few minutes of each other. There are 5 queues, and all 5 queues are set to mirror to 'all' nodes in the cluster. They were busy, but no more than maybe 100 messages/sec coming in/out. </div>
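<div><br></div><div>For reference, the mirroring policy was applied with something along these lines (the policy name and queue pattern here are illustrative):</div><div><br></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div>rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'</div></blockquote>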
<div> </div></div></div></div></blockquote><div><br></div><div>I'll take that into account when trying to reproduce - thanks.</div><br><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0.8ex; border-left-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; padding-left: 1ex; position: static; z-index: auto; ">
<div class="im"><br>
> Once the entire RabbitMQ farm was shut down, we were actually forced to move the rabbitmq data directory out of the way and start up the farm completely with blank databases. It seemed that RabbitMQ 3.1.3 really did not want to recover from this failure. Any thoughts?<br>
><br>
> 3. Lastly .. in the event of future failures, what tools are there for recovering our Mnesia databases? Is there any way we can dump out the data into some raw form, and then import it back into a new fresh cluster?<br>
><br>
<br>
</div>I'm afraid there are not, at least not "off the shelf" ones anyway. If you are desperate to recover important production data, however, I'm sure we could explore the possibility of trying to help with that somehow. Let me know and I'll make some enquiries at this end.<br>
</blockquote><div><br></div><div>At this point we can move on from the data loss, but it does raise an interesting issue. A tool that could analyze the Mnesia DB and get "most of" the messages out in some format from which they could be re-injected into a fresh cluster would be incredibly useful. I wonder how hard that would be to do?</div></div></div></div></blockquote><div><br></div><div>The messages are not stored in mnesia - we have a "proprietary" on-disk message store. There is a tool that can be used to interact with an offline message store, but it's bit-rotted now and was never fully supported anyway. If a customer does encounter message loss in production, we can offer commercial support to try and resolve the issue, though obviously we're trying very hard to ensure this never happens.</div><div><br></div><div>Cheers,</div><div>Tim</div><div><br></div></div></div></body></html>