<div dir="ltr">Greg:<br><div><br></div><div>I thought I'd share a bit of experience I've got. In general, we've had very little problems with rabbit, but DEFINITELY make sure you're on the latest version (at least, latest minor version). For example, we had a mirrored queue crash due to a bug when we were on 3.2.0 and 3.2.1 fixed that bug. Also, be very careful of your disk IO. Rabbit doesn't seem to handle long pauses on the disk very well (not many things do handle this well). I've had more problems where the disk sub system slows down than anything else. </div>
<div><br></div><div>With those two things taken care of, we've had systems where almost a billion messages were backlogged waiting to be processed, and we kept running fine under that load. I'm looking at a queue right now with over 12 million persistent messages (on a durable queue) waiting to be processed. We're working out how to improve batch performance and speed up the processing of those messages, but Rabbit itself is handling the load fine. </div>
<div><br></div><div>There ARE numerous methods you can use to handle these kinds of situations. For example, you can use x-consistent-hash exchanges to distribute messages across additional queues that get routed to different servers in a "tree" style pattern, with additional consumers and distributed processing (see the sketch below). </div>
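<div><br></div><div>To make that concrete, here is a minimal sketch in Python with pika, assuming the rabbitmq_consistent_hash_exchange plugin is enabled; the exchange and queue names are purely illustrative:</div><div><pre>
import pika

# Assumes a local broker and the rabbitmq_consistent_hash_exchange
# plugin; all names here are illustrative.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

ch.exchange_declare(exchange="jobs", exchange_type="x-consistent-hash",
                    durable=True)

for i in range(4):
    shard = "jobs.shard.{}".format(i)
    ch.queue_declare(queue=shard, durable=True)
    # With x-consistent-hash, the binding key is a relative weight,
    # not a routing pattern.
    ch.queue_bind(queue=shard, exchange="jobs", routing_key="10")

# The hash of the routing key picks the shard, so messages with the
# same key always land in the same queue.
ch.basic_publish(exchange="jobs", routing_key="customer-42",
                 body="payload")
conn.close()
</pre></div><div>Consumers on each shard queue can then live on different servers, which gives you the "tree" fan-out described above.</div>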
<div><br></div><div>Regarding partitioning - I've not hit a partition yet, but we keep our clustered Rabbit nodes really close together to try to prevent those kinds of issues from occurring. In general, they're at least on the same switch or virtual switch. We then use shovels to distribute the messages to remote systems. We set up LTMs (F5 load balancers) between the shovels and the remote systems, with round-robin dispatching of connections. Though the messages aren't precisely load balanced, due to hash keys and other issues, this gets us a moderately balanced solution that scales really, really well (a sample shovel definition is below). Our big slowdown has usually been the database we're writing to on the remote side.</div>
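<div><br></div><div>For reference, a dynamic shovel (available from RabbitMQ 3.3.0) pointing at an LTM VIP might look like the following; the URIs and names are invented. Because the destination URI is the VIP, each reconnect gets round-robined across the remote nodes behind it:</div><div><pre>
# Dynamic shovel; URIs and names are illustrative only.
rabbitmqctl set_parameter shovel to-remote '{
  "src-uri":   "amqp://localhost",
  "src-queue": "outbound",
  "dest-uri":  "amqp://ltm-vip.example.net",
  "dest-exchange": "inbound",
  "reconnect-delay": 5
}'
</pre></div>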
<div><br></div><div>Jason</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, May 19, 2014 at 11:48 AM, Greg Poirier <span dir="ltr"><<a href="mailto:greg.poirier@opower.com" target="_blank">greg.poirier@opower.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Michael,<div><br></div><div>Good to hear from you again. If you don't mind, I have a few questions about your setup. </div>
<div><br></div>I assume that by the AMQP proxy you are referring to something like your Fabrik. Would that be correct? How is open-sourcing it going? I've been interested since your original post. <div>
<br></div><div>Being short on resources, but working with developers who have resources, I am inclined to introduce them to the idea of the middle layer and see what we can come up with that is equitable for all. </div>
<div><br></div><div>I think being able to not persist messages in RabbitMQ would be a big win for us. This removes the bulk of the I/O, and it solves our still-occasional partitioning problems. I'm going to talk to the other service owners about persisting messages themselves in databases and passing only the IDs of their messages around. I don't think we can implement a unified middle layer given our time constraints, but I'm going to propose that as well (as I think it is the best way to approach this). Lacking the ability to implement that, does shifting persistence to databases and maintaining a batch table (of IDs in flight) seem like a reasonable interim solution? Or is there another approach?</div>
<div><br></div><div>A couple of our service owners already do this. Most do not, and instead pass entire documents via RabbitMQ to persistent queues. I have a hard time identifying some of them, but I am working on that as well. I think providing an API for them would make a huge difference in getting them to standardize on a better use of RabbitMQ (something like the sketch below).</div>
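<div><br></div><div>To illustrate the pattern I mean: a minimal sketch, assuming pika, with sqlite3 standing in for whatever database a service actually owns; every name here is made up:</div><div><pre>
import json
import sqlite3
import uuid

import pika

# sqlite3 is just a stand-in for the service's real database.
db = sqlite3.connect("claims.db")
db.execute("CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, body TEXT)")

def process(document):
    print(document)  # placeholder for the service's real work

def publish_by_reference(channel, document):
    """Persist the document in the database; put only its ID on the bus."""
    doc_id = str(uuid.uuid4())
    db.execute("INSERT INTO docs VALUES (?, ?)",
               (doc_id, json.dumps(document)))
    db.commit()  # the database, not RabbitMQ, is the system of record
    channel.basic_publish(exchange="events", routing_key="doc.ready",
                          body=doc_id)

def handle_delivery(channel, method, properties, body):
    """Consumer side: resolve the ID back into the full document."""
    row = db.execute("SELECT body FROM docs WHERE id = ?",
                     (body.decode(),)).fetchone()
    if row is not None:
        process(json.loads(row[0]))
    channel.basic_ack(delivery_tag=method.delivery_tag)
</pre></div><div>The "batch table" of in-flight IDs would just be a status column on that table.</div>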
<div><div>
<br></div><div>I was toying with a simpler implementation of your cluster configuration, but (and I think we discussed this) it will require that producers and consumers connect to separate proxy hosts, correct? I am still largely unfamiliar with how federation and shovel work--despite having read the documentation. I am working on a test bed for myself in my spare time (ha). It would be nice for this proxy layer to be single, unclustered Rabbit nodes. I could then do zero-downtime upgrades of RabbitMQ, add capacity for certain vhosts, etc. Am I understanding federation and shovel correctly? Is this even possible (something like the snippet below)?</div>
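<div><br></div><div>For concreteness, the kind of configuration I'm imagining on an unclustered proxy node; the hostnames and names are hypothetical:</div><div><pre>
# On the proxy node: federate the "events" exchange from the
# backing cluster. Hostnames and names are hypothetical.
rabbitmqctl set_parameter federation-upstream core \
  '{"uri": "amqp://core-cluster.example.net"}'
rabbitmqctl set_policy --apply-to exchanges federate-events "^events$" \
  '{"federation-upstream-set": "all"}'
</pre></div>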
<div><br></div><div>The idea here being </div><div><br></div><div>publisher - proxy - backing cluster - proxy - consumer </div><div><br></div><div>Where consumers take messages from queues bound to exchanges to which publishers are connected. </div>
<div><br></div><div>I think this requires a database for persistence, because if you publish to a proxy exchange and no consumers are connected, then the message gets lost. </div><div><br></div><div>Is there a reasonable way to avoid this without Fabrik? Publisher confirms don't help if no queues are bound. And if we are sharing a database between producer and consumer, why bother with RabbitMQ at all? </div>
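<div><br></div><div>For what it's worth, the loss can at least be made visible: publishing with the mandatory flag makes an unroutable message come back rather than disappear silently. A sketch with a recent pika (names illustrative):</div><div><pre>
import pika
from pika.exceptions import UnroutableError

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.confirm_delivery()  # enable publisher confirms on this channel

try:
    # mandatory=True: if no bound queue can take the message, the
    # broker returns it instead of silently dropping it.
    ch.basic_publish(exchange="proxy.events", routing_key="doc.ready",
                     body="some-doc-id", mandatory=True)
except UnroutableError:
    # No bound queue: persist elsewhere (or alert), since the message
    # would otherwise have been lost.
    print("unroutable; falling back to the database")
</pre></div><div>That detects the loss; it doesn't replace persistence, which is why the database still seems necessary.</div>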
</div><div><br></div>The beauty of message buses is the ability to pass arbitrary messages over them. Without that, what are they for? I realize that we don't want to pass large documents in them, but a small JSON blob seems perfectly reasonable. <div class="HOEnZb">
<div class="h5"><span></span><br>
<div><div><div><br>On Sunday, May 18, 2014, Laing, Michael <<a>michael.laing@nytimes.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I'll respond inline w our experience:<div class="gmail_extra"><br><div class="gmail_quote">On Sun, May 18, 2014 at 2:55 PM, Greg Poirier <span dir="ltr"><<a>greg.poirier@opower.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I mentioned this on Twitter and a couple of people have requested that I bring this up on the mailing list.<div>
<br></div><div>It seems to be a given that RabbitMQ was not designed for the batch processing use case (i.e. using RabbitMQ as a buffer between large serial steps). We have a system in place that attempts to do just that, however.</div>
</div></blockquote><div><br></div><div>It is not a 'given' as far as we are concerned. We have some processes that result in a million or more messages being queued within a minute or so. These messages are processed over the ensuing several minutes (for 'dismissals' of news items from individual devices) to several hours (for lower-priority individualized 'offers'). This is the new 'batch'.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>I have been working with the developers of the software involved in an attempt to help them redesign around a more ideal use of RabbitMQ (or to help them move to a different bus altogether -- a database, or something like Kafka), and some of them have been able to simply operate with smaller batch sizes (thus keeping their queues relatively small).</div>
</div></blockquote><div><br></div><div>We put large message bodies in S3 and pass them by reference. We never use RabbitMQ persistence and compensate for that with replication. For 'real' persistence we use Cassandra. Most importantly, none of our internal users know this, as we provide them with an abstracted interface.</div>
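<div><br></div><div>A minimal sketch of the publish side of that pattern, assuming boto3 and pika; the bucket and names are invented, and this is not our actual code:</div><div><pre>
import json
import uuid

import boto3  # any S3 client would do
import pika

s3 = boto3.client("s3")
BUCKET = "example-message-bodies"  # hypothetical bucket

def publish_large_body(channel, document):
    """Store the body in S3; publish only a reference to it."""
    key = "bodies/{}".format(uuid.uuid4())
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(document))
    ref = json.dumps({"bucket": BUCKET, "key": key})
    # delivery_mode stays at 1 (transient): no RabbitMQ persistence,
    # since S3 already holds the durable copy.
    channel.basic_publish(exchange="pipeline", routing_key="body.ready",
                          body=ref)
</pre></div>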
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>However, I cannot stem the tide of improper RabbitMQ use.</div></div></blockquote><div><br></div><div>We try to make it easier to use us than not. We work hard to be the most reliable, fastest, most scalable, most flexible, and cheapest component of our customers' technology mix.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>When things go poorly, millions of messages end up in the queues. </div>
</div></blockquote><div><br></div><div>We target zero-length queues. If they grow unexpectedly, we: 1) autoscale, 2) shift load, 3) start new regions - usually all three. Then we diagnose.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><br></div><div>In 3.1.x we saw this regularly cause our clusters to partition.</div></div></blockquote><div><br></div><div>We have never had a partition in production because we always overprovision RabbitMQ so it can maintain cluster communications. We basically avoid disk IO due to the risk of IO wait interfering w the cluster heartbeat.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>In 3.1.x and 3.2.x when we would delete large queues (5+ million messages enqueued), this would cause the cluster to become unresponsive, run out of memory, and then crash.</div></div></blockquote><div>
<br></div><div>When we tested situations like this, we found it best to just wipe out the cluster and restart. Before doing this, we shift the load to other regions operating in parallel.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><br></div><div>During the 3.1 -> 3.2 upgrade, we had to completely rebuild our clusters. When 3.2 came up, it soon crashed.</div></div></blockquote><div><br></div><div>We have not had that problem.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>In the most recent upgrade, we saw a 3.2.3 cluster in our dev environment crash. I performed an opportunistic upgrade to 3.3.1, because hey... downtime already, so let's see if 3.3.1 addresses some of the issues we've been seeing.</div>
<div><br></div><div><a href="https://gist.github.com/grepory/384410ac90186ed0ce2a" target="_blank">https://gist.github.com/grepory/384410ac90186ed0ce2a</a><br></div><div><br></div><div>After the upgrade, 3.3.1 would not startup at all. I removed /var/lib/rabbitmq/mnesia on all of the nodes and brought RabbitMQ back up.</div>
</div></blockquote><div><br></div><div>We are not yet in production with 3.3.1, but 3.2.4 is running solidly in stage, and we will upgrade stage to 3.3.1 this coming week.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div><br></div><div>3.3.1 has been up and running all right so far, but we haven't done another end-to-end test in our development environment in a while. One of these tests typically puts at least a million messages in the queue over the course of its run.</div>
</div></blockquote><div><br></div><div>A million is not that many - depending on message size, of course. As I said, our target is 0, but really the question is: what's your rate of change? I try to have enough 'headroom' to easily handle the surges - volumes can vary 20 to 1 depending on the news of the moment, etc. If a queue builds and stays high, we add resources until it goes down, and then investigate.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>So, I guess my question is:</div><div><br></div><div>If I know that I have people using RabbitMQ like this, and there is nothing I can do to change that fact... what do I do?</div></div></blockquote><div>
<br></div><div>You need enough resources. And it is good to be able to autoscale. </div><div><br></div><div>A specific suggestion I would make for any internal service provider is to use an AMQP proxy. We locate proxy clusters that we control in our internal customers' computing environments. They publish to and subscribe from these proxies. We control the shoveling/federation of the proxies to/from our core pipelines in each region, redirecting as needed. The proxies are an additional buffer and also allow us to 'launder' incoming messages, e.g. by forcing persistence off (see the sketch below).</div>
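<div><br></div><div>As a sketch of that 'laundering' step: a dynamic shovel can overwrite publish properties in flight, so the proxy can force delivery_mode 1 (transient) no matter what the publisher set. The names and URIs below are invented, and on newer releases the property key is "dest-publish-properties":</div><div><pre>
# Relay from the proxy into the core pipeline, forcing messages
# transient on the way through. Names and URIs are illustrative.
rabbitmqctl set_parameter shovel launder-inbound '{
  "src-uri":  "amqp://proxy-node",
  "src-queue": "inbound",
  "dest-uri": "amqp://core-cluster",
  "dest-exchange": "pipeline",
  "publish-properties": {"delivery_mode": 1}
}'
</pre></div>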
<div><br></div><div>We also track and account for every message using metadata, and can charge back... We are cheap but not free.</div><div><br></div><div>Anyway, I hope this helps.</div><div><br></div><div>ml</div><div>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>_______________________________________________<br>
rabbitmq-discuss mailing list<br>
<a>rabbitmq-discuss@lists.rabbitmq.com</a><br>
<a href="https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
<br></blockquote></div><br></div></div>
</blockquote></div>
</div>
</div>
</div></div><br>_______________________________________________<br>
rabbitmq-discuss mailing list<br>
<a href="mailto:rabbitmq-discuss@lists.rabbitmq.com">rabbitmq-discuss@lists.rabbitmq.com</a><br>
<a href="https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr">Jason McIntosh<br><a href="https://github.com/jasonmcintosh/" target="_blank">https://github.com/jasonmcintosh/</a><br>573-424-7612</div>
</div>