<div dir="ltr">I'll respond inline w our experience:<div class="gmail_extra"><br><div class="gmail_quote">On Sun, May 18, 2014 at 2:55 PM, Greg Poirier <span dir="ltr"><<a href="mailto:greg.poirier@opower.com" target="_blank">greg.poirier@opower.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I mentioned this on Twitter and a couple of people have requested that I bring this up on the mailing list.<div>
<br></div><div>It seems to be a given that RabbitMQ was not designed for the batch processing use case (i.e. using RabbitMQ as a buffer between large serial steps). We have a system in place that attempts to do just that, however.</div>
</div></blockquote><div><br></div><div>It is not a 'given' as far as we are concerned. We have some processes that result in a million or more messages being queued within a minute or so. These messages are processed over the ensuing several minutes (for 'dismissals' of news items from individual devices) to several hours (for lower-priority individualized 'offers'). This is the new 'batch'.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>I have been working with the developers of the software involved in an attempt to help them redesign around a more ideal use of RabbitMQ (or to help them move to a different bus altogether -- database or something like kafka) and some of them have been able to simply operate in smaller batch sizes (thus keeping their queues relatively small).</div>
</div></blockquote><div><br></div><div>We put large message bodies in S3 and pass them by reference. We never use RabbitMQ persistence and compensate for that with replication. For 'real' persistence we use Cassandra. Most importantly, none of our internal users know this, as we provide them with an abstracted interface.</div>
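<div><br></div><div>For the curious, the publish side of that pattern looks roughly like the sketch below -- assuming boto and pika, with hypothetical bucket and queue names:</div><div><pre>
# Claim-check pattern: store the large body in S3, publish only a reference.
# Bucket and queue names are hypothetical; error handling omitted.
import json
import uuid

import boto
import pika

def publish_by_reference(body_bytes):
    # Upload the payload to S3 under a unique key.
    key_name = 'payloads/%s' % uuid.uuid4()
    s3 = boto.connect_s3()
    bucket = s3.get_bucket('example-payloads')
    bucket.new_key(key_name).set_contents_from_string(body_bytes)

    # Publish just the reference, transient (delivery_mode=1), since we
    # rely on replication rather than broker persistence.
    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()
    ch.basic_publish(
        exchange='',
        routing_key='work',
        body=json.dumps({'s3_key': key_name}),
        properties=pika.BasicProperties(delivery_mode=1))
    conn.close()
</pre></div><div>The consumer does the inverse: take the reference off the queue, fetch the body from S3, process it, then delete the object.</div>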
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>However, I cannot stem the tide of improper RabbitMQ use.</div></div></blockquote><div><br></div><div>We try to make it easier to use us than not. We work hard to be the most reliable, fastest, most scalable, most flexible and cheapest component of our customers' technology mix.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>When things go poorly, millions of messages end up in the queues. </div>
</div></blockquote><div><br></div><div>We target zero-length queues. If they grow unexpectedly we: 1) autoscale, 2) shift load, 3) start new regions - usually all of those. Then we diagnose.</div>
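<div><br></div><div>The trigger for those steps is nothing fancier than watching queue depth. A minimal sketch of such a check against the management plugin's HTTP API -- host, credentials and threshold are hypothetical, and the scaling action is left to you:</div><div><pre>
# Poll the RabbitMQ management API and flag queues whose backlog
# exceeds a threshold. Host, credentials and limit are hypothetical.
import requests

MGMT_URL = 'http://rabbit.example.com:15672/api/queues'
THRESHOLD = 10000

def queues_over_threshold():
    resp = requests.get(MGMT_URL, auth=('guest', 'guest'))
    resp.raise_for_status()
    return [(q['name'], q['messages'])
            for q in resp.json()
            if q.get('messages', 0) > THRESHOLD]

for name, depth in queues_over_threshold():
    print('%s has %d messages -- autoscale / shift load' % (name, depth))
</pre></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">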
<div dir="ltr"><div><br></div><div>In 3.1.x we saw this regularly cause our clusters to partition.</div></div></blockquote><div><br></div><div>We have never had a partition in production because we always overprovision RabbitMQ so it can maintain cluster communications. We basically avoid disk IO due to the risk of IO wait interfering w the cluster heartbeat.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>In 3.1.x and 3.2.x when we would delete large queues (5+ million messages enqueued), this would cause the cluster to become unresponsive, run out of memory, and then crash.</div></div></blockquote><div>
<br></div><div>When we tested situations like this, we found it best simply to wipe out the cluster and restart. Before doing this, we shift the load to other regions operating in parallel.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><br></div><div>During the 3.1 -> 3.2 upgrade, we had to completely rebuild our clusters. When 3.2 came up, it soon crashed.</div></div></blockquote><div><br></div><div>We have not had that problem.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>In the most recent upgrade, we saw a 3.2.3 cluster in our dev environment crash. I performed an opportunistic upgrade to 3.3.1, because hey... downtime already, so let's see if 3.3.1 addresses some of the issues we've been seeing.</div>
<div><br></div><div><a href="https://gist.github.com/grepory/384410ac90186ed0ce2a" target="_blank">https://gist.github.com/grepory/384410ac90186ed0ce2a</a><br></div><div><br></div><div>After the upgrade, 3.3.1 would not startup at all. I removed /var/lib/rabbitmq/mnesia on all of the nodes and brought RabbitMQ back up.</div>
</div></blockquote><div><br></div><div>We are not yet in production with 3.3.1, but 3.2.4 is running solidly in staging and we will upgrade staging to 3.3.1 this coming week.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div><br></div><div>3.3.1 has been up and running alright so far, but we haven't done another end-to-end test in our development environment in a while. One of these tests can lead to at least a million messages in the queue over a period of time on average.</div>
</div></blockquote><div><br></div><div>A million is not that many, depending on message size of course. As I said, our target is zero, but the real question is your rate of change: how fast is the backlog growing or shrinking? I try to keep enough 'headroom' to absorb surges; volumes can vary 20 to 1 depending on the news of the moment. If a queue builds and stays high, we add resources until it drains, then investigate.</div>
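<div><br></div><div>The back-of-the-envelope math behind 'rate of change' is worth writing down; all of these numbers are hypothetical:</div><div><pre>
# How long does a backlog take to drain? All numbers are hypothetical.
backlog = 1000000        # messages currently queued
publish_rate = 2000      # msg/s still arriving
consume_rate = 7000      # msg/s the consumers sustain

net_drain = consume_rate - publish_rate   # msg/s of real progress
if net_drain <= 0:
    print('queue is growing -- add consumers or shift load')
else:
    print('drains in ~%.1f minutes' % (backlog / net_drain / 60.0))
</pre></div>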
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">
<div><br></div><div>So, I guess my question is:</div><div><br></div><div>If I know that I have people using RabbitMQ like this, and there is nothing I can do to change that fact... what do I do?</div></div></blockquote><div>
<br></div><div>You need enough resources. And it is good to be able to autoscale.</div><div><br></div><div>A specific suggestion I would make for any internal service provider is to use an AMQP proxy. We locate proxy clusters that we control in our internal customers' computing environments. They publish to and subscribe from these proxies. We control the shoveling/federation of the proxies to/from our core pipelines in each region, redirecting as needed. The proxies are an additional buffer and also allow us to 'launder' incoming messages, e.g. by forcing persistence off.</div>
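<div><br></div><div>To make the 'laundering' concrete: a static shovel can override message properties when it republishes, so messages come out transient no matter how they went in. A sketch, with hypothetical broker URIs and queue name:</div><div><pre>
%% rabbitmq.config -- static shovel draining a proxy queue into the core
%% pipeline. publish_properties forces delivery_mode 1 (transient).
%% Broker URIs and the queue name are hypothetical.
[
  {rabbitmq_shovel,
    [{shovels,
      [{proxy_to_core,
        [{sources,      [{broker, "amqp://proxy.internal"}]},
         {destinations, [{broker, "amqp://core.region1.internal"}]},
         {queue, <<"ingest">>},
         {publish_properties, [{delivery_mode, 1}]}]}]}]}
].
</pre></div>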
<div><br></div><div>We also track and account for every message using metadata, and can charge back... We are cheap but not free.</div>
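<div><br></div><div>The accounting rides on standard AMQP properties and headers stamped at the proxy. A minimal pika sketch, with hypothetical exchange, app and header names:</div><div><pre>
# Stamp accounting metadata on each message at publish time.
# Exchange, app id and header names are hypothetical.
import time
import uuid

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.basic_publish(
    exchange='proxy',
    routing_key='ingest',
    body='payload',
    properties=pika.BasicProperties(
        app_id='news-offers',              # who to charge back
        message_id=str(uuid.uuid4()),      # track every message
        timestamp=int(time.time()),
        headers={'x-cost-center': 'team-42'}))
conn.close()
</pre></div><div><br></div><div>Anyway, I hope this helps.</div><div><br></div><div>ml</div><div> </div>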
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>_______________________________________________<br>
rabbitmq-discuss mailing list<br>
<a href="mailto:rabbitmq-discuss@lists.rabbitmq.com">rabbitmq-discuss@lists.rabbitmq.com</a><br>
<a href="https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
<br></blockquote></div><br></div></div>