<div dir="ltr">I was curious about the cost of polling using basic.get, so I wrote a small Erlang client program to start up 500 consumers, on one TCP/IP channel, listening on one queue, each polling with a basic.get every 100ms. I know this is a high rate, but I wanted to see how it behaved. <br>
<br>Maybe my code was naive or inefficient, but I found that the polling program's Linux process was taking 85% CPU (and was off the chart in appmon), while Rabbit's process was at about 30% (on a 4-core Intel Q6600 box, which is why the CPU numbers don't add up to 100%). I haven't tried it with the "native" client. Enabling kernel poll seemed to make no difference.<br>
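<br>In case it helps anyone reproduce this, the test looked roughly like the sketch below. This is a from-memory reconstruction, not the actual code I ran; the record names come from the RabbitMQ Erlang client's generated headers, and the module and queue names are made up.<br><br>

```erlang
%% Rough sketch of the load test described above (not the actual code).
%% Record definitions come from the RabbitMQ Erlang client's headers;
%% the module name and <<"test_queue">> are made up for illustration.
-module(get_load_test).
-export([start/2]).

-include_lib("amqp_client/include/amqp_client.hrl").

%% Spawn N pollers that share one channel and each issue a basic.get
%% against the same queue every 100ms.
start(Channel, NumConsumers) ->
    Get = #'basic.get'{queue = <<"test_queue">>, no_ack = true},
    [spawn_link(fun() -> poll_loop(Channel, Get) end)
     || _ <- lists:seq(1, NumConsumers)].

poll_loop(Channel, Get) ->
    %% Returns {#'basic.get_ok'{}, Content} or #'basic.get_empty'{}.
    _ = amqp_channel:call(Channel, Get),
    timer:sleep(100),
    poll_loop(Channel, Get).
```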
<br>Regards,<br>Edwin<br><br>Environment: Ubuntu Linux 8.04, 2.4 GHz Q6600, 8 GB RAM, Erlang R12B-3, RabbitMQ 1.4.0, RabbitMQ Erlang client as of 2008/06/23.<br><br><div class="gmail_quote">On Wed, Jul 30, 2008 at 2:08 PM, Edwin Fine <span dir="ltr"><<a href="mailto:rabbitmq-discuss_efine@usa.net">rabbitmq-discuss_efine@usa.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div dir="ltr">I just noticed that David is using Java, so my Erlang-related comments won't be helpful. Timers and processes have much lower overhead in Erlang than in most other languages, so what works in Erlang probably won't work as well in Java. I also noticed that things ground to a halt with 500 consumers polling with basic.get. I'd be very curious to know what polling interval it took to make RabbitMQ grind to a halt with 500 consumers.<div>
<div></div><div class="Wj3C7c"><br>
<br><div class="gmail_quote">On Wed, Jul 30, 2008 at 1:52 PM, Edwin Fine <span dir="ltr"><<a href="mailto:rabbitmq-discuss_efine@usa.net" target="_blank">rabbitmq-discuss_efine@usa.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div dir="ltr">I think I was the one with the basic.qos problem: in stress testing, my consumers (started with basic.consume) were being flooded with messages and could not keep up, so their Erlang message queues became huge and eventually caused memory problems. What I eventually wound up doing was, indeed, changing the code to poll with basic.get at a short interval. This keeps the messages in RabbitMQ's camp; since they are persistent, I imagine they will be stored on disk and not take too much memory. Essentially, when I receive a basic.get_empty, I call erlang:send_after/3 to kick off a new basic.get a short while later. When the queues are busy there is no timer overhead; it's only when they are idle that a number of timers will be expiring.<br>
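<br>A minimal sketch of that poll-on-empty loop, assuming the Erlang client's amqp_channel API; handle_message/1 and the 'poll' message protocol are hypothetical, and the owning process is assumed to call poll/3 again when the timer fires:<br><br>

```erlang
%% Sketch only: poll with basic.get, and on basic.get_empty back off by
%% scheduling the next poll with erlang:send_after/3 instead of looping.
%% handle_message/1 is a hypothetical application callback.
poll(Channel, Queue, IntervalMs) ->
    case amqp_channel:call(Channel, #'basic.get'{queue = Queue}) of
        #'basic.get_empty'{} ->
            %% Queue is idle: set a timer; the owning process receives a
            %% 'poll' message when it fires and calls poll/3 again.
            erlang:send_after(IntervalMs, self(), poll);
        {#'basic.get_ok'{delivery_tag = Tag}, Content} ->
            handle_message(Content),
            amqp_channel:cast(Channel, #'basic.ack'{delivery_tag = Tag}),
            %% Queue is busy: fetch the next message immediately,
            %% so no timers are created under load.
            self() ! poll
    end.
```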
<br>So far I have not seen significant performance hits, but then again, I am only using 50 queues and an aggregate of 140 messages/second. Maybe Matthias's suggestion will work for you.<br><br>Hope this helps.<br><br>
Edwin<div><div></div><div><br><br><div class="gmail_quote">On Wed, Jul 30, 2008 at 1:26 PM, Matthias Radestock <span dir="ltr"><<a href="mailto:matthias@lshift.net" target="_blank">matthias@lshift.net</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div><a href="mailto:David.Corcoran@edftrading.com" target="_blank">David.Corcoran@edftrading.com</a> wrote:<br>
</div><div>> I'm not sure how message delivery credits work but having read through a<br>
> little it looks like it might be size oriented.<br>
<br>
</div>basic.qos can do count-based windows too. So with a window size of one<br>
you can guarantee that at most one message is "in flight" or being<br>
processed by the consumer.<br>
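<br>For what it's worth, setting a count-based window of one would look something like the sketch below with the Erlang client (assuming amqp_channel:call/2 and the generated basic.qos record; not tested against the current client):<br><br>

```erlang
%% Sketch: ask the broker to keep at most one un-acked message in
%% flight on this channel. prefetch_count = 0 would mean "unlimited".
limit_in_flight(Channel) ->
    #'basic.qos_ok'{} =
        amqp_channel:call(Channel, #'basic.qos'{prefetch_count = 1}).
```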
<br>
This has come up before, and basic.qos is on our todo list. As with<br>
basic.reject, and perhaps more so, there are quite a few challenges in<br>
implementing it, which is why it hasn't been done yet. One difficulty is<br>
that basic.qos operates on an entire channel (or, optionally, even an<br>
entire connection). Since a single channel can consume messages from<br>
multiple queues, and these queues can have consumers on other channels<br>
(with consumption possibly limited by basic.qos) we basically have a<br>
nice little optimisation &amp; consensus problem on our hands: figuring out<br>
which queues should send which messages to which channels such that the<br>
maximum number of messages gets delivered while staying inside the<br>
configured qos limits. And all that in a distributed setting. And the<br>
limits change all the time outside the broker's control - clients ack<br>
messages whenever they please, the topology may change, etc, etc. Great<br>
fun :)<br>
<font color="#888888"><br>
<br>
Matthias.<br>
</font><div><div></div><div><br>
_______________________________________________<br>
rabbitmq-discuss mailing list<br>
<a href="mailto:rabbitmq-discuss@lists.rabbitmq.com" target="_blank">rabbitmq-discuss@lists.rabbitmq.com</a><br>
<a href="http://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">http://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
<br>
</div></div></blockquote></div><br><br clear="all"><br></div></div>-- <br>For every expert there is an equal and opposite expert - Arthur C. Clarke<br>
</div>
</blockquote></div><br><br clear="all"><br>
</div></div></div>
</blockquote></div><br><br clear="all"><br>
</div>