<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div>Yes, I use a blocking connection. </div><div><br></div><div>I am comparatively new to Python as well, so can you please explain the difference between a blocking connection and a Tornado adapter connection?</div><div><br></div><div>Can you share an example with a Tornado-type connection?</div><div><br>Sent from my iPhone</div><div><br>On 25 Jun 2013, at 15:24, "Laing, Michael" <<a href="mailto:michael.laing@nytimes.com">michael.laing@nytimes.com</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr">I know a lot about the pika client but I don't see any code.<div><br></div><div style="">On my old Mac laptop, using the pika Tornado adapter (or the libev adapter, not yet released), I can publish ~20K msgs/sec (4KBytes each), saturating my network adapter. My servers are small and process msgs more slowly, so network buffers fill up and TCP backpressure is exerted, but everything completes as expected.</div>
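The difference in a nutshell: BlockingConnection performs each AMQP operation synchronously, so the caller waits for the socket round trip, while the Tornado adapter (pika.adapters.TornadoConnection) hands everything to an event loop via callbacks, so publishes are buffered and flushed as the socket becomes writable. Here is a stdlib-only toy sketch of the two styles; the class and method names below are invented for illustration and are NOT the real pika API:

```python
# Toy sketch (stdlib only) contrasting blocking vs. event-loop publishing.
import time

class BlockingStyle:
    """Like pika's BlockingConnection: each publish waits for the
    socket round trip before returning to the caller."""
    def __init__(self, round_trip=0.0001):
        self.round_trip = round_trip
        self.sent = 0

    def publish(self, body):
        time.sleep(self.round_trip)  # caller is stalled for the I/O
        self.sent += 1

class CallbackStyle:
    """Like an async adapter: publish() only buffers; an event loop
    flushes the buffer when the socket is writable."""
    def __init__(self):
        self.buffer = []
        self.sent = 0

    def publish(self, body):
        self.buffer.append(body)  # returns immediately

    def run_ioloop_once(self):
        # one event-loop pass: write out everything buffered
        self.sent += len(self.buffer)
        self.buffer.clear()

blocking = BlockingStyle()
async_conn = CallbackStyle()
for _ in range(100):
    blocking.publish(b"msg")    # 100 sequential waits
    async_conn.publish(b"msg")  # 100 immediate returns
async_conn.run_ioloop_once()    # a single flush delivers them all
assert blocking.sent == async_conn.sent == 100
```

With the real adapter the pattern is callback-driven throughout: you create a TornadoConnection with an on-open callback, open a channel inside that callback, publish from the channel's callback, and start the IOLoop; nothing happens inline.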
<div style=""><br></div><div style="">Other pika async adapters run at about 3K msgs/sec in similar runs on my mac; I don't work at all with the pika BlockingConnection so you are on your own if you are using that.</div><div style="">
<br></div><div style="">ml</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jun 25, 2013 at 8:59 AM, Emile Joubert <span dir="ltr"><<a href="mailto:emile@rabbitmq.com" target="_blank">emile@rabbitmq.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Hi,<br>
<div class="im"><br>
On 25/06/13 13:38, Priyanki Vashi wrote:<br>
<br>
> I have very high end server with 20 CPUs and 120 GB of RAM so I think<br>
> resources wise it's not the bottleneck.<br>
<br>
</div>And what about the network speed? I would still try an independent<br>
bandwidth test from network to disk to give you a comparison reference.<br>
<br>
The numbers you quoted are about 2 orders of magnitude lower than I<br>
would expect for that hardware. Bear in mind that the queue process is<br>
typically the most CPU-intensive process. A queue on a single server can<br>
occupy at most one CPU, so it will benefit more from a faster CPU than from<br>
many CPUs.<br>
<div class="im"><br>
> Did you and Tim receive my scripts?<br>
<br>
</div>Not yet, but hopefully someone who knows more about the Pika client than<br>
me will be able to comment.<br>
<div class="im"><br>
> I learnt that basic_consume is better choice than basic_get since server<br>
> will directly send messages to listening consumer without consumer<br>
> polling it.<br>
<br>
</div>Yes, and the asynchronous choice is typically faster.<br>
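A stdlib-only toy model of why the push style usually wins: with basic_get the client pays one round trip per poll, empty or not, whereas basic_consume registers a callback once and the broker delivers messages as they arrive. The ToyQueue class below is purely illustrative, not the pika API:

```python
# Toy model (stdlib only): polling (basic_get style) vs. push
# (basic_consume style). Counts "round trips" to a pretend broker.
from collections import deque

class ToyQueue:
    def __init__(self):
        self.items = deque()
        self.requests = 0   # round trips to the "broker"
        self.consumer = None

    def basic_get(self):
        # polling style: every call costs a request, even when empty
        self.requests += 1
        return self.items.popleft() if self.items else None

    def basic_consume(self, callback):
        # push style: one registration, then the broker delivers
        self.requests += 1
        self.consumer = callback

    def publish(self, msg):
        if self.consumer:
            self.consumer(msg)  # pushed; no extra client request
        else:
            self.items.append(msg)

# Polling: 50 messages interleaved with 50 empty polls -> 100 requests
polled = ToyQueue()
got = 0
for i in range(100):
    if i % 2 == 0:
        polled.publish(i)
    if polled.basic_get() is not None:
        got += 1
assert got == 50 and polled.requests == 100

# Push: one registration covers all 50 deliveries -> 1 request
pushed = ToyQueue()
received = []
pushed.basic_consume(received.append)
for i in range(50):
    pushed.publish(i)
assert len(received) == 50 and pushed.requests == 1
```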
<br>
<br>
If you are still stuck with very low throughput then I would recommend<br>
you try the MulticastMain utility, included in the RabbitMQ Java client.<br>
It includes support for many of the options that you want to compare.<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
<br>
-Emile<br>
<br>
<br>
<br>
_______________________________________________<br>
rabbitmq-discuss mailing list<br>
<a href="mailto:rabbitmq-discuss@lists.rabbitmq.com">rabbitmq-discuss@lists.rabbitmq.com</a><br>
<a href="https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
</div></div></blockquote></div><br></div>
</div></blockquote></body></html>