<div>Hey Aaron, cool work. Using libevent is a nice choice. Given Pika's modular approach, I'll have to see what that looks like as an option in Pika v2.</div>
<div><br></div>
<div>Given Jason's questions, I decided to port your bench to Pika and threw in py-amqplib (single channel only) to boot.</div>
<div><br></div>
<div>As I had expected, Haigha benched faster than Pika, but surprisingly only by 2.84% with 500 channels. It does, however, start to show its performance benefits on a single connection with multiple channels when that one socket is not over-saturated: it was 13.29% faster than Pika with 50 channels and 21.57% faster with 10 channels.</div>
<div><br></div>
<div>I was somewhat surprised to find that Pika benched 13.08% faster with only 1 channel. So surprised that I had to triple-check the numbers ;-)</div>
<div><br></div>
<div>I used the 500 number as the initial baseline because it is the default value in your test app. I am curious what the use case for so many channels is in the test. Is it to simulate concurrency in a client app?</div>
<div><br></div>
<div>Anyway, here is the info on my tests:</div>
<div><br></div>
<div><span><b>Box:</b></span></div>
<div><ul><li>Dual quad-core AMD 2.2GHz (Model 2356)</li><li>8GB RAM</li><li>Max CPU utilization during all tests: 19%</li><li>Max IOWait during all tests: 0.2%</li><li>Gentoo Linux with a custom-compiled 2.6.28 kernel</li></ul></div>
<div><b>RabbitMQ:</b></div>
<div><ul><li>2.4.1</li><li>Erlang R14B02</li><li>Default settings</li></ul></div>
<div><b>Client libraries:</b></div>
<div><ul><li>Haigha 0.2.1</li><li>Pika 0.9.6p0</li><li>py-amqplib 0.6.1</li></ul></div>
<div><b>Test Parameters:</b></div>
<div><ul><li>Duration: 300 seconds</li><li>Channel Count: 1, 10, 50, 500</li><li>No-Ack: True</li><li>Transactions: Off</li><li>Single threaded</li><li>Single process</li></ul></div>
<div>The 1 channel option was to accommodate py-amqplib and BlockingConnection. BlockingConnection can handle multiple channels, but the overhead of setting up 500 channels on Pika's BlockingConnection was prohibitive: the test ended before all of the queues were even bound, and a single full setup takes roughly 1.2 seconds in my environment. For the Pika/TornadoConnection test, I used Tornado 1.2.</div>
<div><br></div>
<div>A few important things were uncovered in Pika with this test:</div>
<div><ul><li>There is an interesting recursion bug under heavy load in all 0.9.x versions. It only presents itself when there is always a frame to read or a frame to publish in the buffer; if the IOLoop ever gets a cycle without inbound or outbound data, the bug does not appear. But when there is always data to process, a RuntimeError is raised after roughly 1,000 frames, 1,000 being Python's default recursion limit. This bug has been fixed. (A toy illustration of the failure mode follows below.)</li><li>BlockingConnection is very slow! It is roughly 1/8th the speed of py-amqplib. While the asyncore adapter can do roughly 1,800 messages a second on a single channel, BlockingConnection can only do roughly 4!</li></ul></div>
<div>I didn't make much of an effort to clean up the code or remove the parts I don't use, but the stress_test app hack for pika is at <a href="https://gist.github.com/974021">https://gist.github.com/974021</a></div>
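<div><br></div>
<div>For anyone who doesn't want to dig through the gist, the measurement side of the test boils down to something like the sketch below. This is only a rough illustration, not the actual gist code; publish_one and consume_one are hypothetical stand-ins for whichever library (Pika, Haigha, or py-amqplib) is being driven.</div>
<pre>
import time

DURATION = 300  # seconds, matching the test parameters above


def run_bench(publish_one, consume_one, duration=DURATION):
    """Publish and consume as fast as possible for `duration` seconds
    and report the consume rate in messages per second."""
    sent = received = 0
    deadline = time.time() + duration
    while deadline > time.time():
        publish_one()        # a basic.publish on one of the open channels
        sent += 1
        if consume_one():    # a basic.deliver handled with no-ack
            received += 1
    rate = received / float(duration)
    print("sent=%d received=%d rate=%.1f msg/sec" % (sent, received, rate))
    return rate
</pre>
<div>Comparing two libraries is then just a matter of running the same loop against each and comparing the per-second rates.</div>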
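<div><br></div>
<div>To make the recursion bug above a bit more concrete, here is a toy reproduction of the failure mode. This is not Pika's actual frame handling, just the shape of the problem: finishing one frame calls straight back into processing the next one instead of returning to the IOLoop, so a busy enough socket turns frame handling into unbounded recursion.</div>
<pre>
import sys


def handle_frame(pending, depth=0):
    """Process one frame and, if more data is waiting, recurse into the
    next one instead of returning to the I/O loop -- the buggy pattern."""
    if not pending:
        return depth               # an idle cycle lets the stack unwind
    pending.pop(0)                 # "process" one frame
    return handle_frame(pending, depth + 1)


frames = ["frame"] * 2000          # more frames than the default limit
print(sys.getrecursionlimit())     # 1000 by default
try:
    handle_frame(frames)
except RuntimeError as error:      # maximum recursion depth exceeded
    print(error)
</pre>
<div>The point is just that the stack depth tracks how much buffered data there is, which is why an occasional idle IOLoop cycle is enough to mask the bug.</div>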
<div><br></div>
<div>Haigha looks nice. You're doing some of the things I have been thinking about for future versions of Pika, such as your haigha/classes modules (a more natural mapping of the AMQP classes for use in the client). I look forward to seeing where you take it.</div>
<div><br></div>
<div>Regards,</div>
<div><br></div>
<div><span><div>Gavin</div>
</span>
<p style="color: #a0a0a0;">On Saturday, May 14, 2011 at 9:40 PM, Jason J. W. Williams wrote:</p><blockquote type="cite"><div>
<span><div><div><div>How does this differ from Pika's asyncore or Tornado bindings in terms of performance?</div></div></div></span></div></blockquote><div><br>
</div>
</div>