[rabbitmq-discuss] Pika Question (publish and consume "safe" as fast as possible)
josh at gebaschtel.ch
Mon Jul 18 21:38:57 BST 2011
I'm aware of that, but it's supposed to work cross-platform while keeping its script character.
Sorry, I have to rephrase: it's less a question of throughput than of programmer approach. Is there a simple way to avoid dealing with the buffers myself?
With py-amqp I could set the transaction flag to make sure the publisher doesn't exceed its capabilities, and likewise on the consumer. How do I achieve that in pika?
I just find the setup with a blocking publisher and a select consumer strange, but it's the only configuration that has worked stably for me so far without running into buffer warnings.
From: Marek Majkowski [mailto:majek04 at gmail.com]
Sent: Monday, 18 July 2011 17:06
To: Josh Geisser
Subject: Re: [rabbitmq-discuss] Pika Question (publish and consume as fast as possible)
On Mon, Jul 18, 2011 at 14:39, Josh Geisser <josh at gebaschtel.ch> wrote:
> Anyway, I want to write a little AMQP ping tool which should allow us to test various delivery patterns over a RabbitMQ cluster.
> For that I want to have the possibility to publish and consume as fast as possible, also kind of benchmarking the system.
You are aware that Python is not the fastest language, right?
My rough guess would be that for a decent client you should be able to
get something like 1k - 6k msgs/sec for simple messaging patterns.
> I made some progress getting into pika and think I understand the idea of TCP backpressure, but I find it a bit hard to implement on the end-user side. (I found the old idea of tx_select and tx_commit easier for preventing overflow...)
> Long story short, I ended up using 'BlockingConnection' for publishing and 'SelectConnection' for consuming; otherwise I always ran into buffer warnings and eventually crashes ....
> Could someone provide me with a quick snippet that publishes/consumes as fast as possible, the way you'd implement it?
> Cheers & thanks a lot
> (pika in my case latest git-version, Python 2.7.2)