[rabbitmq-discuss] Message sending/receiving overhead.
michael.laing at nytimes.com
Thu Aug 1 01:59:25 BST 2013
Have you experimented with prefetch?
Which client are you using, and which versions of RabbitMQ and Erlang?
Message size is important too, of course.
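To make the prefetch suggestion concrete: with a current (1.x) pika, prefetch is set per channel via `basic_qos`, which lets the broker push a window of messages to the client ahead of acks so the consumer never waits on the network between messages. A minimal sketch, assuming a broker on localhost and a hypothetical `requests` queue (not from the thread):

```python
import pika

# Assumed setup: local broker, hypothetical queue name.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="requests")

# Allow up to 50 unacknowledged messages to be buffered client-side,
# so the next message is already local when processing finishes.
channel.basic_qos(prefetch_count=50)

def on_message(ch, method, properties, body):
    # ... process body (~5ms) ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="requests", on_message_callback=on_message)
channel.start_consuming()
```

Tuning `prefetch_count` trades memory and fairness against throughput; with ~5ms processing, even a modest window hides a 1Gbit network round trip.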
We use the Python pika client in async mode and have consistent processing
times (fetch/process/publish) under 5ms.
In general, we have found event-loop-based approaches faster than threads,
especially on the small (cheap) machines we use in the cloud.
On Wed, Jul 31, 2013 at 6:16 PM, Pavel Kogan <pavel.kogan at cortica.com> wrote:
> Hi all,
> I have a simple subscriber that receives a message, processes it in a single
> thread, and publishes a response.
> Processing must be done in a single thread (a logic limitation), but it is very
> fast (~5ms), so I expect to be able to process 200 requests per second.
> However, publishing the response inline with processing slows things down, as
> network latency is considerable (even though it is an internal 1Gbit network).
> To resolve this, I moved response publishing to a separate thread (via an
> internal concurrent queue), and that helped.
> The question is about receiving overhead. Is there any pre-caching of
> messages, or should I also integrate an internal concurrent queue for requests?
> Many Thanks,
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
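The decoupling Pavel describes, handing responses to a separate publisher thread through an internal concurrent queue, can be sketched with Python stdlib primitives. Here `publish` is a stand-in for the real network-bound publish call, and the loop bound and message names are illustrative:

```python
import queue
import threading

published = []

def publish(message):
    # Stand-in for the real AMQP publish; this is the slow,
    # network-bound step we want off the processing thread.
    published.append(message)

out_queue = queue.Queue()
SENTINEL = object()  # signals the publisher thread to stop

def publisher_loop():
    # Drain responses and publish them, one at a time, so the
    # processing thread never blocks on network latency.
    while True:
        msg = out_queue.get()
        if msg is SENTINEL:
            break
        publish(msg)
        out_queue.task_done()

t = threading.Thread(target=publisher_loop, daemon=True)
t.start()

# Processing thread: fast single-threaded work; hand off each response.
for i in range(5):
    response = f"response-{i}"  # ~5ms of real processing here
    out_queue.put(response)

out_queue.put(SENTINEL)
t.join()
```

The same decoupling falls out naturally of an event-loop client (pika's async connection adapters), where publishes are queued on the loop rather than on an explicit thread.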