[rabbitmq-discuss] Messages bigger than 150kb

Thomas Spycher thomas.spycher at tech.swisssign.com
Thu Oct 4 11:34:10 BST 2012

I understand! I see the problem now.

Thanks a lot for shedding some light on this!


On Oct 4, 2012, at 10:50 , Michael Klishin <michael.s.klishin at gmail.com> wrote:

> 2012/10/4 Thomas Spycher <thomas.spycher at tech.swisssign.com>
> timeStart = time.time()
> self.rpcchannel.basic_publish(
>     exchange='',
>     routing_key='secureObjects',
>     properties=pika.BasicProperties(
>         reply_to=self.rpcqueue,
>         correlation_id=self._correlation_id,
>         content_type="text/plain",
>         delivery_mode=2),
>     body=serializedObject)
> print time.time() - timeStart
> This does not measure actual end-to-end throughput, only how fast Pika can push things down the socket.
> In that case, of course, more data will take more time. In fact, Pika may even delay sending the data
> (I am not familiar with its internals, but it is event loop-based AFAIR).
> So what you are seeing is most likely Pika's internal buffer filling up once you go above 150K,
> in which case it has to do extra work, unlike with tiny messages. The same can be true for buffers in the layers below (the OS buffers network I/O, drivers may do the same, etc.).
> When benchmarking a messaging system's throughput, you need to do it end-to-end, with [at least] 2 machines, a realistic
> network link between them, and an understanding of how exactly your client does publishing.
> TCP settings like Nagle's algorithm [1] and phenomena like TCP incast [2] can make a noticeable difference in some cases.
> 1. http://boundary.com/blog/2012/05/02/know-a-delay-nagles-algorithm-and-you/
> 2. http://www.snookles.com/slf-blog/2012/01/05/tcp-incast-what-is-it/
> -- 
> MK
> http://github.com/michaelklishin
> http://twitter.com/michaelklishin
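
The distinction above can be illustrated without a broker at all. The sketch below is an in-process analogy (plain `queue` and `threading`, not pika; all names such as `broker`, `inbox`, `acks`, and `DELIVERY_DELAY` are invented for illustration): it times the same payload twice, once around the publish call alone, and once round-trip until the message has actually been delivered.

```python
import queue
import threading
import time

# Hypothetical stand-in for a broker: a worker thread that "delivers"
# each message after a fixed transit delay.
DELIVERY_DELAY = 0.05  # seconds of simulated network + broker latency

inbox = queue.Queue()   # publish side: messages buffered for "the broker"
acks = queue.Queue()    # consume side: messages that actually arrived

def broker():
    while True:
        msg = inbox.get()
        if msg is None:
            return
        time.sleep(DELIVERY_DELAY)  # simulated transit time
        acks.put(msg)

threading.Thread(target=broker, daemon=True).start()

body = b"x" * 200_000  # a ~200 KB payload

# 1. Timing only the publish call (what the original snippet measured):
start = time.time()
inbox.put(body)                 # returns as soon as the message is buffered
publish_only = time.time() - start
acks.get()                      # drain this delivery before the next run

# 2. Timing end to end: publish, then block until the message arrives.
start = time.time()
inbox.put(body)
acks.get()
end_to_end = time.time() - start

inbox.put(None)                 # shut the worker down
print("publish-only: %.4fs  end-to-end: %.4fs" % (publish_only, end_to_end))
```

The publish-only number stays tiny regardless of the delay, because `put()` returns as soon as the message is buffered; only the round-trip measurement reflects the transit time. The same logic applies to `basic_publish`, which returns once the data is handed to Pika's buffers, not once the broker or a consumer has it.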
