[rabbitmq-discuss] Latency of publish confirm
Pierpaolo Baccichet
pierpaolo at dropbox.com
Tue Dec 4 15:30:27 GMT 2012
Hello Gavin,
I am using the select-based adapter (with a layer on top to manage
reconnection logic across the nodes in the cluster and to provide
thread safety). I instrumented the code right around the select call on
the socket, and as far as I can tell the client side is fine. As per my
last email, it definitely looks like some timeout is being triggered with
mirrored queues.
Pier
On Tue, Dec 4, 2012 at 7:21 AM, Gavin M. Roy <gmr at meetme.com> wrote:
> Out of curiosity, which adapter are you using? There's a loop in
> BlockingConnection which will stop what a producer is doing to make sure
> that there are no pending RPC requests from RabbitMQ. For your type of
> test, I'd make sure you're using one of the async adapters to be 100% sure
> it's not on pika's side.
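>
> Something along these lines would take BlockingConnection out of the
> picture (rough sketch only: the host and queue name are placeholders, it
> assumes the queue already exists, and the confirm-callback details vary a
> bit between pika versions):
>
> import time
> import pika
>
> send_times = {}
>
> def on_confirm(frame):
>     # Basic.Ack carries the delivery tag of the confirmed publish;
>     # this sketch ignores the 'multiple' flag for simplicity.
>     tag = frame.method.delivery_tag
>     latency_ms = (time.time() - send_times.pop(tag)) * 1000.0
>     print("confirm for %i took %f ms" % (tag, latency_ms))
>
> def on_channel_open(channel):
>     channel.confirm_delivery(callback=on_confirm)
>     for i in range(1, 51):
>         send_times[i] = time.time()  # delivery tags start at 1 and increment
>         channel.basic_publish(exchange='', routing_key='latency_test',
>                               body='x' * 100,
>                               properties=pika.BasicProperties(delivery_mode=1))
>
> def on_open(connection):
>     connection.channel(on_open_callback=on_channel_open)
>
> connection = pika.SelectConnection(pika.ConnectionParameters('localhost'),
>                                    on_open_callback=on_open)
> try:
>     connection.ioloop.start()
> except KeyboardInterrupt:
>     connection.close()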
>
> On Tuesday, December 4, 2012 at 10:11 AM, Pierpaolo Baccichet wrote:
>
> Hello Matthias,
>
> Thanks for your quick response!
>
> I double-checked the code, and I am indeed not marking messages as
> persistent. The queues and the exchanges are declared durable, but the
> individual messages I send do not set delivery_mode=2.
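>
> For reference, the publish path boils down to something like this
> (sketched with BlockingConnection for brevity; the queue name is a
> placeholder and I'm using the default exchange here):
>
> import pika
>
> connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
> channel = connection.channel()
> channel.queue_declare(queue='latency_test', durable=True)  # durable queue
> channel.confirm_delivery()                                 # publisher confirms on
> channel.basic_publish(exchange='',
>                       routing_key='latency_test',
>                       body='x' * 100,
>                       properties=pika.BasicProperties(delivery_mode=1))  # transient, not delivery_mode=2
> connection.close()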
>
> I am a little skeptical that the issue here is syncing to disk, because
> adding producers does not change the behavior. I ran a test with 5
> producers sending 10 messages per second each and I am still seeing
> exactly the same results. Each producer observes latencies that are
> multiples of 31 milliseconds (though based on a wireshark capture, this
> latency seems to be dominated by the 25 milliseconds we see on the
> RabbitMQ side). Example output from the producer side is below (unix
> timestamp, then the confirm latency in milliseconds); the timing loop
> itself is sketched after it:
>
> 1354633782.0697601 - completing send 15 took 63.736915588378906
> 1354633782.233757 - completing send 16 took 63.80009651184082
> 1354633782.3976469 - completing send 17 took 63.717842102050781
> 1354633782.5615449 - completing send 18 took 63.707828521728516
> 1354633782.725692 - completing send 19 took 63.929080963134766
> 1354633782.8579049 - completing send 20 took 31.997919082641602
> 1354633783.0219419 - completing send 21 took 63.837051391601562
> 1354633783.1538589 - completing send 22 took 31.718969345092773
> 1354633783.285862 - completing send 23 took 31.77189826965332
> 1354633783.4498329 - completing send 24 took 63.776016235351562
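>
> The measurement is just wall-clock time around each confirmed publish,
> along these lines (sketched with BlockingConnection for brevity; the
> queue name is a placeholder):
>
> import time
> import pika
>
> connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
> channel = connection.channel()
> channel.confirm_delivery()  # basic_publish now waits for the broker's ack
>
> for i in range(15, 25):
>     start = time.time()
>     channel.basic_publish(exchange='', routing_key='latency_test',
>                           body='x' * 100,
>                           properties=pika.BasicProperties(delivery_mode=1))
>     # latencies come out as multiples of ~31 ms (63.7, 31.7, ...)
>     print("%f - completing send %i took %f" % (time.time(), i,
>                                                (time.time() - start) * 1000.0))
>     time.sleep(0.1)  # roughly 10 messages per second per producer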
>
> Also, in my previous email I forgot to specify my environment: I am
> running RabbitMQ 3.0 on Erlang R15B02, with Python and pika 0.9.8 on the
> client side.
>
> Pierpaolo
>
>
> On Tue, Dec 4, 2012 at 1:22 AM, Matthias Radestock <matthias at rabbitmq.com> wrote:
>
> On 04/12/12 08:55, Matthias Radestock wrote:
>
> I am guessing your messages are marked as persistent. The 25ms is indeed
> an aggregation interval, but for the disk (in particular fsyncs) rather
> than the network.
>
>
> However, fsyncs also happen when queues and the storage sub-system go
> idle, so the interval only kicks in when the system is busy (thus ensuring
> that fsyncs aren't delayed indefinitely).
>
> So I am pretty sure what you are seeing is simply the cost of performing
> an fsync per message. There's nothing that can be done about that except
> buying faster disks / switching to SSDs.
>
> Regards,
>
> Matthias.