Found something interesting. I tried temporarily disabling the mirroring policy for my test queue and now I am consistently seeing 7 to 7.5 milliseconds of latency. It seems the issue is triggered by the mirroring of the queue.
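
(For context, a sketch of how such a mirroring policy can be set and cleared with rabbitmqctl in RabbitMQ 3.0; the policy name "ha-test" and the queue pattern are illustrative, not the ones actually used here:)

    # mirror queues matching the pattern across all nodes
    rabbitmqctl set_policy ha-test "^test\." '{"ha-mode":"all"}'

    # remove the policy to temporarily disable mirroring
    rabbitmqctl clear_policy ha-test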

Pier

On Tue, Dec 4, 2012 at 7:11 AM, Pierpaolo Baccichet <pierpaolo@dropbox.com> wrote:
Hello Matthias,

Thanks for your quick response!

I double-checked the code to make sure that I am not marking messages as persistent, and indeed that's the case. The queues and the exchanges are declared as durable, but the individual messages I send do not set delivery_mode=2.
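
(A minimal pika sketch of the publish path described above; the queue name "test.queue" is illustrative:)

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # the queue itself is durable ...
    channel.queue_declare(queue="test.queue", durable=True)

    # ... but the message is transient: delivery_mode=1 (the default),
    # so no per-message persistence should be required
    channel.basic_publish(exchange="",
                          routing_key="test.queue",
                          body="ping",
                          properties=pika.BasicProperties(delivery_mode=1))

    connection.close()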

I am a little skeptical that the issue here is syncing to disk, because adding producers does not change the behavior. I ran a test with 5 producers each sending 10 messages per second and I am still seeing exactly the same results. Each producer observes latencies that are multiples of 31 milliseconds (though based on a Wireshark capture, this latency seems to be dominated by the 25 milliseconds we see on the RabbitMQ side). Example output of what I am seeing on the producer side is below:

1354633782.0697601 - completing send 15 took 63.736915588378906
1354633782.233757 - completing send 16 took 63.80009651184082
1354633782.3976469 - completing send 17 took 63.717842102050781
1354633782.5615449 - completing send 18 took 63.707828521728516
1354633782.725692 - completing send 19 took 63.929080963134766
1354633782.8579049 - completing send 20 took 31.997919082641602
1354633783.0219419 - completing send 21 took 63.837051391601562
1354633783.1538589 - completing send 22 took 31.718969345092773
1354633783.285862 - completing send 23 took 31.77189826965332
1354633783.4498329 - completing send 24 took 63.776016235351562

Also, in my previous email I forgot to specify my environment. I am running RabbitMQ 3.0 on Erlang R15B02, with Python and pika 0.9.8 on the client side.
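
(A rough reconstruction of a producer loop that would print output in this style; the use of publisher confirms is an assumption, made so that each publish has a measurable round trip to the broker:)

    import time
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="test.queue", durable=True)  # assumed queue name
    channel.confirm_delivery()  # assumed: basic_publish blocks until the broker confirms

    for i in range(25):
        start = time.time()
        channel.basic_publish(exchange="",
                              routing_key="test.queue",
                              body="payload %d" % i)
        elapsed_ms = (time.time() - start) * 1000
        print("%s - completing send %d took %s" % (time.time(), i, elapsed_ms))
        time.sleep(0.1)  # 10 messages per second per producer

    connection.close()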

Pierpaolo

On Tue, Dec 4, 2012 at 1:22 AM, Matthias Radestock <matthias@rabbitmq.com> wrote:
On 04/12/12 08:55, Matthias Radestock wrote:
> I am guessing your messages are marked as persistent. The 25ms is indeed
> an aggregation interval, but for the disk (in particular fsyncs) rather
> than the network.

However, fsyncs also happen when queues and the storage sub-system go idle, so the interval only kicks in when the system is busy (thus ensuring that fsyncs aren't delayed indefinitely).

So I am pretty sure what you are seeing is simply the cost of performing an fsync per message. There's nothing that can be done about that except buying faster disks / switching to SSDs.
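
(To make the per-message fsync cost concrete, a small standalone sketch that times one fsync per write versus one fsync per batch; the numbers depend entirely on the disk and are purely illustrative:)

    import os
    import time

    def ms_per_write(path, writes, sync_every):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        start = time.time()
        for i in range(writes):
            os.write(fd, b"x" * 512)
            if (i + 1) % sync_every == 0:
                os.fsync(fd)  # force the data out of the OS cache onto the disk
        os.close(fd)
        return (time.time() - start) * 1000.0 / writes

    # an fsync after every write vs. a single fsync for the whole batch
    print("per-write fsync: %.2f ms/write" % ms_per_write("/tmp/fsync_test", 100, 1))
    print("batched fsync:   %.2f ms/write" % ms_per_write("/tmp/fsync_test", 100, 100))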

Regards,

Matthias.