<div dir="ltr">Hi ,<div><br></div><div>Simon - Thanks for comparitive values and some more insight into my questions. The discussion is going in a direction to understand </div><div><br></div><div>1) how good or bad my pika clients efficiciency is and then how exactly my system resources are used by both client and rabbit server.</div>
<div><br></div><div>So far I can only see that if I send 100 bytes of payload, the maximum throughput I get is around 30,000 msg/sec and the maximum bandwidth usage is 160-170 Mbps. I am trying to understand what is happening in the system with RAM and the other resources.</div>
<div><br></div><div>As you mentioned, I see comparatively better bandwidth usage with 500 bytes and 1000 bytes: they are in the range of 350 Mbps and 580 Mbps respectively, which I don't know is appropriate or not.</div><div><br></div>
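<div>As a rough back-of-the-envelope check of the 100-byte numbers (using the per-message frame sizes from the breakdown in your mail below; the confirm/ack byte counts are only my guess), something like this:</div><div><br></div>
<pre>
# Rough estimate of wire bandwidth from message rate and payload size.
# The ~155 bytes of AMQP/TCP/IP/Ethernet framing per publish comes from
# Simon's breakdown below; I assume the deliver side costs about the same,
# and the confirm/ack overhead per message is only a guess.
FRAMING_PER_MSG = 155     # framing around each message, excluding payload
CONFIRM_AND_ACK = 120     # assumed cost of confirm + ack frames per message

def estimated_mbps(msg_rate, payload_bytes):
    publish = payload_bytes + FRAMING_PER_MSG
    deliver = payload_bytes + FRAMING_PER_MSG
    bytes_per_sec = msg_rate * (publish + deliver + CONFIRM_AND_ACK)
    return bytes_per_sec * 8 / 1000000.0

print(estimated_mbps(30000, 100))   # ~151 Mbps, close to the 160-170 Mbps I see
</pre>
<div><br></div>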
<div>Do you know if one can saturate an 8 Gbps link with RabbitMQ? </div><div>I mean, could the fastest client really show bandwidth usage of 2, 3 or 5 Gbps, given that 20 CPU cores and 10 GB of RAM (or even more RAM) are given exclusively to RabbitMQ?</div>
<div><br></div><div>My goal is to understand, and hopefully to pinpoint, what the bottleneck could be in the environment I am currently running. </div><div><br></div><div>For Michael's earlier question on the number of cores, here is some more info. </div>
<div><br></div><div>I have 10 physical cores and they are hyperthreaded, which is how I get 20 cores. I have two such machines. So then:</div><div><br></div><div>Machine-1, with 20 cores, runs exclusively my Producer & Receiver based on pika (publisher sketched below)</div>
<div>Machine-2, again with 20 cores, runs exclusively the RabbitMQ server</div><div><br></div><div>I use the KVM hypervisor to create the virtual machines, and through this I control the CPU core, RAM and disk resources for both the client and the RabbitMQ server.</div>
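<div><br></div><div>For reference, my publisher is essentially a loop of this shape (a simplified sketch, not my exact script; the host name, queue name and message count are placeholders):</div><div><br></div>
<pre>
import pika

# Simplified sketch of the publish side of my test
# (host, queue and message count are placeholders).
connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbit-vm'))
channel = connection.channel()
channel.queue_declare(queue='perf_test')
channel.confirm_delivery()        # publisher confirms enabled, as in my test

payload = 'x' * 100               # 100-byte payload
for i in range(100000):
    channel.basic_publish(exchange='',
                          routing_key='perf_test',
                          body=payload)

connection.close()
</pre>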
<div><br></div><div>I noticed one more thing which may be worth mentioning. When I run </div><div><br></div><div>dstat --top-latency (which I understand shows the process with the highest latency), it displays the 'rcu_sched' process most of the time. After some googling, I learnt that this is Read-Copy-Update scheduling, available in Linux kernels newer than 2.6.x (I guess). It is a kind of improvement which allows memory to be read while it is being updated. I don't know much about it.</div>
<div><br></div><div>Thank you again!</div><div><br></div><div>Best Regards,</div><div>Priyanki.</div><div><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Aug 16, 2013 at 6:10 PM, Simon MacMullen <span dir="ltr"><<a href="mailto:simon@rabbitmq.com" target="_blank">simon@rabbitmq.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I'm not completely sure where this discussion is going :-) but I'll pitch in anyway...<div class="im"><br>
<br>
On 16/08/13 10:33, Priyanki Vashi wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
In my wireshark traces, I see following frames for every message I send.<br>
I have publisher confirm and consumer ack enabled with prefetch_count<br>
not set.<br>
</blockquote>
<br></div>
Of course publisher confirm is optional. Also if the broker is busy enough it can start to issue acks with multiple=true. So you don't have to pay the cost of an ack for every message published.<div class="im"><br>
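With pika's asynchronous API the confirm handler can take advantage of that. Roughly (just a sketch; the exact confirm_delivery() callback signature differs between pika versions, and the queue is assumed to exist already):<br>
<br>
<pre>
import pika

# Sketch: handling broker-issued publisher confirms, where a single
# Basic.Ack with multiple=True confirms a whole batch of publishes.
# (Exact callback signatures differ between pika versions.)
def on_delivery_confirmation(frame):
    method = frame.method               # pika.spec.Basic.Ack or Basic.Nack
    if method.multiple:
        # everything up to and including this delivery tag is confirmed
        print('confirmed up to %d' % method.delivery_tag)
    else:
        print('confirmed %d' % method.delivery_tag)

def on_channel_open(channel):
    channel.confirm_delivery(on_delivery_confirmation)
    for i in range(1000):
        channel.basic_publish(exchange='', routing_key='perf_test',
                              body='x' * 100)

def on_connection_open(connection):
    connection.channel(on_open_callback=on_channel_open)

connection = pika.SelectConnection(pika.ConnectionParameters(),
                                   on_open_callback=on_connection_open)
connection.ioloop.start()
</pre>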
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
content header = 88 bytes ( does not show any direct info so not sure)<br>
basic.publish frame = 85 bytes ( shows details of queue, exchange,<br>
routing key etc)<br>
content body = 137 bytes ( in this my payload is 100 bytes of string +<br>
37 bytes might be checksum etc.)<br>
</blockquote>
<br></div>
I don't see this much. Publishing with:<br>
<br>
$ runjava.sh com.rabbitmq.examples.MulticastMain -C 1 -y0 -s 100<br>
<br>
(i.e. publish 1 message of 100 bytes, don't consume anything, then stop)<br>
<br>
I get a 255 byte (Ethernet) frame sent for the publish:<br>
<br>
14 bytes Ethernet II header<br>
20 bytes IPv4 header<br>
32 bytes TCP header<br>
59 bytes AMQP basic.publish method (variable size, including exchange name and routing key)<br>
22 bytes content header (i.e. minimal basic.properties)<br>
108 bytes content (100 bytes payload)<br>
<br>
So I suspect there's something different with your client and / or test app. Note that things like whether you set anything in basic.properties can make quite a difference here; the AMQP encoding is designed to be shorter in the common case.<br>
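For example (a rough pika illustration; the property values are arbitrary):<br>
<br>
<pre>
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters())
channel = connection.channel()
channel.queue_declare(queue='perf_test')

# The same 100-byte payload published twice: once with no basic.properties
# set, and once with several set. The second publish produces a noticeably
# larger content header frame on the wire.
channel.basic_publish(exchange='', routing_key='perf_test',
                      body='x' * 100)

props = pika.BasicProperties(content_type='text/plain',
                             delivery_mode=2,
                             headers={'source': 'perf-test'})
channel.basic_publish(exchange='', routing_key='perf_test',
                      body='x' * 100, properties=props)

connection.close()
</pre>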
<br>
For most people the framing costs per message are not excessive. I don't think there is any particular waste there. If the costs are excessive for you, then MQTT would offer lower (but still present) per-message framing costs.<br>
<br>
But you can't expect to send tiny messages and fill your network bandwidth anyway. Each message's framing has a fixed cost to decode, and then each message has a fixed cost for routing logic as well. (Even with fanout exchanges we still have to look up bindings after all.)<br>
<br>
tl;dr: you'll notice much higher bandwidth utilisation if you start sending longer messages.<br>
<br>
Cheers, Simon<span class="HOEnZb"><font color="#888888"><br>
<br>
-- <br>
Simon MacMullen<br>
RabbitMQ, Pivotal<br>
</font></span></blockquote></div><br></div></div>