[rabbitmq-discuss] Direct vs Topic exchange - Question

Konstantin Kalin konstantin.kalin at gmail.com
Fri May 4 21:51:00 BST 2012

Hello, Simon.

Thank you for your help here. See inline. 

On May 4, 2012, at 3:11 AM, Simon MacMullen wrote:

>    amqp_channel:cast(Channel, Publish, Msg).
> Since this is pushing messages as fast as possible into the client channel process, you end up filling its mailbox as you can give it messages faster than it can send them out on the wire.

Yes, I know about this, which is why I implemented rate control on the client side. I thought amqp_channel:cast(Channel, Publish, Msg) was a better way to generate load :) Well, I have changed it to amqp_channel:call. Thank you for the advice.
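For reference, the change amounts to publishing synchronously, so the caller is throttled by the channel process instead of flooding its mailbox. A minimal sketch using the RabbitMQ Erlang client (the exchange and routing-key names here are illustrative, not taken from my test code):

```erlang
-include_lib("amqp_client/include/amqp_client.hrl").

%% Publish synchronously: amqp_channel:call/3 blocks until the channel
%% process has handled the publish, whereas amqp_channel:cast/3 returns
%% immediately and can grow the channel's mailbox without bound.
publish(Channel, Payload) ->
    Publish = #'basic.publish'{exchange    = <<"kkalin_test">>,
                               routing_key = <<"test.key">>},
    Msg = #amqp_msg{payload = Payload},
    ok = amqp_channel:call(Channel, Publish, Msg).
```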

> direct, 1 producer,   1 consumer:   ~25kHz
> topic,  1 producer,   1 consumer:   ~14kHz
> direct, 10 producers, 10 consumers: ~25kHz
>        (but queues grew since consumers could not keep up)
> topic,  10 producers, 10 consumers: ~16kHz
> on my workstation. This is very roughly what I would expect - with tiny messages quite a lot of the cost of each message is routing, and topic routing *is* more complex.

Well, that looks much better than what I got :) That is why I asked for guidance on what I'm doing wrong. I believe RabbitMQ is a great product, which is why I spend time on different tests to gain a better understanding of how it works, so that I can use RabbitMQ more effectively.
> So I'm afraid I don't really trust the results you got - how were you choosing a rate limit for the client? Too low and you're not driving the broker as hard as you can, and with that cast in place, too high and you'll start to eat all memory and eventually cause RabbitMQ to get swapped out.
Understood. Load-test results usually cause a lot of confusion and questions, since people build load tests for their particular cases.
Please don't take me wrong. I'm not blaming RabbitMQ; I would rather blame myself for these strange results.

Let me explain my test setup:
I have two physical servers, each with 16 AuthenticAMD CPUs and 16 GB of memory. One physical server runs RabbitMQ virtualized under KVM; the RabbitMQ VM has 8 CPUs and 4 GB of memory, and no other VMs run on that host. The second physical server runs the consumer/publisher code that you tried.

I repeated the test with direct and topic exchanges after modifying the code as mentioned above, with 10 consumers and 10 publishers for each exchange type. I started publishing using the following commands:
Test 1 - publisher:start_publish(<<"kkalin_test1">>, <<"topic">>, 20000, 20000, 10, 0).
Test 2 - publisher:start_publish(<<"kkalin_test">>, <<"direct">>, 20000, 20000, 10, 0). 

I got ~4k msg/s for the topic exchange and ~60k msg/s for the direct exchange :) See attached screenshots. I don't understand why, and it's hard to blame publisher_wrk for being slow only with the topic exchange: there is no logic in the code whose message rate depends on the exchange type. Same code, same machines, yet very different results. :(

Maybe you can suggest a few things I could check to find where the bottleneck is?
According to vmstat and htop, neither the RabbitMQ server nor the client server was loaded in the topic-exchange case. It looks as if RabbitMQ and the publisher are waiting for each other.

Thank you for your patience,

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screen Shot 2012-05-04 at 12.57.24 PM.png
Type: image/png
Size: 92029 bytes
Desc: not available
URL: <http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/attachments/20120504/cab52660/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screen Shot 2012-05-04 at 1.09.10 PM.png
Type: image/png
Size: 95550 bytes
Desc: not available
URL: <http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/attachments/20120504/cab52660/attachment-0001.png>
