[rabbitmq-discuss] RabbitMQ scaling with OpenStack
ask at rabbitmq.com
Tue Jan 22 13:58:34 GMT 2013
On Jan 22, 2013, at 1:43 PM, Ask Solem <ask at rabbitmq.com> wrote:
> On Jan 20, 2013, at 5:47 AM, Ray Pekowski <pekowski at gmail.com> wrote:
>> Thanks for commenting.
>> On Fri, Jan 18, 2013 at 17:14 GMT, Alexis Richardson
>> <alexis at rabbitmq.com> wrote:
>>> Some of the numbers you saw seem oddly low, and I wonder if there is
>>> more going on that is easily fixed. The config of Rabbit in OpenStack
>>> may be very borked.
>> Some possible things that could make the results lower than you would
>> expect are:
>> 1) All machines are virtual machines, including the RabbitMQ servers.
>> Virtualization typically costs around 40%.
>> 2) All VMs were only assigned 2 CPUs. The CPUs are Intel Xeon E5620 @ 2.4 GHz.
>>> Can I ask what clients you used? It may be worth testing a few different ones.
>> OpenStack uses kombu. Unless there is a client with a compatible
>> API, I don't think it would be easy to change.
> If by an "RPC call" you mean a simple request-reply, then 35/s is indeed terribly low;
> even with persistent messages you should be able to do a few thousand per second (depending
> on hardware). With non-persistent messages and kombu+librabbitmq as the underlying transport,
> I'm able to do 20,000 messages/s on my laptop...
And oh, I just noticed you said it creates a new queue for every response.
This is not a very efficient way to implement RPC, and there's a further problem.
In my benchmarks I suddenly noticed queue_declare calls taking several seconds to complete,
something which did not happen in earlier RabbitMQ versions.
I discussed it with other rabbit developers and they told me this could
be caused by the broker having to do fsyncs, and it may be more likely
if the queue is declared with the auto_delete flag.
I could help you review the code as I have lots of experience optimizing
this pattern and Python consumers.