[rabbitmq-discuss] RabbitMQ cluster with potentially millions of connections

Alexis Richardson alexis at rabbitmq.com
Thu Feb 21 14:01:43 GMT 2013


"Any slow consumer issues will not come up as it does not access disc
for persistence"

How on earth does that make sense?  Slow consumer issues arise when
production rates exceed consumption rates.  Not accessing a disk means
that you are memory limited, which is the primary cause of slow
consumer failures.
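
For context: RabbitMQ's memory alarm will block publishers once the
broker crosses its memory watermark, and on the consuming side a
prefetch window at least keeps a slow consumer from buffering the
whole backlog. A minimal sketch with the Python pika client (queue
name and limits are illustrative, not from this thread):

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="events")

    # At most 50 unacknowledged deliveries in flight for this consumer;
    # the broker pauses delivery until earlier messages are acked, so a
    # slow consumer cannot pile up the entire queue client-side.
    ch.basic_qos(prefetch_count=50)

    def handle(ch_, method, properties, body):
        print(body)  # stand-in for real processing
        ch_.basic_ack(delivery_tag=method.delivery_tag)

    ch.basic_consume(queue="events", on_message_callback=handle)
    ch.start_consuming()

Prefetch caps the consumer's exposure; it does not make the backlog
disappear. If producers keep outrunning consumers, the queue still
grows until the memory alarm throttles the publishers.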

If you are going to post advertisements for commercial products,
please be accurate, and please quote all prices for hardware or
software.

Thanks

alexis


On Thu, Feb 21, 2013 at 12:28 PM,  <oxxyyd at hotmail.com> wrote:
> Hi there,
>
> If you need performance and scale: a single Solace appliance can handle
> 200k concurrent connections for web streaming per box and do 230k fully
> guaranteed messages/s or 11m reliable messages/s.  Any slow consumer issues
> will not come up as it does not access disc for persistence.  You can monitor
> the FPGA card without impacting performance.
> Not sure if it will fit into your architecture though...
>
> Cheers,
> D
>
>> On Tuesday, January 29, 2013 9:12:38 PM UTC+1, Sandy wrote:
>>
>> To RabbitMQ gurus - Could you please post the replies here instead of just
>> replying to the original person (Roman in this case). This is something of
>> interest to me as well and I would like to follow this thread. Thx!
>>
>> On Saturday, January 26, 2013 6:58:36 AM UTC-5, Romanas S. wrote:
>>>
>>> Hi guys,
>>>  I'm trying to come up with an architecture for a RabbitMQ cluster that
>>> could potentially handle millions of (persistent) connections. I've set up a
>>> cluster of Rabbits with 3 nodes (16GB RAM, 4x cpu cores each) and a Load
>>> Balancer in front of them, increased all of the system limits on sockets and
>>> file descriptors, but can't seem to scale to more than 30k connections on
>>> the entire cluster.
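>>>
>>> As a quick sanity check on those limits, Python's stdlib resource
>>> module shows what a process started from the broker's environment
>>> inherits (a rough check only; what matters is the limit the
>>> rabbitmq-server process itself runs with):
>>>
>>>     import resource
>>>
>>>     # Soft/hard caps on open file descriptors for this process.
>>>     # Every client connection costs the broker at least one
>>>     # descriptor, so the soft limit bounds accepted connections.
>>>     soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
>>>     print("nofile soft=%d hard=%d" % (soft, hard))
>>>
>>>     # Raise the soft limit to the hard ceiling if it is lower.
>>>     if soft < hard:
>>>         resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))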
>>>
>>> Each connection has its own exchange and a temporary direct queue; RAM
>>> usage is about 2-3GB on each node and CPU usage peaks at about 200% (which
>>> on 4 cores isn't all that bad). At first all is quite well and the cluster
>>> manages upwards of 1k msgs/second (I haven't tried more, pretty sure it
>>> would handle it). However, after scaling to that many connections, the
>>> cluster seems to grind to a halt and eventually nodes become unresponsive.
>>> The management API takes upwards of 5 (FIVE!) minutes to retrieve a server
>>> summary report. Am I missing something here? Or is this just a bad idea
>>> altogether? I was expecting that if each node was able to handle 30k+
>>> connections, I could have a cluster of 50 or so (with multiple balancers, of
>>> course!) and live a happy life  ;-)
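>>>
>>> For reference, each connection's setup is roughly this shape (a pika
>>> sketch; the exchange and routing-key names here are placeholders):
>>>
>>>     import pika
>>>
>>>     conn = pika.BlockingConnection(pika.ConnectionParameters("lb-host"))
>>>     ch = conn.channel()
>>>
>>>     # One direct exchange per connection ...
>>>     ch.exchange_declare(exchange="conn-42", exchange_type="direct")
>>>
>>>     # ... and a server-named, exclusive queue that the broker deletes
>>>     # automatically when the connection goes away.
>>>     q = ch.queue_declare(queue="", exclusive=True).method.queue
>>>     ch.queue_bind(queue=q, exchange="conn-42", routing_key="updates")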
>>>
>>> Thanks,
>>> Roman
>
>
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>

