[rabbitmq-discuss] millions of unack'd messages in a day -- disk store instead of ram?

Alexis Richardson alexis.richardson at gmail.com
Thu Apr 30 16:16:54 BST 2009


Brian

That is pretty excellent.  Do you think you might be tempted to blog about it?

alexis


On Thu, Apr 30, 2009 at 4:11 PM, Brian Whitman <brian at echonest.com> wrote:
> Just FYI, to follow up on this:
> I created a 32GB swap on an EBS volume on an Amazon mid-level instance
> (8GB RAM).
> I was able to hammer Rabbit with far more messages than I've ever been
> able to store. Once it crossed into swap, write speed amazingly suffered
> only slightly (probably because the old messages were swapped out, not
> the new ones), but get speed did suffer. The important part is that the
> server stayed up, even after using 20GB of virtual RAM (millions of
> large messages).
> I did not alter Rabbit or its configuration in any way; I am using the
> version in apt. As a matter of fact, the memory-warning config parameter
> was not recognized by Rabbit and would put a warning in the log when it
> started.
>
>
>
> On Mon, Apr 27, 2009 at 11:51 AM, Brian Whitman <brian at echonest.com> wrote:
>>
>> On Mon, Apr 27, 2009 at 10:04 AM, Alexis Richardson
>> <alexis.richardson at gmail.com> wrote:
>>>
>>> Brian
>>> You may find this blog post relevant:
>>>
>>> http://www.lshift.net/blog/2009/04/02/cranial-surgery-giving-rabbit-more-memory
>>
>> Ah great, putting an EBS swap behind it is probably a good temporary
>> solution. I'll give that a go and run some tests. Thanks for the pointer.
>> I wouldn't recommend anyone use S3 for this -- we max out at about 40-50
>> "keys"/files a second from other EC2 instances.
>>
>
>
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> http://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>
>
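The swap setup Brian describes can be sketched roughly as follows. This is a
hypothetical sketch, not his exact commands: the device name /dev/sdf is an
assumption, the EBS volume is assumed to be already created and attached to
the instance, and the commands require root.

```shell
# Hypothetical sketch of enabling an EBS-backed swap volume (assumes the
# volume is already attached as /dev/sdf; run as root).
mkswap /dev/sdf    # write a swap signature onto the volume
swapon /dev/sdf    # enable it as swap; the kernel can now page out to EBS
swapon -s          # verify: the device should be listed with its full size
```

With swap enabled, an 8GB-RAM instance can hold far more queued messages than
physical memory allows, at the cost of paging latency once the working set
spills over -- which matches the fast-write/slow-get behaviour Brian reports.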



