[rabbitmq-discuss] millions of unack'd messages in a day -- disk store instead of ram?

Brian Whitman brian at echonest.com
Thu Apr 30 16:11:08 BST 2009


Just FYI, to follow up on this: I created a 32 GB swap on an EBS volume
attached to a mid-level Amazon EC2 instance (8 GB RAM).
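For reference, the swap setup would look roughly like the following. This is a sketch, not from the original post; the device name /dev/sdf is an assumption (it depends on how the EBS volume was attached), and the commands require root:

    # mkswap /dev/sdf     (write a swap signature to the hypothetical EBS device)
    # swapon /dev/sdf     (enable the device as swap space)
    # swapon -s           (verify the new swap is active and sized as expected)

To persist across reboots, the device would also need an entry in /etc/fstab with the "sw" type.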

I was able to hammer Rabbit with far more messages than I've ever been able
to store. Once it crossed into swap, amazingly, write speed suffered only
slightly (probably because the old messages were swapped out, not the new
ones), but get speed did suffer. The important part is that the server
stayed up, even after using 20 GB of virtual RAM (millions of large
messages).

I did not alter Rabbit or its configuration in any way; I am using the
version from apt. In fact, that memory-warning config parameter was not
recognized by Rabbit, and it logged a warning about it at startup.




On Mon, Apr 27, 2009 at 11:51 AM, Brian Whitman <brian at echonest.com> wrote:

> On Mon, Apr 27, 2009 at 10:04 AM, Alexis Richardson <
> alexis.richardson at gmail.com> wrote:
>
>> Brian
>> You may find this blog post relevant:
>>
>> http://www.lshift.net/blog/2009/04/02/cranial-surgery-giving-rabbit-more-memory
>>
>
>
> Ah great, putting EBS swap behind it is probably a good temporary
> solution. I'll give that a go and run some tests. Thanks for the pointer.
> I wouldn't recommend anyone use S3 for this: we max out at about 40-50
> "keys"/files a second from other EC2 instances.
>
>
>

