[rabbitmq-discuss] memory usage

Alexis Richardson alexis.richardson at cohesiveft.com
Tue Feb 10 07:27:10 GMT 2009


Valentino

One question (not very helpful..)

Is this on your local system?  I recall you mentioned EC2 and would
like to ask whether it is running there.

alexis


On Tue, Feb 10, 2009 at 1:17 AM, Valentino Volonghi <dialtone at gmail.com> wrote:
>
> On Feb 9, 2009, at 3:12 AM, Philip Stubbings wrote:
>
>> Hi Matthias, Thank you for your reply.
>>
>> I used rabbitmqctl to determine the status of my queues and my
>> messages are flagged as persistent.
>> I have run the same test again, but this time I inject/consume a few
>> messages later on, as you suggested.
>> However, the memory footprint remains the same:
>
>
> I kind of have the same behavior; I'm not sure whether it was there with
> rabbitmq 1.5.0, though.
>
> Basically I have this configuration in a single Erlang node:
>
> One box                                 | Outside world
> ----------------------------------------|
>       geoip lookup                      |
>      /                                  |
>  mochiweb ---> rabbitmq ---> shovel     | ---> central rmq ---> consumers
>      \                                  |
>       app logic                         |
> ----------------------------------------|
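>
> The shovel leg is just the shovel consuming from the local queue and
> republishing to the central broker. As a rough sketch only (queue name,
> URIs and even the exact config keys here are placeholders, and may well
> differ in the shovel version I'm running), a static shovel definition
> looks something like:
>
>   {rabbitmq_shovel,
>    [{shovels,
>      [{edge_to_central,
>        [{sources,      [{broker, "amqp://localhost"}]},
>         {destinations, [{broker, "amqp://central.example.com"}]},
>         {queue,        <<"logs">>},
>         {prefetch_count, 100}]}]}]}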
>
>
> Basically the system works fine and is somewhat resilient to certain
> system failures. Under medium/high load, though (about 1000 messages per
> second at peak, each message about 600-800 bytes), memory usage starts to
> ramp up, and when it does come down it is only for a couple of seconds.
>
> The interesting thing is that I'm running this behind an haproxy load
> balancer, and the client that is under the most load shows this behavior
> while the other remains more or less constant at around 200MB (still a
> lot, but better than 1.8GB). Of course when kswapd starts running it
> takes up 100% of the CPU, every gen_server call times out, and the system
> is brought to its knees. I then stop everything and restart the server
> that showed the problems (I also have to stop the benchmark). Shovel then
> starts transmitting some logs back and apparently memory usage goes back
> to normal (about the same as the other instance that didn't show
> problems).
>
> Now... Through the haproxy stats interface I can see that 628790 requests
> were completed (and about 1500 had an error, due to the timeouts and so
> on, right during the failure). On the other hand, when I check the logs I
> received from those machines I only see 504400 log messages; about 124k
> of them are missing. If every message is about 600 bytes, that's roughly
> 74MB of data missing.
>
> If I list the mnesia dir on the server that failed I see the following:
>
> -rw-r--r-- 1 root root 83M 2009-02-10 00:41 rabbit_persister.LOG
> -rw-r--r-- 1 root root 79M 2009-02-10 00:38 rabbit_persister.LOG.previous
>
> So basically those 124k messages are there and are not being delivered,
> even though shovel is connected and is waiting for messages using
> basic.consume.
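>
> (If I could get rabbitmqctl to talk to that node - see below - I would
> expect something like "rabbitmqctl list_queues name messages_ready
> messages_unacknowledged memory" to show them still sitting in the queue;
> for now I can only infer it from the persister log sizes.)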
>
> On the other machine, where rabbitmq didn't fail, I instead see this:
>
> -rw-r--r-- 1 root root 6.9M 2009-02-10 00:36 rabbit_persister.LOG
> -rw-r--r-- 1 root root  29M 2009-02-10 00:35 rabbit_persister.LOG.previous
>
> which I suppose means that it delivered everything and its queue is
> empty.
>
> Considering that this seems to be a load issue, I take it it's because
> shovel (or the queue processes) doesn't get nearly enough CPU time from
> erlang to process the messages. If that is the case, even adding a second
> shovel gen_server wouldn't solve anything, because rabbitmq doesn't send
> anything.
>
> Unfortunately I can't use rabbitmqctl in this configuration, for some
> reason that I can't explain: I get {badrpc,nodedown}, and this time it's
> not related to the path from which I start rabbitmqctl; I suppose it's
> related to the fact that I start everything inside a single Erlang node.
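>
> (rabbitmqctl talks to the broker over Erlang distribution, so presumably
> it has to be pointed at whatever name my combined node is registered
> under, and share its cookie; something along the lines of
>
>   rabbitmqctl -n mynode@myhost status
>
> where mynode@myhost is just a placeholder for the real node name.)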
>
> Also, if this is really about not getting enough CPU time, I suppose that
> having one CPU per 'kind' of process in rabbitmq would help alleviate the
> issue. For example, on a quad core machine there would be a CPU for each
> of queue, channel, exchange, etc.
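>
> (In practice I guess that would mean making sure the VM runs one SMP
> scheduler per core, e.g. starting the node with something like
>
>   erl -smp enable +S 4 ...
>
> and letting the scheduler spread the rabbit processes across the cores.)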
>
> --
> Valentino Volonghi aka Dialtone
> Now running MacOS X 10.5
> Home Page: http://www.twisted.it
> http://www.adroll.com
>



