[rabbitmq-discuss] Disk/Memory Usage with RabbitMQ
Matthias Radestock
matthias at lshift.net
Fri Dec 21 20:28:12 GMT 2007
Barry,
Barry Pederson wrote:
> I've been experimenting with RabbitMQ 1.2.0 on a FreeBSD 6.2 box,
> installed using the Generic Unix instructions from
> http://www.rabbitmq.com/install.html - (not doing any clustering), and
> am seeing my memory and disk usage growing pretty steadily as I'm
> starting to make use of persistent messages and durable queues. I
> should also mention I'm using my alternate Python client library.
>
> In my mnesia/rabbit directory, rabbit_persister.LOG has grown to 23Mb,
> and "top" shows the SIZE of the process to be 113M, up from about 18M
> when the daemon first starts.
How are you consuming messages from the queues? Delivered messages
(persistent or not) need to be acknowledged explicitly with basic.ack
unless the noAck flag for basic.get/basic.consume was set to true.
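To illustrate, a consumer callback that sends the explicit acknowledgment might look like the sketch below. The attribute names (`body`, `delivery_tag`, `channel`) are assumptions in the style of typical Python AMQP clients, not an exact API reference:

```python
def on_message(msg):
    """Handle one delivery, then acknowledge it explicitly.

    `msg` is assumed to expose `body`, `delivery_tag`, and its
    originating `channel`; the exact attribute names depend on
    the client library in use.
    """
    print("received:", msg.body)
    # Without this ack (and with no_ack=False on basic.consume),
    # the broker keeps the message as unacknowledged and will
    # redeliver it after a channel close or server restart.
    msg.channel.basic_ack(msg.delivery_tag)
```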
Also, what happens when you do one of the following:
- close and re-open the channel on which messages have been consumed
- close and re-open the connection on which messages have been consumed
- restart the server
and then start consuming messages from the same queues? Do you get back
messages that you have received before? If so, that would be an
indication that you are not sending acknowledgments.
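The redelivery check can be automated: AMQP sets a `redelivered` bit on messages the broker delivers a second time. A rough sketch, again with assumed client-API names (`basic_get` returning None on an empty queue is an assumption):

```python
def count_redeliveries(channel, queue, limit=100):
    """Drain up to `limit` messages with basic.get and count how
    many carry the AMQP `redelivered` flag.

    A non-zero count after a clean channel/connection restart
    means earlier deliveries were never acknowledged.
    """
    redelivered = 0
    for _ in range(limit):
        msg = channel.basic_get(queue)  # assumed to return None when empty
        if msg is None:
            break
        if msg.redelivered:
            redelivered += 1
        # Ack as we go so this check does not itself leave
        # unacknowledged messages behind.
        msg.channel.basic_ack(msg.delivery_tag)
    return redelivered
```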
> I found some earlier messages in Google about rabbit memory usage, for
> example:
>
> http://www.nabble.com/beam-instances-and-growing-memory-usage-td13805341.html
>
> and used
>
> rabbit_amqqueue:stat_all().
>
> To look at the queues, and just see
>
> [{ok,{resource,<<"/">>,queue,<<"syslog_interpret">>},0,1}]
>
> Which I take to mean just one queue named 'syslog_interpret' with zero
> messages and 1 consumer?
Correct, though the message count does not include unacknowledged
delivered messages.
> Does having a large and growing rabbit_persister.LOG file indicate
> messages are piling up somewhere?
Yes.
> Are there any other Erlang commands for examining the system to see
> where any logjams may be?
Try

  [process_info(P) || P <- processes(),
                      process_info(P, memory) > {memory, 100000}].

which will list the details of all processes with a memory consumption
greater than 100000 bytes.
> Would publishing persistent messages to a durable exchange with no
> queues bound to it at all (yet) cause the messages to stick around?
You might be on to something there. My own testing shows a surprising
growth in memory and the persister log in this scenario. I will do some
more digging.
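For reference, the publishing side of that scenario looks roughly like this. The helper name and message representation are hypothetical; delivery-mode 2 is the actual AMQP flag that marks a message persistent:

```python
def publish_persistent(channel, exchange, body):
    """Publish a persistent message (delivery-mode 2 in AMQP).

    If `exchange` is durable but has no queues bound to it, the
    message is unroutable; whether the broker discards it or
    accumulates persister state is exactly the behaviour under
    investigation here.
    """
    msg = {"body": body, "delivery_mode": 2}  # 2 = persistent
    channel.basic_publish(msg, exchange=exchange, routing_key="")
```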
Matthias.