[rabbitmq-discuss] ulimit and cgroup
Michael Klishin
mklishin at gopivotal.com
Thu Mar 13 09:18:23 GMT 2014
On 13 Mar 2014, at 12:47, ratheesh kannoth <ratheesh.ksz at gmail.com> wrote:
> This is the case even if we don't use durable queues...right ?
This is true for non-persistent messages, too.
>
>> There should be no giant files, though, on disk
>> message store will split them into multiple smaller files (16 MB by default, AFAIR).
>
> Will all these files be opened and kept open, or opened on demand?
> I ask because the Linux ulimit can set a limit on the number of open files.
They are created (and opened) as needed. However, during compaction (garbage
collection of messages on disk), the number of files may temporarily grow to
up to twice the usual count.
It is rarely a good idea to set a low limit on how many files RabbitMQ (or any other
infrastructure service, e.g. PostgreSQL or Cassandra) can open. In the best case,
RabbitMQ will stop accepting new connections when the limit is hit (and log
messages saying so).
In general, RabbitMQ tries to keep the number of file descriptors it uses low.
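If anything, the usual advice is to raise the open-file limit for the user RabbitMQ
runs as, not to lower it. A sketch of the common limits.conf settings (the user name
and the 65536 value are illustrative examples, not official recommendations):

```
# /etc/security/limits.conf -- raise the open-file limit for the rabbitmq user
# (requires pam_limits; takes effect on the next login/session)
rabbitmq  soft  nofile  65536
rabbitmq  hard  nofile  65536
```

You can verify the effective limit in the shell RabbitMQ starts from with `ulimit -n`.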
>
>> There is a fixed RAM cost (a few dozen bytes) per message unless you use a
>> custom queue index plugin, such as rabbitmq-toke [1].
>
> Can you list other plugins (and tuning parameters) through which
> I can tune the broker's system resource usage?
http://www.rabbitmq.com/configure.html#configuration-file
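The guide above covers the full list. For resource usage specifically, the two knobs
most people reach for are the memory high watermark and the disk free space limit.
A sketch of a rabbitmq.config in the classic Erlang-terms format (the values shown
are illustrative, not recommendations):

```erlang
%% rabbitmq.config -- illustrative values only
[
  {rabbit, [
    %% Start throttling publishers when the node uses
    %% more than 40% of system RAM (0.4 is the default)
    {vm_memory_high_watermark, 0.4},

    %% Start throttling publishers when free disk space
    %% on the message store partition drops below ~1 GB
    {disk_free_limit, 1000000000}
  ]}
].
```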
>
>> If RabbitMQ cannot allocate memory or write to a file, of course, there is little it can do, so be
>> careful about setting RSS and file size limit.
> Will it shut down gracefully? Can you predict the behavior? Or will it just crash?
If it fails to write to a file, the OS process won’t crash. If the VM fails to allocate
memory, it will exit and RabbitMQ will have no chance to shut down cleanly.
Note that graceful shutdown involves disk I/O, too, so a file write failure can
affect it as well.
--
MK
Software Engineer, Pivotal/RabbitMQ