[rabbitmq-discuss] blocked producers
Michael Klishin
michael.s.klishin at gmail.com
Thu Jul 18 23:19:30 BST 2013
2013/7/19 Kobi Biton <kbiton at outbrain.com>
> this is, to our understanding,
> in order to protect the broker and the consumer, as the producers are
> faster than the consumers?
>
Yes, because each message carries a certain RAM cost.
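To give a feel for that cost, here is a back-of-envelope sketch (the per-message overhead figure is an assumption for illustration, not a measured RabbitMQ number):

```python
# Back-of-envelope estimate of the RAM a backlog of ready messages can
# consume. The 200-byte broker bookkeeping overhead is an assumed figure.

def backlog_ram_bytes(message_count, avg_payload_bytes, per_message_overhead_bytes):
    """Total RAM for a backlog, given an assumed per-message broker overhead."""
    return message_count * (avg_payload_bytes + per_message_overhead_bytes)

# ~1M ready messages with 1 KiB payloads and an assumed ~200 B of broker
# bookkeeping per message already add up to more than 1 GiB:
total = backlog_ram_bytes(1_000_000, 1024, 200)
print(total / 2**30)  # roughly 1.14 GiB
```

So a backlog in the millions can push the broker toward its memory watermark on payload size alone.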
> The problem is that at some point (when the READY message count is high,
> ~1M) RabbitMQ blocks most of our producers and does not seem to release
> them until we restart them (the logstash daemon) on every one of them. We
> tried purging the queue and restarting RabbitMQ; only restarting the
> producers seems to bring things back to a normal state.
>
It's driven by the memory watermark and/or available disk space. See
http://www.rabbitmq.com/memory.html
The RabbitMQ log should make it clear which limit (RAM or disk) was reached.
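For reference, both limits are configurable in rabbitmq.config; a minimal sketch (the values here are illustrative defaults, not tuning recommendations for your workload):

```erlang
[
  {rabbit, [
    %% block publishers once RabbitMQ uses more than 40% of system RAM
    {vm_memory_high_watermark, 0.4},
    %% block publishers once free disk space drops below ~1 GB
    {disk_free_limit, 1000000000}
  ]}
].
```

Raising the watermark only buys time; if consumers never catch up, the alarm will trip again at the new limit.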
>
> I guess my questions are:
>
> - Is my problem on the consumer side ? I am unable to debug the consumer
> speed or state
>
Yes, your consumers are not keeping up. Try running a larger number of them.
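To make the "not keeping up" point concrete, a tiny rate simulation (purely illustrative, no RabbitMQ involved; all rates are assumed numbers): the backlog grows whenever the aggregate consume rate is below the publish rate, and an existing backlog only drains once enough consumers are added.

```python
# Illustrative queue-depth model (no broker involved): backlog grows while
# producers outpace consumers, and drains once consumers are scaled up.

def simulate_backlog(publish_rate, per_consumer_rate, consumers, seconds, start=0):
    """Return the ready-message count after `seconds` at the given rates."""
    net = publish_rate - per_consumer_rate * consumers
    return max(0, start + net * seconds)

# 5000 msg/s in, one consumer draining 1000 msg/s: ~14.4M ready after an hour.
print(simulate_backlog(5000, 1000, consumers=1, seconds=3600))  # 14400000

# Six consumers outpace the producers, so an existing 1M backlog drains fully.
print(simulate_backlog(5000, 1000, consumers=6, seconds=3600, start=1_000_000))  # 0
```

The real fix is on the right-hand side of that inequality: add consumers (or speed them up) until their aggregate rate exceeds the publish rate.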
> - Can I tune rabbitmq for lots of connections and high message rate?
>
It's not really about the number of connections or message rates. See
http://www.rabbitmq.com/memory.html
> - We use a fanout exchange. When a consumer creates a new queue bound to this
> exchange and does not consume fast enough, can it affect the producers on
> the other queue (i.e. cause them to be blocked)?
>
All connections that attempt to publish anything while one of the limits is
hit will be throttled.
You can reduce the RAM footprint of individual messages by using
https://github.com/rabbitmq/rabbitmq-toke
which stores the message index in Tokyo Cabinet. Yes, it does not even have a
README :(
--
MK
http://github.com/michaelklishin
http://twitter.com/michaelklishin