[rabbitmq-discuss] blocked producers
michael.s.klishin at gmail.com
Thu Jul 18 23:19:30 BST 2013
2013/7/19 Kobi Biton <kbiton at outbrain.com>
> this is to our understanding
> in order to protect the broker and the consumer, as the producers are
> faster than the consumer?
Yes, because messages have a certain RAM cost.
> The problem is that at some point (when the READY messages count is high ~
> 1M) rabbitmq blocks most of our producers and does not seem to release
> them , until we restart them (logstash daemon) on every one of them , we
> tried purging the queue / restarting rabbitmq , only restarting the
> producers seems to bring things to normal state.
It's driven by the memory watermark and/or the free disk space limit. The
RabbitMQ log should make it clear which limit (RAM or disk) was reached.
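Both thresholds are configurable. A minimal sketch of the relevant
rabbitmq.config entries (the values here are illustrative, not
recommendations for your workload):

```erlang
[
  {rabbit, [
    %% raise the publisher-blocking alarm when the broker uses
    %% more than 40% of installed RAM (the default)
    {vm_memory_high_watermark, 0.4},
    %% raise the alarm when free disk space drops below ~1 GB (in bytes)
    {disk_free_limit, 1000000000}
  ]}
].
```

When either alarm fires, the broker stops reading from publishing
connections until usage drops back below the threshold.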
> I guess my questions are:
> - Is my problem on the consumer side ? I am unable to debug the consumer
> speed or state
Yes, consumers do not keep up. Try with a larger number of them.
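To see why adding consumers helps drain a backlog, here is a stand-alone
sketch with plain Python threads and a queue standing in for the broker;
`drain`, its timings, and the message counts are made up for illustration:

```python
import queue
import threading
import time

def drain(backlog, n_consumers, per_message_cost=0.001):
    """Drain a backlog with n_consumers workers; return elapsed seconds."""
    def worker():
        while True:
            try:
                backlog.get_nowait()
            except queue.Empty:
                return
            time.sleep(per_message_cost)  # simulated per-message work
            backlog.task_done()

    start = time.time()
    threads = [threading.Thread(target=worker) for _ in range(n_consumers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

def make_backlog(n):
    q = queue.Queue()
    for i in range(n):
        q.put(i)
    return q

one = drain(make_backlog(200), n_consumers=1)
four = drain(make_backlog(200), n_consumers=4)
print(one, four)  # four workers should finish noticeably faster
```

The same reasoning applies to real consumers: as long as processing time
per message dominates, running more of them raises the drain rate.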
> - Can I tune rabbitmq for lots of connections and high message rate?
It's not really about the number of connections or message rates; the
throttling is driven by the total RAM and disk limits mentioned above.
> - We use fanout exchange , when a consumer creates a new queue under this
> exchange and does not consume fast enough can he effect the producers from
> the other queue (i.e cause them to be blocked?)
All connections that attempt to publish anything while one of the limits is
exceeded will be throttled.
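The effect on publishers resembles writing into a bounded queue: once the
limit is hit, every producer blocks until the consumer frees space. A
minimal stand-in sketch (stdlib only, no RabbitMQ involved):

```python
import queue
import threading

broker = queue.Queue(maxsize=5)  # stand-in for the broker's memory limit

def producer(n):
    for i in range(n):
        broker.put(i)  # blocks ("throttled") while the queue is full

def consumer(n, out):
    for _ in range(n):
        out.append(broker.get())  # each get frees a slot, unblocking puts
        broker.task_done()

received = []
p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20, received))
p.start(); c.start()
p.join(); c.join()
print(len(received))  # 20: nothing is lost, the producer is just paced
```

Note the producer here unblocks as soon as the consumer catches up; if your
producers stay blocked after the alarm clears, that points at the client
(logstash) side rather than the broker.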
You can reduce the RAM footprint of individual messages by using an
alternative message store plugin that keeps the message index in Tokyo
Cabinet.