[rabbitmq-discuss] How to handle extremely large queues

Greg Poirier greg.poirier at opower.com
Sun Feb 16 21:04:03 GMT 2014


So I am using the management interface to verify that the memory
utilization is as I expect. We do not page messages to disk (we set the
paging watermark ratio to 1.1), so the management plugin should be
reporting a number close to message size * number of messages.
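
The watermark being described here corresponds to RabbitMQ's
vm_memory_high_watermark_paging_ratio setting: paging normally starts at
watermark * ratio, so a ratio above 1.0 pushes the paging threshold past
the memory alarm and effectively disables paging. A minimal sketch in the
classic Erlang-terms rabbitmq.config format (illustrative values, not
necessarily the poster's actual file):

[
  {rabbit, [
    %% Memory alarm fires at this fraction of installed RAM (0.4 is the default).
    {vm_memory_high_watermark, 0.4},
    %% Paging to disk normally starts at watermark * ratio; with a ratio above
    %% 1.0 the alarm fires first, so message bodies stay in RAM.
    {vm_memory_high_watermark_paging_ratio, 1.1}
  ]}
].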

On Sunday, February 16, 2014, Alvaro Videla <videlalvaro at gmail.com> wrote:

> What do you mean by RabbitMQ inaccurately reporting the memory that it
> is using?
>
> On Sun, Feb 16, 2014 at 8:57 PM, Greg Poirier <greg.poirier at opower.com>
> wrote:
> > If Rabbit inaccurately reports the amount of memory it's using, how
> > are we to provision systems? With some understanding of normal load,
> > however suboptimal the use case may be, we should have an
> > understanding of memory requirements... particularly when RabbitMQ
> > provides a reporting mechanism to confirm our expected memory usage.
> >
> > I guess I am confused.
> >
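
As a sketch of the kind of check being described, using the management
HTTP API (the host and credentials are placeholders; field names follow
the 3.x management API, and this assumes message payloads are what
should dominate memory):

#!/usr/bin/env python
# Compare "expected" memory (bytes of message payload sitting in queues)
# against what the broker reports per node. Host and credentials below
# are placeholders; adjust for your environment.
import requests

API = "http://rabbit-host:15672/api"   # hypothetical hostname
AUTH = ("guest", "guest")              # placeholder credentials

queues = requests.get(API + "/queues", auth=AUTH).json()
expected = sum(q.get("message_bytes", 0) for q in queues)

nodes = requests.get(API + "/nodes", auth=AUTH).json()
reported = sum(n.get("mem_used", 0) for n in nodes)

print("payload bytes sitting in queues: %d" % expected)
print("mem_used reported across nodes:  %d" % reported)
# The gap between the two covers per-message and per-queue bookkeeping,
# mirrors, connections, the management database itself, and so on; it
# will never be zero, but it should not dwarf the payload figure either.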
> >
> > On Sunday, February 16, 2014, Alvaro Videla <videlalvaro at gmail.com>
> wrote:
> >>
> >> Hi,
> >>
> >> Keep in mind that there's a small memory footprint per message, even
> >> if the message has been paged to disk.
> >>
> >> Regards,
> >>
> >> Alvaro
> >>
> >> On Sun, Feb 16, 2014 at 8:28 PM, Greg Poirier <greg.poirier at opower.com>
> >> wrote:
> >> > They are many small messages. Each node in the cluster has 8 gigs
> >> > but is only using maybe 2. Is memory really the problem?
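
For a rough sense of scale (the per-message overhead figure here is an
assumption for illustration; the real number depends on the RabbitMQ
version and on whether the queue is mirrored):

  10,000,000 messages x ~100 bytes of per-message bookkeeping  ≈  1 GB
  10,000,000 messages x   1 KB of payload kept in RAM          ≈ 10 GB

So with tens of millions of small messages the bookkeeping alone becomes
a meaningful share of an 8 GB node, and mirroring repeats the payload
cost on every node that carries a mirror.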
> >> >
> >> >
> >> > On Sunday, February 16, 2014, Michael Klishin
> >> > <michael.s.klishin at gmail.com>
> >> > wrote:
> >> >>
> >> >>
> >> >> 2014-02-16 22:32 GMT+04:00 Greg Poirier <greg.poirier at opower.com>:
> >> >>>
> >> >>> In our current configuration, we have a 3-node cluster with 2 disc
> >> >>> and 1 ram node, with HA mirroring to all nodes in the cluster. In
> >> >>> periods of high utilization of the cluster, we are noticing frequent
> >> >>> partitioning. We have narrowed it down to this particular use case,
> >> >>> as none of our other clusters (running on the same physical hardware
> >> >>> with the same cluster configuration) experience this kind of
> >> >>> partitioning.
> >> >>>
> >> >>> Is there some better way that we can configure RabbitMQ to handle
> >> >>> this kind of load pattern? I understand this is perhaps not the best
> >> >>> way to use RabbitMQ, but it is unavoidable for the time being. Any
> >> >>> suggestions would be appreciated.
> >> >>
> >> >>
> >> >> Short answer is: give it more RAM.
> >> >>
> >> >> Relevant blog posts:
> >> >>
> >> >> http://blog.travis-ci.com/2013-08-08-solving-the-puzzle-of-scalable-log-processing/
> >> >>
> >> >> http://www.rabbitmq.com/blog/2014/01/23/preventing-unbounded-buffers-with-rabbitmq/
> >> >> --
> >> >> MK
> >> >>
> >> >> http://github.com/michaelklishin
> >> >> http://twitter.com/michaelklishin
> >> >
> >> >
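
The "HA mirroring to all nodes" setup described in the quoted thread is
normally expressed as a policy, and the second blog post linked above is
(as I recall) about bounding queue length so a slow consumer cannot build
an unbounded backlog. A sketch with rabbitmqctl; the policy names, queue
pattern, and limit are illustrative:

# Mirror every queue to all nodes in the cluster (the configuration the
# original poster describes; name and pattern are illustrative).
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'

# Bound the length of a known-to-grow queue (max-length needs RabbitMQ
# 3.1+). Only one policy applies to a given queue, so the limit sits in
# the same definition as ha-mode here; the queue name and the limit are
# placeholders.
rabbitmqctl set_policy bounded "^big-queue$" '{"ha-mode":"all","max-length":1000000}'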