So I am using the management interface to verify that memory utilization is what I expect. We do not page messages to disk (we set the paging watermark to 1.1), so the management plugin should be reporting a number close to message size * number of messages.<br>
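Concretely, the setting in question looks like this in rabbitmq.config (a fragment, the rest of our config elided; a ratio above 1.0 puts the paging threshold above the memory high watermark itself, so it is never reached):

```erlang
%% rabbitmq.config (fragment) -- with a paging ratio above 1.0, the
%% paging threshold sits above the memory high watermark, so messages
%% are never paged to disk for memory-pressure reasons.
[
  {rabbit, [
    {vm_memory_high_watermark_paging_ratio, 1.1}
  ]}
].
```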
<br>On Sunday, February 16, 2014, Alvaro Videla <<a href="mailto:videlalvaro@gmail.com">videlalvaro@gmail.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
What do you mean by RabbitMQ inaccurately reporting the memory that it is using?<br>
<br>
On Sun, Feb 16, 2014 at 8:57 PM, Greg Poirier <<a>greg.poirier@opower.com</a>> wrote:<br>
> If Rabbit inaccurately reports the amount of memory it's using, how are we<br>
> to provision systems? With some understanding of normal load, however<br>
> suboptimal the use case may be, we should have an understanding of memory<br>
> requirements... particularly when Rabbit provides a reporting mechanism to<br>
> confirm our expected memory usage.<br>
><br>
> I guess I am confused.<br>
><br>
><br>
> On Sunday, February 16, 2014, Alvaro Videla <<a>videlalvaro@gmail.com</a>> wrote:<br>
>><br>
>> Hi,<br>
>><br>
>> Keep in mind that there's a small memory footprint per message, even if<br>
>> the message has been paged to disk.<br>
>><br>
>> Regards,<br>
>><br>
>> Alvaro<br>
>><br>
>> On Sun, Feb 16, 2014 at 8:28 PM, Greg Poirier <<a>greg.poirier@opower.com</a>><br>
>> wrote:<br>
>> > They are many small messages. Each node in the cluster has 8 gigs but is<br>
>> > only using maybe 2. Is memory really the problem?<br>
>> ><br>
>> ><br>
>> > On Sunday, February 16, 2014, Michael Klishin<br>
>> > <<a>michael.s.klishin@gmail.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >><br>
>> >> 2014-02-16 22:32 GMT+04:00 Greg Poirier <<a>greg.poirier@opower.com</a>>:<br>
>> >>><br>
>> >>> In our current configuration, we have a 3-node cluster with 2 disc<br>
>> >>> nodes and 1 ram node, with HA mirroring to all nodes in the cluster.<br>
>> >>> In periods of high utilization of the cluster, we are noticing<br>
>> >>> frequent partitioning. We have narrowed it down to this particular<br>
>> >>> use case, as none of our other clusters (running on the same<br>
>> >>> physical hardware with the same cluster configuration) experience<br>
>> >>> this kind of partitioning.<br>
>> >>><br>
>> >>> Is there some better way that we can configure RabbitMQ to handle<br>
>> >>> this kind of load pattern? I understand this is perhaps not the<br>
>> >>> best way to use RabbitMQ, but it is unavoidable for the time being.<br>
>> >>> Any suggestions would be appreciated.<br>
>> >><br>
>> >><br>
>> >> Short answer is: give it more RAM.<br>
>> >><br>
>> >> Relevant blog posts:<br>
>> >><br>
>> >><br>
>> >><br>
>> >> <a href="http://blog.travis-ci.com/2013-08-08-solving-the-puzzle-of-scalable-log-processing/" target="_blank">http://blog.travis-ci.com/2013-08-08-solving-the-puzzle-of-scalable-log-processing/</a><br>
>> >><br>
>> >><br>
>> >> <a href="http://www.rabbitmq.com/blog/2014/01/23/preventing-unbounded-buffers-with-rabbitmq/" target="_blank">http://www.rabbitmq.com/blog/2014/01/23/preventing-unbounded-buffers-with-rabbitmq/</a><br>
>> >> --<br>
>> >> MK<br>
>> >><br>
>> >> <a href="http://github.com/michaelklishin" target="_blank">http://github.com/michaelklishin</a><br>
>> >> <a href="http://twitter.com/michaelklishin" target="_blank">http://twitter.com/michaelklishin</a><br>
>> ><br>
>> ><br>
>> > _______________________________________________<br>
>> > rabbitmq-discuss mailing list<br>
>> > <a>rabbitmq-discuss@lists.rabbitmq.com</a><br>
>> > <a href="https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
>> ><br>
</blockquote>