[rabbitmq-discuss] Memory watermark alert not resetting.

Raviv Pavel raviv at gigya-inc.com
Thu Oct 4 16:57:22 BST 2012


On a hunch I went ahead and upgraded the other node in the cluster; it
didn't start either.
Trying to start the first node again then worked, and now both are up.
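For anyone who hits the same boot failure (quoted below): the pattern seems
to be that when every disc node in a cluster is down, Mnesia on a starting
node waits to see another disc node, so the nodes have to come back in a
workable order (or close together). A minimal recovery sketch, assuming a
standard packaged install with init scripts; the node names are the ones
from this thread:

    # On mongo-qa1 -- bring back the disc node the other one is waiting for:
    service rabbitmq-server start

    # On mongo-qa2 -- this node can now find a running disc node and boot:
    service rabbitmq-server start

    # On either node -- verify both disc nodes rejoined:
    rabbitmqctl cluster_status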

Back to the original problem:
1. I set the watermark very low so I can reach it: effectively 78MB
2. server starts with empty queues and memory usage is 66MB
3. I start publishing messages and reach the watermark
4. publisher gets blocked
5. I purge the queue but memory stays the same
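To make step 1 concrete, here is a minimal sketch of the configuration
involved, assuming the stock /etc/rabbitmq/rabbitmq.config location. The
watermark is expressed as a fraction of installed RAM, so the 0.05 below is
illustrative -- on a machine with roughly 1.5GB it works out to about the
78MB threshold mentioned above:

    %% /etc/rabbitmq/rabbitmq.config
    %% The memory alarm fires once the broker's reported memory use
    %% exceeds this fraction of total system RAM, blocking publishers
    %% until usage drops back below it.
    [
      {rabbit, [
        {vm_memory_high_watermark, 0.05}
      ]}
    ].

In principle publishers should unblock as soon as reported memory falls back
under the threshold, which is what makes step 5 surprising.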

Thanks
--
Raviv



On Thu, Oct 4, 2012 at 5:38 PM, Raviv Pavel <raviv at gigya-inc.com> wrote:

> Upgraded the other node (there are two) and it fails to start.
> Here is the startup log:
>
>
> +---+   +---+
> |   |   |   |
> |   |   |   |
> |   |   |   |
> |   +---+   +-------+
> |                   |
> | RabbitMQ  +---+   |
> |           |   |   |
> |   v2.8.7  +---+   |
> |                   |
> +-------------------+
> AMQP 0-9-1 / 0-9 / 0-8
> Copyright (C) 2007-2012 VMware, Inc.
> Licensed under the MPL.  See http://www.rabbitmq.com/
>
> node           : rabbit@mongo-qa2
> app descriptor :
> /usr/lib/rabbitmq/lib/rabbitmq_server-2.8.7/sbin/../ebin/rabbit.app
> home dir       : /var/lib/rabbitmq
> config file(s) : (none)
> cookie hash    : pu0BlS9+G2N9yLfd51TkmA==
> log            : /var/log/rabbitmq/rabbit@mongo-qa2.log
> sasl log       : /var/log/rabbitmq/rabbit@mongo-qa2-sasl.log
> database dir   : /var/lib/rabbitmq/mnesia/rabbit@mongo-qa2
> erlang version : 5.8.4
>
> -- rabbit boot start
> starting file handle cache server
> ...done
> starting worker pool
>  ...done
> starting database                                                     ...
>
> BOOT FAILED
> ===========
>
> Error description:
>    {error,{failed_to_cluster_with,['rabbit@mongo-qa1'],
>                                   "Mnesia could not connect to any disc nodes."}}
>
> Log files (may contain more information):
>    /var/log/rabbitmq/rabbit@mongo-qa2.log
>    /var/log/rabbitmq/rabbit@mongo-qa2-sasl.log
>
> Stack trace:
>    [{rabbit_mnesia,init_db,3},
>     {rabbit_mnesia,init,0},
>     {rabbit,'-run_boot_step/1-lc$^1/1-1-',1},
>     {rabbit,run_boot_step,1},
>     {rabbit,'-start/2-lc$^0/1-0-',1},
>     {rabbit,start,2},
>     {application_master,start_it_old,4}]
>
> {"Kernel pid
> terminated",application_controller,"{application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{rabbit,failure_during_boot}}}}}"}
>
>
> Thanks
> --
> Raviv
>
>
>
> On Thu, Oct 4, 2012 at 5:34 PM, Simon MacMullen <simon at rabbitmq.com> wrote:
>
>> Hard to believe that's to do with the server not starting - can you post
>> a more complete log?
>>
>>
>> On 04/10/12 16:32, Raviv Pavel wrote:
>>
>>> After upgrading, the server won't start. From the logs:
>>>
>>> =ERROR REPORT==== 4-Oct-2012::17:11:59 ===
>>> ** Generic server <0.341.0> terminating
>>> ** Last message in was {'$gen_cast',
>>>                            {method,
>>>                                {'queue.declare',0,<<"es1">>,false,true,false,
>>>                                    false,false,
>>>                                    [{<<"x-ha-policy">>,longstr,<<"all">>}]},
>>>                                none,noflow}}
>>>
>>>
>>> Thanks
>>> --
>>> Raviv
>>>
>>>
>>>
>>> On Thu, Oct 4, 2012 at 5:05 PM, Raviv Pavel <raviv at gigya-inc.com> wrote:
>>>
>>>     All queues are empty and "rabbitmqctl list_queues name memory" shows
>>>     they use about 3K each.
>>>     VM memory stats are much lower than the ones shown in the overview.
>>>     We're using 2.8.1 - I'll try upgrading to 2.8.7
>>>
>>>     Thanks
>>>     --
>>>     Raviv
>>>
>>>
>>>
>>>     On Thu, Oct 4, 2012 at 4:57 PM, Matthias Radestock
>>>     <matthias at rabbitmq.com> wrote:
>>>
>>>         On 04/10/12 15:53, Simon MacMullen wrote:
>>>
>>>             On 04/10/12 13:01, Raviv Pavel wrote:
>>>
>>>                 Based on the management UI, memory usage doesn't drop
>>>                 and publishers are
>>>                 still blocked.
>>>
>>>
>>>             Hmm. In that case my first guess would be that the queue you
>>>             deleted /
>>>             purged wasn't the queue which was using all the memory.
>>>             Check the memory
>>>             use of other queues - this can be found on the queue details
>>>             page, or
>>>             with "rabbitmqctl list_queues name memory". Also, check the
>>>             VM-wide
>>>             memory statistics (on the node details page, under
>>>             "advanced"). This might give a clue as to where the
>>>             memory is going.
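>>>
>>>             For example, to see per-queue memory next to message and
>>>             consumer counts (all standard list_queues info items):
>>>
>>>                 rabbitmqctl list_queues name memory messages consumers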
>>>
>>>
>>>         ...and please make sure you are running the latest version of
>>>         RabbitMQ (2.8.7, atm), since we have fixed a few memory
>>>         leaks in recent versions.
>>>
>>>         Matthias.
>>>
>>>
>>>
>>>
>>
>> --
>> Simon MacMullen
>> RabbitMQ, VMware
>>
>
>