[rabbitmq-discuss] chan.flow and vm_memory_high_watermark

Nicolás César nico at nicocesar.com
Wed Jun 9 15:50:07 BST 2010

On 8 June 2010 17:04, Nicolás César <nico at nicocesar.com> wrote:

> 2010/6/8 Matthew Sackman <matthew at rabbitmq.com>
>> Hi Nicolás,
>> On Tue, Jun 08, 2010 at 08:33:24AM -0300, Nicolás César wrote:
>> (..)
>> > What am I doing wrong?
>> When you hit the memory watermark, the *server* sends to *you* the
>> channel.flow{active=false} which you're meant to acknowledge by sending
>> back a channel.flow_ok{active=false}. However, depending on which python
>> client you're using, this may not be implemented - I'm aware Tony's
>> produced an experimental patch for pyamqplib, and I'm not sure about the
>> status of flow control in Pika.
> Thanks Matthew for the excellent response. You're probably talking about
> this:
> http://code.google.com/p/py-amqplib/issues/detail?id=19
>> I think what you're doing is sending a channel.flow to the server, which
>> tells the server whether or not to send messages to you, not the
>> other way around.
> Thanks for the explanation. I'll look at that patch and get things going
> smoothly.
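As a side note, the handshake Matthew describes can be sketched roughly like this. `FlowController` and its method names are purely illustrative, not the actual py-amqplib or Pika API; the point is only the protocol logic: the broker sends channel.flow{active=false}, the client must echo channel.flow_ok{active=false} and stop publishing.

```python
# Illustrative sketch of client-side channel.flow handling.
# FlowController, on_flow and publish are hypothetical names,
# not a real client library's API.

class FlowController:
    """Tracks whether the broker has asked us to pause publishing."""

    def __init__(self):
        self.active = True       # True means publishing is allowed
        self.sent_frames = []    # frames we would send back to the broker

    def on_flow(self, active):
        """Broker sent channel.flow{active=...}; echo it with flow_ok."""
        self.active = active
        # The client must acknowledge with the same 'active' value:
        self.sent_frames.append(("channel.flow_ok", active))

    def publish(self, message, block_on_flow_control=True):
        """Publish, or block/raise while the broker has flow paused."""
        if not self.active:
            if block_on_flow_control:
                return "blocked"  # a real client would wait here
            raise RuntimeError("publishing paused by channel.flow")
        return "published"
```

With `block_on_flow_control=True` the publisher simply stalls while the alarm is raised; with `False` it raises immediately, which matches the two behaviours described below.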

I've applied the patch and it works just fine. If I use chan.publish( ...,
block_on_flow_control=True ) I get the expected result (and if I set it to
False I get a convenient exception).

But now I've hit the "vm_memory_high_watermark set" alarm: every producer is
blocked, but the memory is still allocated (using >400M for 50 empty queues),
and has been for more than an hour.

When will the vm_memory_high_watermark alarm be cleared? Can I force
something to get that memory back to normal? (I refuse to shut down rabbit,
since I'm debugging to get this into production.)
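For reference, the threshold the alarm is measured against is set in rabbitmq.config (Erlang terms); the alarm only clears once the broker's memory use drops back below this fraction of system RAM:

```erlang
%% rabbitmq.config -- 0.4 (40% of installed RAM) is the default
%% fraction at which the vm_memory_high_watermark alarm is raised.
[{rabbit, [{vm_memory_high_watermark, 0.4}]}].
```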


Nico César
