[rabbitmq-discuss] Java client Channel.Close blocking indefinitely

Iain Hull iain.hull at workday.com
Thu Sep 1 08:49:11 BST 2011


Hi Matthias,

Thanks for your response; yes, you are right. I reran the test last night
with 2.6.0 and the same thing happened.

Below is the output from the rabbit log file. It clearly shows an alarm
set at 19:28:20 and cleared at 23:45:45, set again at 23:45:46 and
cleared at 03:49:42, and finally set at 03:49:43 and never cleared.

This matches the activity in the test as well.

I am obviously overwhelming the RabbitMQ server; this is OK (it is the
purpose of my test), and this is running on my laptop, not a production
server.  Here are the details from the web interface:

Name                rabbit at IHULL-01    
File descriptors    install_handle_from_sysinternals / 1024
Socket descriptors  3 / 829    
Erlang processes    28323 / 32768 
Memory              1.0GB (?) 819.2MB high watermark
Uptime              16h 13m
Version             2.6.0 / R14B03
Type                Disc Stats *

I am very impressed that the web interface continues to work when the
server is not responding (this is good news for our Nagios scripts). 

The next questions are:
* How to prevent this happening to the server? I will start researching
this now, but any pointers would be appreciated.
* How to help the client react to this situation? Blocking indefinitely
is not very nice; my primary use case is best-effort delivery, and I
cannot block application threads. I need to log a failure and move on.
Either I move all RabbitMQ calls onto their own thread, or I find a way
to notice a potential blockage and drop messages early (see the sketch
below). Again, any suggestions on how people have handled this
situation would be appreciated.
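
For what it is worth, below is a rough sketch of the kind of publisher
I have in mind for the second option: all RabbitMQ calls happen on one
dedicated thread fed by a bounded queue, and messages are dropped (and
logged) when the queue fills up. The class and all of the names in it
are just my own illustration; the only RabbitMQ API it uses is
Channel.basicPublish.

import com.rabbitmq.client.Channel;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

/**
 * Best-effort publisher sketch: every RabbitMQ call happens on one
 * dedicated thread, so application threads never wait on a connection
 * that the broker has stopped reading.  When the broker blocks the
 * connection (e.g. a memory alarm), the queue fills up and new
 * messages are dropped and logged instead of blocking the caller.
 */
public class BestEffortPublisher {

    private final BlockingQueue<byte[]> pending =
            new ArrayBlockingQueue<byte[]>(10000);
    private final Channel channel;      // an already-open channel
    private final String exchange;
    private final String routingKey;
    private volatile boolean running = true;

    public BestEffortPublisher(Channel channel, String exchange, String routingKey) {
        this.channel = channel;
        this.exchange = exchange;
        this.routingKey = routingKey;
        Thread publisher = new Thread(new Runnable() {
            public void run() {
                drainLoop();
            }
        }, "rabbitmq-publisher");
        publisher.setDaemon(true);
        publisher.start();
    }

    /** Called from application threads; waits at most 50 ms, never longer. */
    public boolean tryPublish(byte[] body) {
        try {
            boolean queued = pending.offer(body, 50, TimeUnit.MILLISECONDS);
            if (!queued) {
                System.err.println("RabbitMQ backlog full, dropping message");
            }
            return queued;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    /** Runs on the dedicated thread; the only place the channel is used. */
    private void drainLoop() {
        while (running) {
            try {
                byte[] body = pending.poll(1, TimeUnit.SECONDS);
                if (body != null) {
                    // This call may block while the memory alarm is set,
                    // but only this thread is affected.
                    channel.basicPublish(exchange, routingKey, null, body);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            } catch (Exception e) {
                System.err.println("Publish failed: " + e);
            }
        }
    }

    public void stop() {
        running = false;
    }
}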

Regards,
Iain.

=INFO REPORT==== 31-Aug-2011::19:25:50 ===
    alarm_handler:
{set,{{vm_memory_high_watermark,'rabbit at IHULL-01'},[]}}

=INFO REPORT==== 31-Aug-2011::19:28:19 ===
vm_memory_high_watermark clear. Memory used:858765904 allowed:858993459

=INFO REPORT==== 31-Aug-2011::19:28:19 ===
    alarm_handler: {clear,{vm_memory_high_watermark,'rabbit at IHULL-01'}}

=INFO REPORT==== 31-Aug-2011::19:28:20 ===
vm_memory_high_watermark set. Memory used:874341656 allowed:858993459

=INFO REPORT==== 31-Aug-2011::19:28:20 ===
    alarm_handler:
{set,{{vm_memory_high_watermark,'rabbit at IHULL-01'},[]}}

=INFO REPORT==== 31-Aug-2011::23:45:45 ===
vm_memory_high_watermark clear. Memory used:857987848 allowed:858993459

=INFO REPORT==== 31-Aug-2011::23:45:45 ===
    alarm_handler: {clear,{vm_memory_high_watermark,'rabbit at IHULL-01'}}

=INFO REPORT==== 31-Aug-2011::23:45:46 ===
vm_memory_high_watermark set. Memory used:890960640 allowed:858993459

=INFO REPORT==== 31-Aug-2011::23:45:46 ===
    alarm_handler:
{set,{{vm_memory_high_watermark,'rabbit at IHULL-01'},[]}}

=INFO REPORT==== 1-Sep-2011::03:49:42 ===
vm_memory_high_watermark clear. Memory used:858210744 allowed:858993459

=INFO REPORT==== 1-Sep-2011::03:49:42 ===
    alarm_handler: {clear,{vm_memory_high_watermark,'rabbit at IHULL-01'}}

=INFO REPORT==== 1-Sep-2011::03:49:43 ===
vm_memory_high_watermark set. Memory used:872273912 allowed:858993459

=INFO REPORT==== 1-Sep-2011::03:49:43 ===
    alarm_handler:
{set,{{vm_memory_high_watermark,'rabbit at IHULL-01'},[]}}


-----Original Message-----
From: Matthias Radestock [mailto:matthias at rabbitmq.com] 
Sent: 31 August 2011 16:43
To: Iain Hull
Cc: rabbitmq-discuss at lists.rabbitmq.com
Subject: Re: [rabbitmq-discuss] Java client Channel.Close blocking
indefinitely

Iain,

On 31/08/11 10:46, Iain Hull wrote:
> During one of my tests a call to Channel.Close blocked and never
> returned.
> [...]
> I am not sure if this is relevant but I noticed some information in
> the logs indicating that the memory high_watermark was hit around the
> time the test stopped.

When the high watermark is hit, all connections on which a publish is 
received get blocked. So no data will make it through from these 
connections, and that includes Channel.Close.
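
(As an illustration only, one way the client side could bound that wait
is to run the close on a helper thread and give up after a timeout; the
helper class, executor and timeout below are assumptions of this
sketch, not part of the RabbitMQ Java client API.)

import com.rabbitmq.client.Channel;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ChannelCloser {

    private static final ExecutorService CLOSER =
            Executors.newSingleThreadExecutor();

    /**
     * Attempts channel.close() on a helper thread and gives up after
     * the given timeout, so the calling thread is never blocked
     * indefinitely by a connection the broker has stopped reading.
     */
    public static boolean closeQuietly(final Channel channel, long timeoutMillis) {
        Future<Void> result = CLOSER.submit(new Callable<Void>() {
            public Void call() throws Exception {
                channel.close();   // may block while the memory alarm is set
                return null;
            }
        });
        try {
            result.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            // The close is still stuck behind the blocked connection; stop
            // waiting but leave the task running in case the broker recovers.
            return false;
        } catch (Exception e) {
            return false;          // e.g. closing an already-dead channel
        }
    }
}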

The server should eventually recover from this, by paging messages to 
disk or delivering them to consumers. Did that happen in your case, 
i.e. did the logs show the alarm getting cleared?

Regards,

Matthias.

