[rabbitmq-discuss] Poor performance using a single RabbitMQ connection on high-latency networks

Matthew Sackman matthew at lshift.net
Mon Feb 1 14:50:37 GMT 2010

On Mon, Feb 01, 2010 at 02:59:16PM +0100, Holger Hoffstaette wrote:
> On Mon, 01 Feb 2010 13:33:16 +0000, Matthew Sackman wrote:
> I've always been curious why the default values were so small - any good
> reasons? What I find even more curious is why fixing the server side
> affects TCP window handling, which should normally be negotiated by the
> client. IIRC that only works properly when the server does not explicitly
> configure anything and uses the OS defaults.

Hmm. Our defaults are
SERVER_ERL_ARGS="+K true +A30 \
-kernel inet_default_listen_options
[{nodelay,true},{sndbuf,16384},{recbuf,4096}] \
-kernel inet_default_connect_options [{nodelay,true}]"

so you've increased the send buffer by a factor of 4, and the receive
buffer by a factor of 16. I'm sure we had reasons for setting them as
they are, but I'm afraid I don't know what they were. We have recently
done some investigation into the effect of different buffer sizes in the
Java client, but nothing comparable on the server side. I wonder whether
Matthias or Tony may recall why our server options are set as they are.
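(As a rough aside, not from the original thread: buffer sizing on
high-latency links is usually reasoned about via the bandwidth-delay
product, i.e. the number of bytes in flight needed to keep the pipe
full. A minimal sketch, with hypothetical link figures:)

```python
def bdp_bytes(bandwidth_bits_per_sec, rtt_seconds):
    """Return the bandwidth-delay product in bytes: the amount of
    unacknowledged data a TCP connection must buffer to keep a link
    of the given bandwidth and round-trip time fully utilised."""
    return int(bandwidth_bits_per_sec * rtt_seconds / 8)

# Hypothetical example: a 100 Mbit/s link with a 100 ms round trip.
# 100_000_000 * 0.100 / 8 = 1_250_000 bytes (~1.2 MB) -- far above
# the 4 KB default recbuf, which explains the poor throughput seen
# on high-latency networks.
print(bdp_bytes(100_000_000, 0.100))
```

On a LAN with sub-millisecond RTTs the same calculation gives only a
few kilobytes, which is presumably why the small defaults go unnoticed
there.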

> > We have fixed the problem by setting send/receive buffers on the
> > server side.
> > 
> > -kernel inet_default_listen_options
> > [{nodelay,true},{sndbuf,65535},{recbuf,65535}]
> Good to know - I wanted to suggest that next as I always do that on my
> rabbits even on LAN.

Indeed, I'm glad that's fixed. I'll raise a bug internally to have a
look at these options again. It's really a matter of providing some
sensible defaults - I'm sure that on every platform, things can be
improved by specific tunings, but it's a case of finding something that
works well out of the box.
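(For anyone wanting to try the larger buffers in the meantime: one way
is to override SERVER_ERL_ARGS in your RabbitMQ environment file -- the
file location and override mechanism vary by installation, so check
yours. The buffer values are the ones from the fix quoted above; the
+K/+A flags are kept from the stock defaults.)

```
# e.g. /etc/rabbitmq/rabbitmq-env.conf (location varies by install)
SERVER_ERL_ARGS="+K true +A30 \
-kernel inet_default_listen_options \
[{nodelay,true},{sndbuf,65535},{recbuf,65535}] \
-kernel inet_default_connect_options [{nodelay,true}]"
```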

If you have any comments or advice for us on these options then we're
certainly all ears.
