[rabbitmq-discuss] Catching channel disconnect

Ben Hood 0x6e6562 at gmail.com
Wed Oct 15 09:08:47 BST 2008


On Wed, Oct 15, 2008 at 12:31 AM, Valentino Volonghi <dialtone at gmail.com> wrote:
> Well, I can see that with the patch what would happen is that at the exact
> moment of the disconnect the process would crash and then would be
> restarted, instead of waiting until a tx_select() fails to then restart
> everything.

That patch addresses the issue of the connection to the other peer's
socket being lost.

I don't quite know what you mean by the failure of a tx_select
command. How does this differ from any other pending RPC whose bottom
half doesn't arrive because the underlying socket connection has died?

> This is fine and more in line with what I would expect, it's just that
> even in this case Erlang appears to not have any kind of exponential
> backoff reconnection strategy.

Sorry, I don't follow this either - are you saying that the socket
abstraction in Erlang should try to reconnect a few times before
telling higher level code that the underlying TCP connection has died?

> Basically when it restarts it tries to connect to the remote rabbitmq
> instance and of course, since it's down (in my test), it raises an
> econnrefused because no port is open. This is all good and fine. I
> suspect that since econnrefused is an exception I should plug my logic
> right there and catch it, and start a loop there that runs until the
> connection is established, using an exponential backoff to ask for new
> connections. Do you think this makes sense?

If you mean that retries are a concern of the client app rather than
the client library itself, then yes.

From an architectural perspective, the role of the client is to
provide a transparent and correct propagation of events between a
client app and the broker. It should not try to address the problems
of application flow that are best left to higher level code.
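For illustration, such higher-level retry logic could be sketched as a
small Erlang function that wraps whatever connect call the app uses (the
module name, the ConnectFun argument, and the delay constants below are
all hypothetical, not part of the client's API):

```erlang
%% Hypothetical sketch: retry a connect fun with exponential backoff.
%% ConnectFun is assumed to return {ok, Connection} | {error, Reason}.
-module(reconnect).
-export([with_backoff/1]).

-define(INITIAL_DELAY, 1000).   %% first retry after 1 second
-define(MAX_DELAY, 30000).      %% cap the delay at 30 seconds

with_backoff(ConnectFun) ->
    with_backoff(ConnectFun, ?INITIAL_DELAY).

with_backoff(ConnectFun, Delay) ->
    case ConnectFun() of
        {ok, Connection} ->
            %% Broker is back; hand the connection to the caller.
            {ok, Connection};
        {error, econnrefused} ->
            %% Broker still down: wait, then retry with a doubled delay.
            timer:sleep(Delay),
            with_backoff(ConnectFun, erlang:min(Delay * 2, ?MAX_DELAY));
        {error, Other} ->
            %% Any other failure is not retried here.
            {error, Other}
    end.
```

The point being that this loop lives in the application's supervision
logic, not inside the client itself.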
