[rabbitmq-discuss] Linking to a channel process (erlang client)
huntermorris+rabbitmq at gmail.com
Wed Jun 15 13:05:44 BST 2011
On Wed, Jun 15, 2011 at 12:45 AM, Matthew Sackman <matthew at rabbitmq.com> wrote:
> Hmm, that's tricky. Depends what you want to do really - you could have
> the supervisor link to the base process that is the amqp_client
> application if you want (before starting any workers), but it depends if
> you want death of your application to tear down the whole client -
> sequencing of those actions to achieve a safe, controlled shutdown, is
> not trivial.
Currently, my application's top-level supervisor starts amqp_sup as one
of its own children, which tears the client down along with my
application when it crashes. That works well and behaves as I expect.
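For concreteness, the arrangement above might look like the following child spec in the top-level supervisor's init/1. The module name, restart intensity, and the assumption that amqp_sup is started via amqp_sup:start_link/0 are illustrative, not taken from the post:

```erlang
%% Sketch: a top-level supervisor that runs amqp_sup as one of its own
%% children, so the AMQP client is torn down together with the
%% application. Names and restart values are illustrative.
-module(my_app_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    AmqpSup = {amqp_sup,
               {amqp_sup, start_link, []},
               permanent, infinity, supervisor, [amqp_sup]},
    {ok, {{one_for_all, 3, 10}, [AmqpSup]}}.
```

Because the child is itself a supervisor, its shutdown value is `infinity`, letting it take as long as it needs to stop its own subtree.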
> My understanding is that on the whole, libraries that spawn processes
> are linked to, or monitored, rather than linking / monitoring their
> clients. However, this may be wrong and/or not best practise. If you
> have ideas how this can be improved, please do let us know.
My main concern is leaking processes since client-side channels are
composed of at least 3 processes each (and I expect hundreds of
channels in this particular application).
I've seen other "client" applications that manage socket connections
treat this issue very differently. For example, lhttpc
(https://github.com/esl/lhttpc) spawn_links a simple lhttpc_client
process, passing along self() so that it can unlink once a request has
finished. Of course, HTTP clients are largely stateless, which differs
greatly from the situation we have with AMQP connections and channels.
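The lhttpc-style pattern described above can be sketched as follows: the caller spawn_links a short-lived worker so that a crash during the request kills both, then drops the link once the reply arrives. The module, function names, and message shapes here are hypothetical, not lhttpc's actual internals:

```erlang
%% Sketch of the link-for-the-duration-of-a-request pattern.
-module(link_for_request).
-export([request/1]).

request(Work) ->
    Caller = self(),
    %% Link caller and worker: if either dies mid-request, the other
    %% is taken down too, so no worker process is leaked.
    Pid = spawn_link(fun() -> Caller ! {self(), do_work(Work)} end),
    receive
        {Pid, Result} ->
            %% Request finished: remove the link so the processes'
            %% fates are no longer tied together.
            unlink(Pid),
            {ok, Result}
    after 5000 ->
            exit(Pid, timeout),
            {error, timeout}
    end.

%% Stand-in for the real work (an HTTP request in lhttpc's case).
do_work(Work) -> {done, Work}.
```

This works nicely precisely because each request is self-contained; as noted above, it maps less directly onto long-lived AMQP channels.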
Based on a suggestion, I will most likely trap exits in the worker
process while setting up the link to the channel, so that the worker
can clean up if it receives an exit signal from elsewhere. Since the
worker processes are quite simple, this should be fine.
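A minimal sketch of that cleanup approach, assuming a gen_server worker and the erlang client's amqp_connection:open_channel/1 and amqp_channel:close/1 calls; the module shape and message handling are illustrative:

```erlang
%% Sketch: a worker that traps exits, links to its channel, and closes
%% the channel if an exit signal arrives from anywhere else, so the
%% channel's processes are not leaked.
-module(channel_worker).
-behaviour(gen_server).
-export([start_link/1, init/1, handle_call/3, handle_cast/2,
         handle_info/2, terminate/2, code_change/3]).

start_link(Connection) ->
    gen_server:start_link(?MODULE, Connection, []).

init(Connection) ->
    process_flag(trap_exit, true),
    {ok, Channel} = amqp_connection:open_channel(Connection),
    link(Channel),
    {ok, Channel}.

%% The channel itself died: nothing left to clean up.
handle_info({'EXIT', Channel, Reason}, Channel) ->
    {stop, Reason, Channel};
%% Exit signal from elsewhere: close the channel before stopping.
handle_info({'EXIT', _From, Reason}, Channel) ->
    amqp_channel:close(Channel),
    {stop, Reason, Channel}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.
terminate(_Reason, _State) -> ok.
code_change(_Old, State, _Extra) -> {ok, State}.
```

Trapping exits turns the incoming exit signal into a {'EXIT', Pid, Reason} message, which is what gives the worker the chance to close the channel deliberately instead of being killed outright.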
Thanks for your help,