Ben,<br><br>I was monitoring my running system and, unless I am mistaken, it looks as if amqp_network_connection:reader_loop/4 contains a non-tail-recursive call, which is making it eat stack and heap like there's no tomorrow. The culprit is the call to gen_tcp:close(Sock) at the end of the function: because it executes after the recursive calls return, none of the calls to reader_loop/4 are in tail position, so every iteration keeps a stack frame alive. I suggest you move the close to the last line of start_reader as shown:<br>
<br>start_reader(Sock, FramingPid) -><br> process_flag(trap_exit, true),<br> put({channel, 0},{chpid, FramingPid}),<br> {ok, Ref} = prim_inet:async_recv(Sock, 7, -1),<br> reader_loop(Sock, undefined, undefined, undefined),<br>
<b> gen_tcp:close(Sock).</b><br><br>reader_loop(Sock, Type, Channel, Length) -><br> receive<br> {inet_async, Sock, _, {ok, <<Payload:Length/binary,?FRAME_END>>} } -><br> case handle_frame(Type, Channel, Payload) of<br>
closed_ok -><br> ok;<br> _ -><br> {ok, Ref} = prim_inet:async_recv(Sock, 7, -1),<br> reader_loop(Sock, undefined, undefined, undefined)<br>
end;<br> {inet_async, Sock, _, {ok, <<_Type:8,_Channel:16,PayloadSize:32>>}} -><br> {ok, Ref} = prim_inet:async_recv(Sock, PayloadSize + 1, -1),<br> reader_loop(Sock, _Type, _Channel, PayloadSize);<br>
{inet_async, Sock, Ref, {error, Reason}} -><br> io:format("Have a look into this one: ~p~n",[Reason]);<br> {heartbeat, Heartbeat} -><br> rabbit_heartbeat:start_heartbeat(Sock, Heartbeat),<br>
reader_loop(Sock, Type, Channel, Length);<br> {ChannelPid, ChannelNumber} -><br> start_framing_channel(ChannelPid, ChannelNumber),<br> reader_loop(Sock, Type, Channel, Length);<br>
timeout -><br> io:format("Reader (~p) received timeout from heartbeat, exiting~n", [self()]);<br> {'EXIT', Pid, _Reason} -><br> [H|_T] = get_keys({chpid,Pid}),<br> erase(H),<br>
reader_loop(Sock, Type, Channel, Length);<br> Other -><br> io:format("Other ~p~n",[Other])<br> end,<br><b> gen_tcp:close(Sock). %% Non-tail recursive call</b><br><br><br><div class="gmail_quote">
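To make the effect concrete, here is a minimal, self-contained sketch of why a call placed after the recursion defeats last-call optimisation. This is illustration only, not the RabbitMQ source; the module and function names are invented for the example.

```erlang
%% Illustration only: a call after the recursion defeats
%% last-call optimisation. Names are invented for this example.
-module(tail_demo).
-export([loop_bad/1, loop_good/1]).

%% Body-recursive: cleanup/0 runs after the recursive call returns,
%% so every iteration keeps a stack frame alive until the whole
%% loop unwinds. This mirrors reader_loop/4 with gen_tcp:close/1
%% sitting after the receive expression.
loop_bad(0) -> ok;
loop_bad(N) ->
    loop_bad(N - 1),
    cleanup().          %% analogous to the trailing gen_tcp:close(Sock)

%% Tail-recursive: the recursive call is the last thing the function
%% does, so the VM reuses the current frame and the loop runs in
%% constant stack space.
loop_good(0) ->
    cleanup();          %% close once, on the way out
loop_good(N) ->
    loop_good(N - 1).

cleanup() -> ok.
```

With a large N, loop_bad/1 grows the process stack linearly (observable via erlang:process_info(Pid, stack_size) from another process while it runs), whereas loop_good/1 stays flat. Moving gen_tcp:close(Sock) from reader_loop/4 into start_reader/2 is exactly the loop_bad-to-loop_good transformation: the socket is still closed exactly once, after the loop exits, but every recursive call becomes a tail call.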
On Mon, May 12, 2008 at 2:07 PM, Ben Hood <<a href="mailto:0x6e6562@gmail.com">0x6e6562@gmail.com</a>> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div style="">Ed,<div><div class="Ih2E3d"><br><div><div>On 10 May 2008, at 01:02, Edwin Fine wrote:</div><br><blockquote type="cite">Thanks, Ben, I will take a look and give you some feedback.<br><br>In the meantime, I have done the following:<br>
<ul><li>Changed my consumer code (I use the term "consumer" loosely as "anything that eats the output of a producer") to use basic.get instead of basic.consume. Actually, it's set up so that I can select basic.get or basic.consume behavior at run-time. I didn't want to throw away working basic.consume code :)<br>
</li><li>Changed the process that creates consumers so that it now creates one channel per consumer. Previously, there was one channel only for all consumers. One-channel-per-consumer was the only way I could get the code to work with the network client; in the one-channel scenario I was getting back responses to messages destined for different consumers. I assume that with your changes I will be able to again use one channel for all consumers.<br>
</li><li>I tested with 50 queues (each with its own consumer and channel) and it seemed reasonably performant, even with the get. I need to try a full-blast test soon.<br></li></ul></blockquote><div><br></div></div></div>
I have now committed fix 2 of 3 to the mtn repo, which addresses the issue of not being able to subscribe concurrently. So the two issues you mention here should be addressed.</div><div><br></div><div>The outstanding issue is to close the writer down properly in the network case.</div>
<div><br></div><div>HTH,</div><div><br></div><font color="#888888"><div>Ben</div></font></div></blockquote></div><br>