<div dir="ltr">Hi ,<div><br><div>I m using rabbitmqserver v 3.2.0</div><div>Scenario 1:</div><div>I continously sent messages to rabbitMQ Server using basic publish() without "publisher confirms".</div><div>when the memory limit is breached by the server, the connections are getting blocked.</div>
<div>Even after the connections are blocked, for some interval of time, i see publish() being successful .</div><div>Those successfully published messages which are sent after blocked connection are neither in queue nor in rdq files </div>
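For clarity, the Scenario 1 publisher is roughly the loop below. I'm sketching it with the Python pika client and placeholder names purely for illustration (our real client code differs); the relevant part is that without confirms basic_publish only hands the frame to the client side, so it keeps returning as if successful even after the broker has blocked the connection:

    # Minimal sketch of the Scenario 1 publisher (pika used only for illustration;
    # queue name and message count are placeholders, not our real values).
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='test_queue', durable=True)

    for i in range(100000):
        # Fire-and-forget: no broker acknowledgement is requested, so this call
        # keeps "succeeding" even once the broker has stopped reading from the
        # blocked connection and the frames pile up in client/OS buffers.
        channel.basic_publish(exchange='',
                              routing_key='test_queue',
                              body='message %d' % i,
                              properties=pika.BasicProperties(delivery_mode=2))

    connection.close()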
Scenario 2:

I continuously send messages to the server using basic publish() with publisher confirms for each message.
When the server breaches its memory limit, the connections get blocked.
Even after the connections are blocked, publish() appears to succeed for some interval of time.
The messages published "successfully" after the connection was blocked are neither in the queue nor in the .rdq files.
There is still message loss, and with publisher confirms the publishing rate drops drastically, by 80 to 90 percent.

Is this expected?
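Scenario 2 is the same loop with confirm mode turned on, roughly like this (again a pika illustration with placeholder names; depending on the client version a failed publish shows up either as a False return value or as a raised exception). Every message now costs a broker round trip, which I assume is where the 80 to 90 percent drop in publishing rate comes from:

    # Minimal sketch of the Scenario 2 publisher (pika used only for illustration).
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='test_queue', durable=True)

    # Put the channel into confirm mode: each publish now waits for the broker's ack.
    channel.confirm_delivery()

    for i in range(100000):
        try:
            channel.basic_publish(exchange='',
                                  routing_key='test_queue',
                                  body='message %d' % i,
                                  properties=pika.BasicProperties(delivery_mode=2))
            # Older pika versions return True/False here instead of raising;
            # in that case the return value has to be checked explicitly.
        except pika.exceptions.NackError:
            print('message %d was nacked by the broker' % i)
        except pika.exceptions.UnroutableError:
            print('message %d was returned as unroutable' % i)

    connection.close()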
If these are expected behaviours, how do we avoid message loss?
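The only mitigation I can think of so far is to listen for the broker's connection.blocked notification (new in 3.2, as far as I understand) and stop publishing, or buffer locally, until connection.unblocked arrives, along the lines of the sketch below. I'm not sure this is the recommended approach; the callback registration shown is pika's, and other clients expose the notification differently.

    # Sketch: pause publishing while the broker reports the connection as blocked.
    # add_on_connection_blocked_callback is pika's API; the callback arguments
    # differ between pika versions, so treat the signatures below as illustrative.
    import pika

    state = {'blocked': False}

    def on_blocked(connection, method):
        state['blocked'] = True   # broker raised an alarm (e.g. memory) and blocked us

    def on_unblocked(connection, method):
        state['blocked'] = False  # alarm cleared, safe to publish again

    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    connection.add_on_connection_blocked_callback(on_blocked)
    connection.add_on_connection_unblocked_callback(on_unblocked)

    channel = connection.channel()
    channel.queue_declare(queue='test_queue', durable=True)
    channel.confirm_delivery()

    i = 0
    while i < 100000:
        if state['blocked']:
            # Wait (or spill to a local buffer) instead of writing into a
            # connection the broker is no longer reading from.
            connection.sleep(1)
            continue
        channel.basic_publish(exchange='',
                              routing_key='test_queue',
                              body='message %d' % i,
                              properties=pika.BasicProperties(delivery_mode=2))
        i += 1

    connection.close()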
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi ,<div><br></div><div>I have used single rabbitmq node , And I set memory_high_watermark to a small value(0.05)</div>
<div>and messages are being sent to the rabbitmq node at a higher rate.</div><div>When memory is about to be breached because of higher rate of messages to the node creation of .rdq files is observed.</div>
<div>when the memory consumtion crosses the limit all connections to the node getting blocked is observed.</div><div>At this point of time there is neither increase in queue size nor .rdq size, but publish to queue statemt is getting executed.</div>
<div><br></div><div>Questions are :</div><div><br></div><div>where are the published messages are getting store when connections to queue are in blocked state</div><div><br></div><div>how to handle this kind of scenario gracefully</div>
<div><br></div></div>
</blockquote></div><br></div>