<div dir="ltr">I could have sworn at one point, flow control would kick in when consumers couldn't keep up with publish rates. I recall running tests about a year ago where consumer slow downs would slow down publishes. But I could also be out of my mind with more than a few loose screws (actually I know I've got a few loose). But anyways, while testing publishing, it appears flow control only kicks in when the queues can't keep up due to disk/io or ram utilization. The rabbit docs seem to verify this. Did something change on this or am I imaging how things used to be?<div>
This is my test setup:
1 node, publishing via shovel to two other servers.
Publish at 800 msgs/sec to node 1, shovel at 800/sec, consume at 15 msgs/sec from the other two servers.
Publish at 800 msgs/sec to node 1, no shovel, consume at 15 msgs/sec from node 1.
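For reference, the publishing side of a test like this can be driven with something along these lines. It's a sketch only: pika, a placeholder host and queue name, and a crude sleep-based limiter to hold roughly 800 msgs/sec.

import time
import pika

# Placeholder host/queue for the test; durable queue + persistent messages
# so the broker has to page to disk as the backlog grows.
conn = pika.BlockingConnection(pika.ConnectionParameters("node1"))
ch = conn.channel()
ch.queue_declare(queue="flowtest", durable=True)

RATE = 800           # target messages per second
body = b"x" * 1024   # 1 KB payload

while True:
    start = time.monotonic()
    for _ in range(RATE):
        ch.basic_publish(exchange="",
                         routing_key="flowtest",
                         body=body,
                         properties=pika.BasicProperties(delivery_mode=2))
    # Sleep off whatever is left of the second to hold the target rate.
    elapsed = time.monotonic() - start
    if elapsed < 1.0:
        time.sleep(1.0 - elapsed)

If the broker were throttling the publisher, the loop above would simply start taking longer than a second per batch.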
Observations:
1) It appears that as long as RabbitMQ has RAM and disk space, I can continue to publish at 800/sec no problem. Even if the remote side slows down to 400/sec over the shovel, RabbitMQ on node 1 continues to accept publishes and backlog locally as long as node 1 has RAM/disk capacity (see the publisher-side check sketched below).
2) Remote sides going up and down have no impact on publishing rates, other than possibly slowing things down due to having to read the backlogged messages back off disk.

Was there ever a time when flow control was based on consumer rate rather than queue rates?
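For what it's worth, the only publisher-visible signal I'm aware of is connection.blocked, and as I understand it that's tied to the memory/disk alarms rather than to consumer or queue rates, which would line up with observation 1. A minimal sketch for watching it from the publisher (pika assumed, host is a placeholder):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("node1"))

# The broker sends connection.blocked when a memory or disk alarm fires,
# and connection.unblocked once the alarm clears.
conn.add_on_connection_blocked_callback(
    lambda *args: print("connection blocked (resource alarm)"))
conn.add_on_connection_unblocked_callback(
    lambda *args: print("connection unblocked"))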
<div>Jason</div><div><br clear="all"><div><br></div>-- <br><div dir="ltr">Jason McIntosh<br><a href="https://github.com/jasonmcintosh/" target="_blank">https://github.com/jasonmcintosh/</a><br>573-424-7612</div>