<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jul 11, 2013 at 5:40 PM, Michael Klishin <span dir="ltr"><<a href="mailto:firstname.lastname@example.org" target="_blank">email@example.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div id=":1du" style="overflow:hidden">O(1) appends and other queue characteristics in CS literature do not take into account<br>
the fact that your consumers can fall well behind your producers and RAM is a limited resource.<br>
Both are purely operational issues.</div></blockquote></div><br>In fact it kind of does take that into account. O(1) just means a constant overhead: it does not say that pulling the next message off the queue cannot take 5 seconds, only that the cost is constant, with an upper bound of some unknown value. The fact that the average case is sub-millisecond delivery is not something algorithmic complexity addresses at all.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra" style>Most functional languages have queues which are *amortized* O(1), to boot. That is, you do not get an O(1) guarantee for any single operation, only when you amortize the cost over several operations.</div>
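To make the amortized point concrete, here is a minimal sketch (in Python, for illustration) of the classic two-list "batched" queue found in most functional languages. Enqueue is always O(1); most dequeues are O(1), but occasionally one dequeue pays O(n) to reverse the back list, and that cost amortizes across the n enqueues that built it.

```python
class BatchedQueue:
    """Two-list queue: O(1) enqueue, amortized O(1) dequeue."""

    def __init__(self):
        self.front = []  # next items to dequeue, last element first out
        self.back = []   # newly enqueued items, in arrival order

    def enqueue(self, x):
        self.back.append(x)  # always O(1)

    def dequeue(self):
        if not self.front:
            # The expensive step: one O(n) reversal moves everything
            # from back to front. It happens at most once per n enqueues.
            self.front = self.back[::-1]
            self.back = []
        if not self.front:
            raise IndexError("dequeue from empty queue")
        return self.front.pop()  # O(1): pops from the end of the list
```

So a single `dequeue` can be arbitrarily slow relative to its neighbors even though the sequence as a whole is O(1) per operation, which is exactly the gap between the complexity guarantee and observed latency.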
<div class="gmail_extra" style><br></div><div class="gmail_extra" style>I think the key problem here has more to do with queueing theory and backpressure than with the raw throughput of the system.</div><div class="gmail_extra">
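The backpressure idea can be sketched with a bounded queue: once consumers fall behind, producers block instead of letting the backlog (and RAM) grow without bound. This is a generic illustration using Python's standard library, not any particular broker's mechanism.

```python
# Backpressure via a bounded buffer: buf.put() blocks once the queue is
# full, so a slow consumer throttles the producer automatically.
import queue
import threading
import time

buf = queue.Queue(maxsize=4)  # at most 4 in-flight messages
produced, consumed = [], []

def producer():
    for i in range(20):
        buf.put(i)            # blocks when 4 items are already queued
        produced.append(i)

def consumer():
    for _ in range(20):
        time.sleep(0.001)     # deliberately slower than the producer
        consumed.append(buf.get())
        buf.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

With an unbounded queue the producer would race ahead and the backlog would be limited only by memory; the `maxsize` bound converts that into bounded latency for the producer instead.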
<br clear="all"><div><br></div>-- <br>J.