On Oct 25, 2012, at 11:54, Tim Watson wrote:

> Hi
>
> On 10/25/2012 06:33 AM, Srdan Kvrgic wrote:
>> Hi,
>>
>> Short story:
>> Are there any plans to support batching of messages within the rabbitmq eco-system?
>>
>> Long story:
> [snip]
>> Optimally you would want a system that sends and receives batches of messages but allows you to reject single messages. Have your cake and eat it, as it were...
>
> Sounds good, but...
>
> AMQP already provides batching delivery capabilities and these have been discussed previously, for example https://groups.google.com/forum/?fromgroups=#!topic/rabbitmq-discuss/CidFKmJrBFI. But what you seem to be running up against is the fact that your many small messages limit throughput, so you want to bundle them up. I would suggest putting a processor between the publisher and consumer which unpacks the batches and forwards individual messages to the target exchange. This could be achieved by writing a custom exchange type, configured to understand how to 'unpack' the batch and where to route the forwarded messages to. The consumers would then receive these individual messages (possibly in batches!) and could reject some or all of them, but that rejection would not be apparent to the outside world in any way! So you'd need to then change your design somewhat, perhaps by putting 'failed' messages onto a different 'error queue' *or* by rejecting them and setting up a TTL and Dead Letter eXchange on the destination queue so that you can identify failures transparently. In either case, you'd want to add some kind of x-batch-id header to the 'broken up' messages so that your reprocessing procedure is able to identify the source batch from which they came.

Are you talking about writing an extension to RabbitMQ as such? I'm not sure I follow your solution. Or rather, it doesn't seem like something that's very easy or practical. =)
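Or, if you mean a separate process sitting between the publishers and the real exchange rather than a broker plugin, is it roughly this? Very rough sketch with the plain Java client; the queue/exchange names, the newline framing of a batch, and reusing the message id as the batch id are all just invented for illustration:

import com.rabbitmq.client.*;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch of the 'unpacker': take one batch message off a queue, republish
// each part individually with an x-batch-id header, then ack the batch.
// All names here are made up.
public class BatchUnpacker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        final Channel ch = conn.createChannel();

        ch.basicConsume("batch-queue", false, new DefaultConsumer(ch) {
            @Override
            public void handleDelivery(String tag, Envelope env,
                                       AMQP.BasicProperties props,
                                       byte[] body) throws IOException {
                // Pretend batches are newline-separated payloads.
                for (String part : new String(body, "UTF-8").split("\n")) {
                    Map<String, Object> headers = new HashMap<String, Object>();
                    // Assumes the publisher sets a message id we can reuse.
                    headers.put("x-batch-id", props.getMessageId());
                    ch.basicPublish("target-exchange", "unpacked",
                            new AMQP.BasicProperties.Builder()
                                    .headers(headers).build(),
                            part.getBytes("UTF-8"));
                }
                ch.basicAck(env.getDeliveryTag(), false);
            }
        });
    }
}

Doesn't look impossible, I suppose, but it is another moving part to deploy and keep an eye on.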
> Doing this 'in the broker' doesn't really make sense to me. How would transactions and/or confirms continue to work, for example?

Actually, the consumer part seems to already work like this. You already get prefetches and whatnot, and I don't see why those shouldn't be delivered in a single data transfer. Perhaps they already are?

The publisher side, on the other hand, doesn't do this by default. I'll have to try disabling Nagle's algorithm to see if that gives us the wings.

>> Also automatic de-/compression in the driver. That also was a great performance boost when the batches started getting big. Lzf ftw!
>
> Hmn, that's an interesting one! Which client library are you using?

Running JRuby, we use Java libraries for anything performance-critical, like com.ning.compress.lzf.LZFEncoder for compression. Unless you're talking about the entire library for tracking, batching, routing and compressing; in that case we use autobahn: https://github.com/burtcorp/autobahn/

//S
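PS: For the archives, here's roughly what the two publisher-side experiments look like in plain Java. Assumptions flagged inline: I haven't checked whether the client sets TCP_NODELAY on its own, the SocketConfigurator hook may need a recent client version, and the exchange name is made up.

import com.ning.compress.lzf.LZFDecoder;
import com.ning.compress.lzf.LZFEncoder;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.SocketConfigurator;

import java.io.IOException;
import java.net.Socket;

// Two experiments in one sketch: TCP_NODELAY on the publisher socket
// (assuming a client version with the SocketConfigurator hook; the client
// may already do this by default), and the LZF round trip we use.
public class PublisherExperiments {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setSocketConfigurator(new SocketConfigurator() {
            public void configure(Socket socket) throws IOException {
                socket.setTcpNoDelay(true); // disable Nagle's algorithm
            }
        });
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        // Build a biggish batch payload and compress it before publishing.
        StringBuilder batch = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            batch.append("message ").append(i).append('\n');
        }
        byte[] payload = batch.toString().getBytes("UTF-8");
        byte[] compressed = LZFEncoder.encode(payload);

        ch.basicPublish("target-exchange", "batched", null, compressed);

        // On the consuming side the inverse is just:
        byte[] restored = LZFDecoder.decode(compressed);
        System.out.println(payload.length + " -> " + compressed.length
                + " bytes (restored " + restored.length + ")");

        ch.close();
        conn.close();
    }
}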