[rabbitmq-discuss] looking for a design pattern for an aggregator (AMQP)

Laing, Michael michael.laing at nytimes.com
Sat Jan 18 12:41:24 GMT 2014


I do this sort of thing in pika using an asynchronous connection. I would
suggest tornado_connection.

Then use pika's connection.add_timeout method.
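Roughly this shape, as a sketch only: the queue names, the batch limits, and the
aggregate() function below are placeholders for your own, and the signatures are
the current (pre-1.0) pika ones, so check them against whatever version you're
running.

import pika
from pika.adapters.tornado_connection import TornadoConnection

BATCH_SIZE = 10000      # flush after this many messages ...
BATCH_SECONDS = 600     # ... or after this many seconds, whichever comes first

pending = []            # buffered message bodies
last_delivery_tag = None
channel = None
timer = None


def aggregate(messages):
    # Placeholder: coalesce the raw per-article hits into summary records.
    return 'aggregated %d messages' % len(messages)


def schedule_flush():
    global timer
    timer = connection.add_timeout(BATCH_SECONDS, on_timer)


def on_timer():
    flush()
    schedule_flush()


def flush():
    global pending, last_delivery_tag
    if not pending:
        return
    # Placeholder routing: default exchange, queue named 'aggregated_hits'.
    channel.basic_publish(exchange='',
                          routing_key='aggregated_hits',
                          body=aggregate(pending))
    # Ack everything consumed so far in one shot.
    channel.basic_ack(delivery_tag=last_delivery_tag, multiple=True)
    pending = []


def on_message(ch, method, properties, body):
    global last_delivery_tag
    pending.append(body)
    last_delivery_tag = method.delivery_tag
    if len(pending) >= BATCH_SIZE:
        connection.remove_timeout(timer)   # reset the ten-minute window
        flush()
        schedule_flush()


def on_channel_open(ch):
    global channel
    channel = ch
    # Optional: cap unacked deliveries to roughly one batch.
    channel.basic_qos(prefetch_count=BATCH_SIZE)
    channel.basic_consume(on_message, queue='raw_hits')
    schedule_flush()


def on_connection_open(conn):
    conn.channel(on_channel_open)


connection = TornadoConnection(pika.ConnectionParameters('localhost'),
                               on_open_callback=on_connection_open)
connection.ioloop.start()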

While doing any long-running operation, such as the summarization, you won't be
yielding to pika's event loop. That can cause heartbeat problems if the work
takes longer than the heartbeat interval, so watch for that.
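If the summarization might run long, one workaround (my own habit, not anything
built into pika) is to process the batch in slices and hand control back to the
ioloop between slices with a zero-delay add_timeout, so heartbeats still get
serviced. Something like this, using the connection from the sketch above:

CHUNK = 1000   # arbitrary slice size


def summarize_in_chunks(messages, done, acc=None):
    # Fold CHUNK messages into the running summary, then yield to the
    # ioloop before continuing so the connection stays responsive.
    acc = acc if acc is not None else {}
    head, rest = messages[:CHUNK], messages[CHUNK:]
    for body in head:
        pass   # placeholder: fold body into acc
    if rest:
        connection.add_timeout(0, lambda: summarize_in_chunks(rest, done, acc))
    else:
        done(acc)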

ml


On Fri, Jan 17, 2014 at 8:07 PM, <kgoess at bepress.com> wrote:

> We've been wrestling with this for a couple days and aren't any nearer a
> solution, so any suggestions would be helpful.
>
> We have a queue of unaggregated data, hits per article.  We'd like to have
> a listener on that queue collect messages until it has, say, 10,000 of them
> or ten minutes have passed, whichever comes first, before
> aggregating/coalescing the data and submitting the result to a second queue.
>
> We've been working with Python AMQP (pika) code using
> pika.BlockingConnection and basic_consume, with a SIGALRM timer. We're
> seeing lots of conflicts between the basic_consume callback, the SIGALRM
> callback, sending acknowledgments on the incoming data, and sending the
> outgoing aggregated data to the second queue.
>
> After a couple of days experimenting with different approaches, we thought
> it might be productive to ask whether anybody else has already solved this
> problem and can suggest a strategy, or whether anybody thinks
> pika.BlockingConnection is even the right tool for this.
>
> Thanks for any suggestions...
>