[rabbitmq-discuss] Message Aggregating Queue

Jason Zaugg jzaugg at gmail.com
Thu Apr 28 15:18:22 BST 2011


On Thu, Apr 28, 2011 at 3:44 PM, Irmo Manie <irmo.manie at gmail.com> wrote:
> The problem is actually a bit more tricky than this. When it comes to
> market data you would route all the data to a client-specific
> exclusive queue based on a subscription, because every client has its
> own authorization, authentication and quality of service (real-time,
> delayed, only EU stocks, tick by tick, a tick only every minute, etc.,
> etc.).

But each client-specific queue could implement the 'obsolete tick'
discarding in the broker, right?

> So the easy way is still just to have the consumer do the filtering
> itself by storing all messages in a key/value store and emptying it
> based on an ordered queue of ids to process.
> That way the consumer can 'consume' as fast as it can put the values
> in the K/V store, which is probably always fast enough :-)
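
If I follow you, the consumer side would look roughly like this; a
minimal Python sketch of the idea, where all the names (and the
dict/deque choice) are mine rather than anything you said:

    import threading
    from collections import deque

    last_values = {}     # instrument id -> latest tick seen
    pending = deque()    # ordered ids still waiting to be processed
    pending_ids = set()  # so the same id is never queued twice
    lock = threading.Lock()

    def on_tick(instrument_id, tick):
        # Called from the AMQP delivery callback: overwrite any older
        # tick, and only enqueue the id if it isn't already pending.
        with lock:
            last_values[instrument_id] = tick
            if instrument_id not in pending_ids:
                pending_ids.add(instrument_id)
                pending.append(instrument_id)

    def process_next():
        # Called by the (possibly slower) business-logic worker.
        with lock:
            if not pending:
                return
            instrument_id = pending.popleft()
            pending_ids.remove(instrument_id)
            tick = last_values[instrument_id]
        print(instrument_id, tick)  # stand-in for the real processing

Older ticks for the same instrument simply get overwritten, so the
worker always sees the latest value.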

I would feel more comfortable knowing that the number of messages in
the broker is naturally bounded, even if the consumer misbehaves. It
would also be nice to have the option of a pool of consumers
processing from a single queue, and to be able to restart the
consumers without losing unprocessed messages, etc. Anyway, we should
discuss this some more internally; as always, these sorts of cats are
amenable to being skinned in a multitude of ways :)
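
For what it's worth, the consumer pool I have in mind would be along
these lines (a rough sketch with the Python client, pika; the queue
name and prefetch value are made up). With manual acks and a prefetch
limit, several consumers can share one queue, and anything
unacknowledged goes back to the queue if a consumer is restarted:

    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='client42.ticks', durable=True)

    # Each consumer takes only a bounded number of unacked messages at
    # a time, so the backlog stays in the broker and is redelivered if
    # a consumer dies.
    channel.basic_qos(prefetch_count=10)

    def on_message(ch, method, properties, body):
        print(body)  # stand-in for the real processing
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue='client42.ticks',
                          on_message_callback=on_message)
    channel.start_consuming()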

> It only makes sense to have this filtering on the broker if you can
> process the data (ticks) independently of each other, because then it
> would be useful for a cluster of consumer apps processing the data.
> But 9 out of 10 times you need the ticks of more than one instrument
> to do your business logic, so you'd already keep a cache with the
> last values anyway.

> Still, there could of course be other use cases where having this
> functionality at the broker can be really useful and powerful.

> /2cents

And well worth both of them :)

-jason

