[rabbitmq-discuss] Message Aggregating Queue

Irmo Manie irmo.manie at gmail.com
Thu Apr 28 14:44:19 BST 2011


The problem is actually a bit trickier than this. When it comes to
market data you would route all the data to a client-specific
exclusive queue based on a subscription, because every client has its
own authorization, authentication and quality of service (real-time,
delayed, only EU stocks, tick by tick, only one tick per minute, etc,
etc).
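
For illustration, one of those per-client quality-of-service rules
("tick only every minute") could be enforced consumer-side with a small
policy object. The names here are hypothetical, not part of any
RabbitMQ API; the clock is passed in explicitly so the rule is easy to
test:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a per-symbol throttle: suppress ticks for a
// ticker until intervalMillis has passed since the last tick we let
// through for that ticker.
public class ThrottlePolicy {
    private final long intervalMillis;
    private final Map<String, Long> lastSent = new HashMap<>();

    public ThrottlePolicy(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    // Returns true if this tick should be forwarded to the client.
    public boolean allow(String ticker, long nowMillis) {
        Long last = lastSent.get(ticker);
        if (last != null && nowMillis - last < intervalMillis) {
            return false; // still inside the quiet period for this symbol
        }
        lastSent.put(ticker, nowMillis);
        return true;
    }
}
```

Each client's consumer (or an intermediary) would run its own instance,
so the broker never needs to know about the per-client rules.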

So the easy way is still just to have the consumer do the filtering
itself: store all messages in a key/value store and drain it based on
an ordered queue of ids to process.
That way the consumer can 'consume' as fast as it can put the values
into the K/V store, which is probably always fast enough :-)
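
One way to sketch this in Java, combining the K/V store of last values
and the ordered queue of ids in a single structure (a LinkedHashMap,
as suggested elsewhere in this thread; class and method names here are
made up for illustration):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// A LinkedHashMap keyed by ticker keeps only the newest price per
// symbol while preserving the order in which symbols first became
// pending, so the worker processes symbols in arrival order without
// replaying stale ticks.
public class LastValueBuffer {
    private final Map<String, Double> pending = new LinkedHashMap<>();

    // Called from the delivery callback: overwrite any stale price for
    // this ticker. A re-put of an existing key does NOT move it to the
    // back -- LinkedHashMap keeps its original insertion position.
    public synchronized void put(String ticker, double price) {
        pending.put(ticker, price);
    }

    // Called by the single worker: remove and return the oldest
    // pending entry, or null if nothing is pending.
    public synchronized Map.Entry<String, Double> take() {
        Iterator<Map.Entry<String, Double>> it =
            pending.entrySet().iterator();
        if (!it.hasNext()) return null;
        Map.Entry<String, Double> e = it.next();
        Map.Entry<String, Double> copy = Map.entry(e.getKey(), e.getValue());
        it.remove();
        return copy;
    }

    public static void main(String[] args) {
        LastValueBuffer buf = new LastValueBuffer();
        buf.put("UBS", 17.10);
        buf.put("CSGN", 25.40);
        buf.put("UBS", 17.25); // newer UBS tick replaces the stale one
        System.out.println(buf.take()); // UBS=17.25 (kept its place at the front)
        System.out.println(buf.take()); // CSGN=25.4
    }
}
```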

Filtering on the broker only makes sense if you can process the data
(ticks) independently of each other, because then it would be useful
for a cluster of consumer apps processing the data.
But nine times out of ten you need the ticks of more than one
instrument to do your business logic, so you'd keep a cache with the
last values anyway.

Still, there could of course be other use cases where having this
functionality at the broker would be really useful and powerful.

/2cents

- Irmo

On Thu, Apr 28, 2011 at 3:06 PM, Alvaro Videla <videlalvaro at gmail.com> wrote:
> Jason,
>
> What I understood yesterday is that you will have one queue for UBS, another queue for Credit Suisse and so on. You mentioned something about unique identifiers per tick type which could be used as routing keys, then having a direct exchange bound to all those queues based on the tick type. My custom queue proposal is based on such assumptions.
>
> -Alvaro
>
>
> On Apr 28, 2011, at 3:01 PM, Alexis Richardson wrote:
>
>> Jason
>>
>> On Thu, Apr 28, 2011 at 1:54 PM, Jason Zaugg <jzaugg at gmail.com> wrote:
>>> Thanks for the great ideas and discussion so far!
>>>
>>> Another requirement I should make explicit: we have a single consumer
>>> reading the stream of price updates for all stocks, and it should
>>> process them in order. That is, we don't want 'UBS' to move to the
>>> back of the queue if it updates before the previous message is
>>> processed. So a data structure like a LinkedHashMap would be needed
>>> for the custom queue proposed by Alvaro.
>>
>> I'm confused.  Would this stream contain exactly one price per ticker
>> symbol?  If so then why does it have to be a stream and not a set?  If
>> it must be a stream with multiple values per ticker, then just use
>> direct exchanges and one queue for the full stream.
>>
>>
>>> The Last Value Caching Exchange seems to solve a slightly different
>>> problem (although one that we also have!). If I understand it, a queue
>>> attached to the LVC exchange would receive *every* newly delivered
>>> message, regardless of how fast it is able to process them.
>>
>> Correct.  The LVC exchange is just a map from ticker to last value.
>> It does not affect delivery of subsequent messages.
>>
>> Are you sure you wouldn't be better off using a cache?
>>
>>> In
>>> addition, as soon as the queue is bound to the exchange, it would
>>> receive the last value for each routing key.
>>
>> Correct.
>>
>> alexis
>
> Sent from my Nokia 1100
>
>
>
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>
