[rabbitmq-discuss] RabbitMQ, STOMP, Python and Orbited - design question.

Marek Majkowski majek04 at gmail.com
Fri Aug 27 14:21:19 BST 2010


Tim,

Two comments:
1) does Tornado speak AMQP? I don't know of any Python AMQP client that
will work correctly in a Tornado environment. Have I missed something?

2) I'm afraid the idea of one queue per job plus one queue for all the
jobs is not a good one, as it implies that only a _single_ consumer will
receive each message produced by a job. What you probably want is a
transient queue _per page view_. That way, when someone views your
page, you create a queue in the background and bind it to the proper job
(via a separate exchange for every job, or via the single amq.topic
exchange with the routing_key set in the messages from the jobs). In
that scenario, if two browsers were viewing the same page, they would
consume from two different queues, and a message sent to the exchange
would be copied to both queues. This is probably what you need.
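
For example, the consumer side for a single page view could look
roughly like this - I'm using pika purely as an illustration (the exact
API depends on the client and its version), and the 'jobs.<job id>'
routing key is just a placeholder:

    import pika

    job_id = '1234'  # the job this page view is interested in

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # transient, server-named queue that lives only as long as this page view
    result = channel.queue_declare(queue='', exclusive=True, auto_delete=True)
    queue_name = result.method.queue

    # bind it to amq.topic with a routing key identifying the job
    channel.queue_bind(exchange='amq.topic', queue=queue_name,
                       routing_key='jobs.' + job_id)

    def on_message(ch, method, properties, body):
        print('job update: %r' % (body,))

    channel.basic_consume(queue=queue_name, on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()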

Cheers,
  Marek

On Fri, Aug 27, 2010 at 12:41, Tim Child <tim at cantemo.com> wrote:
> Thanks for your answer.
>
> A job (task) is something that our web based system monitors - external system(s) do things for us as a Job, and we provide feedback on that. At the moment it's very old-school: if we want to see the status of a job, we make a request to the external system, get the status and display it in our web interface. I want to move to a system where the external system sends us callbacks as the job progresses, and anyone who is looking at that job in the web interface gets updates on its status.
>
> I am actually considering dropping STOMP and writing the consumers and broadcasters in AMQP. But the overall functionality will be the same.
>
> I will have processes that receive external notifications of updates on jobs (the callbacks) and then place them onto a queue in RabbitMQ for that particular job, so that any consumer listening to the queue gets a notification (broadcast, not round-robin).
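>
> Roughly, the publishing side would do something like this (pika is just an example client here, and the exchange and routing key names are placeholders):
>
>     import pika
>
>     def publish_job_update(job_id, payload):
>         # every queue bound to amq.topic with this routing key gets its own copy
>         connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
>         channel = connection.channel()
>         channel.basic_publish(exchange='amq.topic',
>                               routing_key='jobs.%s' % job_id,
>                               body=payload)
>         connection.close()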
>
> Each Job will have its own queue, plus a queue for all Jobs.
>
> The consumer, or customer, is at the end of the day a browser session: a browser (Javascript) websocket, to a proxy, to a long-polling handler capable of listening to AMQP queues (thus getting rid of my need for STOMP). The long-polling handler listens to whichever queue the browser tells it to, sending any feedback back to the browser.
>
> In more detail:
>
> Browser ( Javascript, websocket)
>       |
> Orbited2.0
>       |
> Tornado
>       |
> RabbitMQ
>
>
> Tornado will handle the websockets and user authentication, and connect to RabbitMQ using AMQP.
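>
> As a very rough outline of the Tornado piece (this only shows the websocket side; it assumes I can plug in some AMQP consumer that calls on_job_update() - that part is still an open question):
>
>     import collections
>
>     import tornado.ioloop
>     import tornado.web
>     import tornado.websocket
>
>     # job_id -> set of open websocket connections watching that job
>     listeners = collections.defaultdict(set)
>
>     class JobSocket(tornado.websocket.WebSocketHandler):
>         def open(self, job_id):
>             self.job_id = job_id
>             listeners[job_id].add(self)
>
>         def on_close(self):
>             listeners[self.job_id].discard(self)
>
>     def on_job_update(job_id, payload):
>         # to be called by whatever ends up consuming the AMQP queue for this job
>         for socket in list(listeners[job_id]):
>             socket.write_message(payload)
>
>     application = tornado.web.Application([(r'/jobs/(\w+)', JobSocket)])
>
>     if __name__ == '__main__':
>         application.listen(8888)
>         tornado.ioloop.IOLoop.instance().start()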
>
> That's it in theory. My question is really whether there are any glaring errors in this, and also about the queue setup - but I will look at your example.
>
> Thanks,
>
> Tim.
>
>
>
> On 27 Aug 2010, at 13:20, Marek Majkowski wrote:
>
>> Tim,
>>
>> On Tue, Aug 24, 2010 at 14:15, Tim Child <tim at cantemo.com> wrote:
>>> One part of my application is to provide realtime feedback on jobs going on in the system. There will be a job overview page, which will list the status of x number of jobs (probably about 40), and a job 'detail' page which will have more in-depth information on that job. I have been working with Orbited and MorbidQ, and basically it works today by creating STOMP channels for each job; on going to the detail page of a particular job, my Javascript subscribes to the job channel /jobs/jobID using STOMP. I have a publisher that publishes to the channel whenever there are any updates to the jobs, and if any consumers are listening on that particular channel the page is updated with the new details.
>>>
>>> Now moving over to using RabbitMQ, and the stomp plugin, I understand that my setup will change slightly.
>>>
>>> I currently have a VHOST /, and I publish to an exchange. My understanding is that I have to send to a particular exchange, "amq.topic", to be able to broadcast to more than one consumer that may or may not be listening. I don't care if my message never gets received, as someone will not always be on the system.
>>>
>>> And then my STOMP consumer will listen to VHOST / and amq.topic? But if I want to replicate my job channels, should I create a routing key, a binding and a different queue for each job? Records of jobs can stay on my system forever, but the period in which they get updated is sporadic - a lot in the first minutes of their life, and hardly ever after they have finished.
>>
>> I don't think I understand what you mean by 'replicating job channels'.
>> Instead, let me focus on our STOMP plugin semantics.
>>
>> First, the basic flow of AMQP messages goes like this:
>>
>> [Producer]--->Exchange--->(using binding)--->Queue--->[Consumer]
>>
>> You need to decide where the dispatching of messages from multiple producers
>> should happen: do you have a queue per customer, or maybe many queues for
>> every customer? Does a 'customer' mean a browser session, or is it just a
>> single page view?
>>
>> The next problem is that the mapping of AMQP to STOMP is not straightforward;
>> it requires a pretty good understanding of AMQP routing.
>>
>> The best examples of STOMP and rabbitmq-stomp semantics that I'm aware of
>> are in the test file:
>> http://hg.rabbitmq.com/rabbitmq-stomp/file/0404cb2620df/test/test.py
>>
>> For example, "test_broadcast" may be useful to you. The next example,
>> "test_roundrobin", shows how to declare a named queue. Take a look
>> at the subtle difference in the 'destination:' headers between this and
>> the previous example.
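>>
>> Very roughly, and from memory (the test file is authoritative for the
>> exact headers), the difference comes down to the destination prefix.
>> Subscribing with
>>
>>   destination:/topic/jobs
>>
>> gives every subscriber its own transient queue bound to amq.topic, so
>> each one gets a copy of every message (broadcast), while subscribing with
>>
>>   destination:/queue/jobs
>>
>> makes all subscribers share a single named queue, so messages are
>> round-robined between them.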
>>
>> Cheers,
>>  Marek
>
> Tim Child
> +46 (0) 7602 17785
> Skype: timchild
> http://www.cantemo.com
>
>

