[rabbitmq-discuss] RabbitMQ, STOMP, Python and Orbited - design question.
tim at cantemo.com
Fri Aug 27 15:32:42 BST 2010
I was going to use py-amqplib with Tornado.
You are right I don't want just a single consumer per job, I want to be able to attach multiple consumers. I don't care if the messages are lost because no one is looking at them in this case.
My use case for a queue for all jobs isn't very strong - getting an overview of what is happening on multiple jobs at the same time. Another use case for me is that, upon a certain message on the queue, I can hook up jobs - such as starting other jobs off when a job is finished.
So basically what you are saying is that I can't have multiple consumers in the way I want? What about if I were building a stock ticker watcher - hundreds of clients viewing what is happening to the same 20 stocks? It's the same scenario. Would this be done using the exchange mechanism you are proposing?
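[For the stock-ticker scenario, the deciding factor is that each client gets its own queue bound to a topic exchange, and the binding key controls which messages it sees. A minimal, self-contained sketch of the AMQP topic-exchange matching rules (dot-separated words, '*' = exactly one word, '#' = zero or more words); the function name `matches` is illustrative, not part of any client library:]

```python
# Sketch of AMQP topic-exchange binding-key matching.  Not a client
# library -- just the matching rules, to make "hundreds of clients
# watching 20 stocks" concrete.

def matches(binding_key: str, routing_key: str) -> bool:
    """True if a message with routing_key reaches a queue bound with
    binding_key on a topic exchange ('*' = one word, '#' = any words)."""
    def walk(pat, key):
        if not pat:
            return not key
        if pat[0] == "#":  # '#' matches zero or more words
            return any(walk(pat[1:], key[i:]) for i in range(len(key) + 1))
        if not key:
            return False
        return pat[0] in ("*", key[0]) and walk(pat[1:], key[1:])
    return walk(binding_key.split("."), routing_key.split("."))

# Each browser gets its own queue; the binding decides what it sees:
print(matches("stocks.AAPL", "stocks.AAPL"))  # True  -> this stock only
print(matches("stocks.*", "stocks.MSFT"))     # True  -> any single stock
print(matches("stocks.*", "jobs.42"))         # False -> job traffic ignored
```

[Hundreds of clients can each bind their own queue with the same (or different) keys; the exchange copies a matching message to every bound queue.]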
Thanks for your time.
On 27 Aug 2010, at 15:21, Marek Majkowski wrote:
> Two comments:
> 1) does Tornado speak AMQP? I don't know any python AMQP client that
> will work correctly in Tornado environment. Have I missed anything?
> 2) I'm afraid the idea of one queue per job and one queue for all the
> jobs is not good. As that implies that only a _single_ consumer will
> receive a message produced by a job. What you probably want is a
> transient queue _for a page view_. That way, when someone views your
> page, you create a queue in the background and bind it to a proper job
> (via separate exchange for every job, or via a single amq.topic
> exchange but with routing_key set in the messages from the jobs). In
> that scenario if you had two browsers seeing the same page, they will
> access two different queues. And a message sent to an exchange will be
> copied to both queues. This is what you may need.
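[Marek's per-page-view recipe can be sketched with a toy in-memory exchange. This is illustrative only - `Exchange`, `bind`, and `publish` here model the semantics and are not any real AMQP client's API:]

```python
# Toy in-memory model of "one transient queue per page view, all bound
# to the same exchange with the job's routing key".  Queues are plain
# lists; the exchange copies each message to every bound queue.
from collections import defaultdict

class Exchange:
    def __init__(self):
        self.bindings = defaultdict(list)  # routing_key -> [queue, ...]

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        # Broadcast, not round-robin: every bound queue gets a copy.
        for queue in self.bindings[routing_key]:
            queue.append(message)

amq_topic = Exchange()

# Two browsers viewing the page for job 42: each gets its own queue.
viewer_a, viewer_b = [], []
amq_topic.bind(viewer_a, "job.42")
amq_topic.bind(viewer_b, "job.42")

amq_topic.publish("job.42", "job 42: 50% done")
print(viewer_a)  # ['job 42: 50% done']
print(viewer_b)  # ['job 42: 50% done']
```

[With a single shared queue instead, the two viewers would compete for messages and each update would reach only one of them - which is exactly the problem with the one-queue-per-job design.]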
> On Fri, Aug 27, 2010 at 12:41, Tim Child <tim at cantemo.com> wrote:
>> Thanks for your answer.
>> A job (task) is something that our web based system monitors - external system(s) do things for us as a Job, and we provide feedback on that. At the moment it's very old-school: if we want to see the status of a job, we make a request to the external system, get the status and display it in our web interface. I want to move to a system where the external system sends us callbacks as the job progresses, and if anyone is on the web interface looking at that job they get updates on the status.
>> I am actually considering dropping STOMP and writing consumers and broadcasters in AMQP. But the overall functionality will be the same:
>> I will have processes that will receive external notifications of updates on jobs (the callbacks) and should then place them onto a queue in RabbitMQ for that particular job, so any consumer that is listening to the queue gets a notification. (Broadcast, not round-robin).
>> Each Job will have its own queue, plus a queue for all Jobs.
>> In more detail:
>> Tornado will handle the websockets, user authentication and connect to RabbitMQ using AMQP.
>> That's it in theory. My question is really whether there are any glaring errors in this, and also about the queue setup - but I will look at your example.
>> On 27 Aug 2010, at 13:20, Marek Majkowski wrote:
>>> On Tue, Aug 24, 2010 at 14:15, Tim Child <tim at cantemo.com> wrote:
>>>> Now moving over to using RabbitMQ, and the stomp plugin, I understand that my setup will change slightly.
>>>> I currently have a VHOST / , and I publish to an exchange, my understanding is that I have to send to a particular exchange "amq.topic" to be able to broadcast to more than one consumer that may or may not be listening. I don't care if my message doesn't ever get received as someone will not always be on the system.
>>>> And then my STOMP consumer will listen to VHOST / and amq.topic ? But if I want to replicate my jobs channels, should I create a routing key, binding and different queues for each job? Records of jobs can stay on my system forever, but the period that they get updated is sporadic - much in the first minutes of their life, hardly ever after they have been finished.
>>> I don't think I understand what you mean by 'replicating jobs channels'.
>>> Instead, let me focus on our STOMP plugin semantics.
>>> First, the basic flow of AMQP messages goes like that:
>>> [Producer]--->Exchange--->(using binding)--->Queue--->[Consumer]
>>> You need to decide where to do dispatching of messages from multiple producers,
>>> do you want a queue per customer, or maybe many queues for every customer?
>>> Does a 'customer' mean a browser session, or is it just a single page view?
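[The Producer -> Exchange -> (binding) -> Queue -> Consumer flow maps onto a handful of client calls. A sketch using the modern pika client (an assumption - the thread predates it and discusses py-amqplib; signatures follow pika 1.x). The channel is passed in so the wiring can be exercised without a running broker:]

```python
# Sketch: give each consumer (page view) its own transient queue,
# bound to amq.topic with the job's routing key.
# Exchange -> (binding) -> Queue -> Consumer, in pika 1.x calls.

def subscribe_to_job(channel, job_id, on_message):
    # Server-named exclusive queue: it exists only for this consumer
    # and is deleted when the connection closes.
    result = channel.queue_declare(queue="", exclusive=True)
    qname = result.method.queue

    # The binding is what dispatches job updates into this queue.
    channel.queue_bind(exchange="amq.topic", queue=qname,
                       routing_key="job.%s" % job_id)

    channel.basic_consume(queue=qname, on_message_callback=on_message,
                          auto_ack=True)
    return qname

# Against a real broker this would be driven by something like:
#   conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
#   subscribe_to_job(conn.channel(), 42, handle_update)
```

[Each viewer calls this once, so two browsers on the same job page end up with two queues bound to the same routing key, and every update is copied to both.]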
>>> Next problem is that the mapping of AMQP to STOMP is not straightforward,
>>> it requires pretty good understanding of AMQP routing.
>>> The best examples of STOMP and rabbitmq-stomp semantics that I'm aware of
>>> are in the test file:
>>> For example the "test_broadcast" may be useful to you. The next example
>>> "test_roundrobin" shows how to declare a named queue. Take look
>>> at the subtle difference in 'destination:' headers between this and a
>>> previous example.
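[The subtle difference in the 'destination:' headers looks roughly like this. The syntax below follows the rabbitmq-stomp plugin's /exchange and /queue destination conventions; the exact form supported by the 2010 plugin may differ, so treat this as a sketch:]

```
SUBSCRIBE
destination:/exchange/amq.topic/job.42

SUBSCRIBE
destination:/queue/jobs
```

[The first form gives each subscriber its own auto-created queue bound to amq.topic with routing key job.42, so every subscriber receives a copy (broadcast, the test_broadcast case). The second has all subscribers share the single named queue "jobs", so messages are round-robinned among them (the test_roundrobin case).]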
>> Tim Child
>> +46 (0) 7602 17785
>> Skype: timchild