[rabbitmq-discuss] RabbitMQ, STOMP, Python and Orbited - design question.
tim at cantemo.com
Fri Aug 27 12:41:00 BST 2010
Thanks for your answer.
A job (task) is something that our web-based system monitors - external system(s) do things for us as a Job, and we provide feedback on that. At the moment it's very old-school: if we want to see the status of a job, we make a request to the external system, get the status, and display it in our web interface. I want to move to a system where the external system sends us callbacks as the job progresses, and anyone looking at that job in the web interface gets updates on its status.
I am actually considering dropping STOMP and writing the consumers and broadcasters in AMQP instead, but the overall functionality will be the same:
I will have processes that receive external notifications of updates on jobs (the callbacks) and place them onto a queue in RabbitMQ for that particular job, so that every consumer listening on the queue gets the notification (broadcast, not round-robin).
Each Job will have its own queue, plus a queue for all Jobs.
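The per-job plus all-jobs layout maps naturally onto AMQP topic routing. Here is a minimal pure-Python sketch of the idea, assuming routing keys of the form "jobs.<job_id>" - the key scheme, queue names, and simplified match function are mine, not anything prescribed by RabbitMQ:

```python
# Sketch of the binding scheme: each job update is published with a
# routing key like "jobs.<job_id>".  A per-job queue binds to the exact
# key, while the all-jobs queue binds to the wildcard "jobs.*".
# (Simplification: real AMQP topic matching also supports '#', which
# matches zero or more words; '*' below matches exactly one word.)

def topic_matches(binding_key, routing_key):
    """Simplified AMQP topic matching: '*' matches exactly one word."""
    b, r = binding_key.split("."), routing_key.split(".")
    return len(b) == len(r) and all(bw in ("*", rw) for bw, rw in zip(b, r))

bindings = {
    "job-42-updates": "jobs.42",   # queue for one particular job
    "all-jobs":       "jobs.*",    # queue receiving every job's updates
}

def deliver(routing_key):
    """Return the queues a message with this routing key would reach."""
    return sorted(q for q, key in bindings.items() if topic_matches(key, routing_key))

print(deliver("jobs.42"))  # -> ['all-jobs', 'job-42-updates']
print(deliver("jobs.7"))   # -> ['all-jobs']
```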
In more detail:
Tornado will handle the websockets, user authentication and connect to RabbitMQ using AMQP.
That's it in theory. My question is really whether there are any glaring errors in this, and also about the queue setup - but I will look at your example.
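The broadcast-not-round-robin point is worth pinning down, since in AMQP it falls out of the queue topology rather than a flag. A toy simulation of the fan-out hop, no broker required (the Exchange class and queue names are placeholders of mine):

```python
# Broadcast vs round-robin, simulated without a broker.
# In AMQP, consumers sharing ONE queue split its messages round-robin;
# to broadcast, each consumer gets its OWN queue bound to the same
# exchange, and the exchange delivers a copy to every bound queue.
from collections import deque

class Exchange:
    """Toy stand-in for an AMQP exchange: fans out to all bound queues."""
    def __init__(self):
        self.queues = []

    def bind(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        for q in self.queues:      # one copy per queue = broadcast
            q.append(message)

exchange = Exchange()
alice, bob = deque(), deque()      # one queue per browser session
exchange.bind(alice)
exchange.bind(bob)

exchange.publish("job 42: 50% done")
print(list(alice), list(bob))      # both consumers see the update
```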
On 27 Aug 2010, at 13:20, Marek Majkowski wrote:
> On Tue, Aug 24, 2010 at 14:15, Tim Child <tim at cantemo.com> wrote:
>> Now moving over to using RabbitMQ, and the stomp plugin, I understand that my setup will change slightly.
>> I currently have a VHOST / , and I publish to an exchange, my understanding is that I have to send to a particular exchange "amq.topic" to be able to broadcast to more than one consumer that may or may not be listening. I don't care if my message doesn't ever get received as someone will not always be on the system.
>> And then my STOMP consumer will listen to VHOST / and amq.topic ? But if I want to replicate my jobs channels, should I create a routing key, binding and different queues for each job? Records of jobs can stay on my system forever, but the period that they get updated is sporadic - much in the first minutes of their life, hardly ever after they have been finished.
> I don't think I understand what you mean by 'replicating jobs channels'.
> Instead, let me focus on our STOMP plugin semantics.
> First, the basic flow of AMQP messages goes like that:
> [Producer]--->Exchange--->(using binding)--->Queue--->[Consumer]
> You need to decide where to do the dispatching of messages from multiple producers:
> do you have one queue per customer, or maybe many queues for every customer?
> Does a 'customer' mean a browser session, or is it just a single page view?
> The next problem is that the mapping of AMQP to STOMP is not straightforward;
> it requires a pretty good understanding of AMQP routing.
> The best examples of STOMP and rabbitmq-stomp semantics that I'm aware of
> are in the test file:
> For example the "test_broadcast" may be useful to you. The next example
> "test_roundrobin" shows how to declare a named queue. Take look
> at the subtle difference in 'destination:' headers between this and a
> previous example.
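The header difference Marek points at can be shown at the frame level. A sketch, with the caveats that the frame serialisation below is simplified, the destination names are mine, and the /topic-vs-/queue mapping is my reading of the rabbitmq-stomp plugin (the plugin's tests remain the authoritative reference):

```python
# In the rabbitmq-stomp plugin, the 'destination:' header selects the
# pattern (as I understand the mapping - verify against the plugin tests):
#   /topic/<key>  -> private queue bound to amq.topic  -> broadcast
#   /queue/<name> -> shared named queue                -> round-robin

def stomp_frame(command, headers, body=""):
    """Serialise a minimal STOMP frame (headers, blank line, NUL-terminated body)."""
    head = "\n".join(f"{k}:{v}" for k, v in headers.items())
    return f"{command}\n{head}\n\n{body}\x00"

broadcast = stomp_frame("SUBSCRIBE",
                        {"destination": "/topic/job-updates", "ack": "auto"})
roundrobin = stomp_frame("SUBSCRIBE",
                         {"destination": "/queue/job-workers", "ack": "auto"})

print(broadcast)
print(roundrobin)
```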