[rabbitmq-discuss] Newbie question
Hari K T
kthari85 at gmail.com
Fri Dec 27 01:56:27 GMT 2013
This is my first post to the RabbitMQ list. I am facing some technical
difficulties and would like to know how RabbitMQ can help solve them.
Assume you have thousands of feeds, and you need to fetch and parse each
feed and then process its articles. (I know about superfeedr.com; take
this as an example.)
The feed fetching currently runs on Gearman.
1) The problem with Gearman is handling job-server failover.
2) When a worker processes a job, it saves the data to MongoDB from that
same worker. Sometimes the MongoDB connection fails. What I am trying to
achieve is to store the data somewhere for a few minutes, because if the
worker failed due to some problem, the callback will not fire.
One option is to put all the data back on the queue and start processing
it again from scratch, but that is not a good idea.
An intermediary save, from which the data can be retrieved later, is what
I was thinking of.
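Roughly what I have in mind for the "hold it for a few minutes, then retry" part, as a sketch with pika. The queue names ("feeds", "feeds.retry") and the five-minute delay are just placeholders I made up; the TTL + dead-letter arguments are standard RabbitMQ queue arguments.

```python
# Sketch: a retry queue that holds failed jobs for a few minutes and then
# dead-letters them back to the work queue. Queue names are placeholders.
RETRY_DELAY_MS = 5 * 60 * 1000  # hold failed jobs for five minutes

def retry_queue_args(work_queue, delay_ms=RETRY_DELAY_MS):
    # Messages published to a queue declared with these arguments sit for
    # delay_ms, then are re-routed to work_queue via the default exchange.
    return {
        "x-message-ttl": delay_ms,
        "x-dead-letter-exchange": "",
        "x-dead-letter-routing-key": work_queue,
    }

if __name__ == "__main__":
    import pika  # requires a running broker, so kept out of the helpers
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="feeds", durable=True)
    ch.queue_declare(queue="feeds.retry", durable=True,
                     arguments=retry_queue_args("feeds"))
    # A worker that hits a MongoDB outage can publish the job to
    # "feeds.retry" instead of requeueing it immediately.
    conn.close()
```

So the intermediary save would just be another queue, and RabbitMQ itself hands the job back after the delay.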
Looking at RabbitMQ, my current approach is similar to RPC. That is, what
will happen when the client (the one that initiated the work) is killed,
or drops the connection in the middle of processing?
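To make the question concrete, here is a consumer sketch using pika's manual acknowledgements (the queue name and the process() body are placeholders). As I understand it, if the worker dies before acking, the broker requeues the unacked message; I would like confirmation that this covers the killed-client case.

```python
# Sketch: consumer with manual acks so an unacked message is redelivered
# if the worker is killed mid-processing. Names here are placeholders.
import json

def process(payload):
    # Placeholder for the real feed parsing / MongoDB write.
    return {"processed": payload}

def on_message(channel, method, properties, body):
    try:
        result = process(json.loads(body))
        # Ack only after the work (including the Mongo write) succeeded.
        channel.basic_ack(delivery_tag=method.delivery_tag)
        return result  # returned here only to make the sketch testable
    except Exception:
        # requeue=True puts the message back for another worker.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
        raise

if __name__ == "__main__":
    import pika  # requires a running broker
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="feeds", durable=True)
    ch.basic_qos(prefetch_count=1)  # one unacked job per worker at a time
    ch.basic_consume(queue="feeds", on_message_callback=on_message)
    ch.start_consuming()
```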
If you need any more information, I am happy to provide it.