[rabbitmq-discuss] REST, RabbitMQ, and Scalability

Jason McIntosh mcintoshj at gmail.com
Thu Jun 13 04:15:47 BST 2013

You can use RabbitMQ to do this - there's nothing that says you can't.
Whether it's the right solution depends on the complexity of the overall
architecture, your goals for that architecture, and so on. My initial
thought was a memcached instance that both the worker process and the REST
server publish to and read from.

In our case, because of our requirements, we use a shoveled set of Rabbit
queues to transmit data changes to an "enterprise system" in a different
geographical location than the source Rabbit servers. This enterprise
system is where the "source of truth" data is stored, along with tracking
information and other data. The AMQP consumers, after updating the central
database, publish the changes back out to the various locations for
high-performance read-only use. If we hadn't needed distributed data
publishes from multiple locations, tolerance of network
failures/interrupts, and high performance with minimal impact when sending
such messages, we'd have just had local processes talking to local
replicated/clustered databases instead. Our current architecture gives us
a distributed, HA, asynchronous processing system.

There are always, always questions, though - how durable are the job IDs?
If a publish fails and your app fails, do you just rerun? Etc., etc.
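One way to make the "do you just rerun?" question tractable is to record the job ID in the source-of-truth store before publishing, so a retry after a crash finds the existing job instead of creating a duplicate. Here's a minimal sketch of that idempotent-submit idea - the map and the `requestKey` parameter are stand-ins I've made up for illustration; a real system would use the central database and would publish to a Rabbit queue (with publisher confirms) after the insert:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class IdempotentSubmit {
    // Stand-in for the "source of truth" table, keyed by a client-supplied request key.
    static final ConcurrentMap<String, String> jobsByRequestKey = new ConcurrentHashMap<>();

    // Returns the job ID, creating one only if this request key is new.
    static String submit(String requestKey) {
        String fresh = UUID.randomUUID().toString();
        // putIfAbsent makes a rerun after a crash return the original job ID
        // instead of enqueueing duplicate work.
        String existing = jobsByRequestKey.putIfAbsent(requestKey, fresh);
        String jobId = (existing != null) ? existing : fresh;
        // ... here the real system would publish jobId to a durable queue
        //     and wait for a broker confirm before acknowledging the client ...
        return jobId;
    }

    public static void main(String[] args) {
        String first = submit("req-42");
        String retry = submit("req-42"); // simulated rerun of the same request
        System.out.println(first.equals(retry)); // prints true - same job, no duplicate
    }
}
```

The same shape works with any store that offers an atomic insert-if-absent (a unique key in SQL, SETNX in Redis).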


On Tue, Jun 11, 2013 at 2:06 PM, Robert DiFalco <robert.difalco at gmail.com> wrote:

> I am writing a REST server in Java and am trying to come up with the best
> approach for scalability. The server is deployed on Heroku. I am new to
> RabbitMQ/AMQP, so I need some advice.
> From an architectural point of view I want to have one or more REST dynos
> running that delegate all of their work to one or more worker dynos. The
> REST server dynos must be stateless.
> I have two basic patterns to support.
>    1. REST RPC. Basically, the client makes a call like GET /user/{id}. I
>    delegate this call to a worker dyno that does a database lookup in the USER
>    table for the specified ID and returns it. The REST interface then returns
>    this. This to me seems like basic RPC.
>    2. REST ASYNC. In this model the client may call my REST server with a
>    POST or a GET. Let's use the classic case of a long running task to format
>    a graphic. I will create a JOB ID, return it to the client in a URI. The
>    client calls the URI to poll the job until it is done and then eventually
>    gets redirected to a URI that has the result. In this case, when I get the
>    initial REST call I will generate a UID for it, submit it to the worker.
>    There are several ways I can do the polling and returning of the result. A
>    lot of that depends on how I decide to interact with the worker.  Is it
>    RPC? AKKA? AMQP?  Because there may be more than one REST service dyno, the
>    request to post the job may happen on one REST server and the polling
>    request may be routed to another. So it must be stateless.
> So here's my question. First, should I be using RabbitMQ for this or is
> there a better solution? If I am using RabbitMQ must I implement the RPC
> and ASYNC patterns myself directly on top of RabbitMQ or are there
> libraries out there that create a layer of abstraction on top of RabbitMQ
> so I can do both of these simply? If I'm going to use RabbitMQ I'm hoping
> to use CloudAMQP on Heroku.
> Thanks!
> Robert
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
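On the stateless-polling requirement in pattern 2: since the POST and the poll can land on different REST dynos, the job state has to live in something shared - a database, Redis, or a store fed by the worker queue - rather than in dyno memory. Below is a sketch of that polling contract; the class, method names, and the in-memory map are all hypothetical stand-ins for the shared store, and the worker side would really consume from a Rabbit queue and write its result back to that store:

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class JobTracker {
    enum Status { PENDING, DONE }

    record Job(Status status, String resultUri) {}

    // Stand-in for a shared store visible to every REST dyno.
    static final Map<String, Job> store = new ConcurrentHashMap<>();

    // POST handler: create the job, hand back a poll URI.
    static String create() {
        String jobId = UUID.randomUUID().toString();
        store.put(jobId, new Job(Status.PENDING, null));
        // ... publish jobId to the worker queue here ...
        return "/jobs/" + jobId;
    }

    // Worker callback: record completion and where the result lives.
    static void complete(String jobId, String resultUri) {
        store.put(jobId, new Job(Status.DONE, resultUri));
    }

    // GET handler: any dyno can answer the poll from the shared store.
    static Optional<Job> poll(String jobId) {
        return Optional.ofNullable(store.get(jobId));
    }

    public static void main(String[] args) {
        String pollUri = create();
        String jobId = pollUri.substring("/jobs/".length());
        System.out.println(poll(jobId).get().status()); // PENDING
        complete(jobId, "/results/" + jobId);
        System.out.println(poll(jobId).get().status()); // DONE
    }
}
```

Once the state lives in the shared store, it no longer matters which dyno created the job and which one answers the poll, which is exactly the statelessness Robert asks for.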

Jason McIntosh
