[rabbitmq-discuss] New lua-resty-rabbitmq library

Rohit Yadav rohit.yadav at wingify.com
Sat Jun 1 13:57:26 BST 2013


Hi,

I've finished the first production-ready iteration, v0.1, of the
lua-resty-rabbitmqstomp library. I've renamed the library following the
concerns raised by Brian:

https://github.com/wingify/lua-resty-rabbitmqstomp
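
For anyone who wants to try it from an OpenResty handler, here is a quick
sketch of the intended usage (the option names and method signatures below
may lag behind the code, so please treat the README in the repository as the
authoritative reference):

    -- usage sketch for lua-resty-rabbitmqstomp; see the README for the
    -- exact, up-to-date option names and method signatures
    local rabbitmq = require "resty.rabbitmqstomp"

    local mq, err = rabbitmq:new({ username = "guest",
                                   password = "guest",
                                   vhost    = "/" })
    if not mq then
        ngx.log(ngx.ERR, "failed to create client: ", err)
        return
    end

    mq:set_timeout(10000)  -- milliseconds

    -- the RabbitMQ STOMP adapter listens on port 61613 by default
    local ok, err = mq:connect("127.0.0.1", 61613)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect: ", err)
        return
    end

    -- publish a JSON message to an exchange through the STOMP adapter
    local headers = {
        destination      = "/exchange/test",
        persistent       = "true",
        ["content-type"] = "application/json",
    }
    local ok, err = mq:send('{"event":"signup"}', headers)
    if not ok then
        ngx.log(ngx.ERR, "failed to publish: ", err)
        return
    end

    -- put the connection back into the cosocket pool for reuse
    mq:set_keepalive(10000, 100)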

With correct usage of the cosocket API, I'm now able to publish
concurrently without any errors, as seen in our lab tests. The throughput
is still low when receipts (or confirms) are enabled, due to the overhead
of running STOMP on top of AMQP, so we've decided to go with a different
architecture that combines Redis, message aggregation and RabbitMQ for
fast, reliable messaging. For this work we've written a transport agent
called agentredrabbit, which is already being used in our production
pipelines and which we plan to open-source soon.

Regards.



On Fri, May 31, 2013 at 12:40 AM, agentzh <agentzh at gmail.com> wrote:

> Hello!
>
> On Thu, May 30, 2013 at 11:56 AM, Rohit Yadav wrote:
> >
> > I think this could be a reason, due to RabbitMQ's disk io for persisting
> > queue/messages.
> > In that case would it be too much performance penalty to sleep for a
> while
> > and retry with a new connection in the request?
>
> Explicit sleeping may be too much, because the waiting latency is not
> under complete control.
>
> I've been thinking about some kind of automatic request queueing in
> ngx_lua's cosocket connection pool, so that your tcpsock:connect() call
> can simply block temporarily, waiting for other connections to complete,
> when the backend request concurrency limit is hit. That way, the
> waiting latency is never longer than necessary and client traffic
> peaks cannot overload the backend services.
>
> Before that happens, the standard ngx_limit_req and ngx_limit_conn
> modules can also temporarily block excess client connections or
> requests, thus effectively limiting the backend request concurrency
> level.
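>
> For example, something like this in nginx.conf (the zone names, rates,
> and location are only placeholders) caps how many requests per client
> can ever reach the Lua code that talks to RabbitMQ:
>
>     # shared memory zones keyed by client address (sizes/rates illustrative)
>     limit_req_zone  $binary_remote_addr zone=mq_req:10m  rate=50r/s;
>     limit_conn_zone $binary_remote_addr zone=mq_conn:10m;
>
>     server {
>         location /publish {
>             limit_req  zone=mq_req burst=20;  # queue short bursts, reject the rest
>             limit_conn mq_conn 10;            # at most 10 concurrent conns per client
>             content_by_lua_file conf/publish.lua;
>         }
>     }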
>
> > To make the request finish fast, I can do a pcall(ngx.eof) to end the
> > request and then carry on with publishing the message to the RabbitMQ
> > broker.
> >
>
> This is a common trick used by the community. Alternatively, you can
> also use ngx.timer.at() to do async job processing:
>
> http://wiki.nginx.org/HttpLuaModule#ngx.timer.at
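>
> As a rough sketch of that pattern (the payload and handler below are
> made up for illustration), the request can be answered immediately and
> the actual publish deferred to a zero-delay timer:
>
>     local msg = '{"event":"signup"}'  -- example payload
>
>     -- runs detached from the request, after the response has been sent;
>     -- cosockets are available inside timer handlers
>     local function publish(premature, msg)
>         if premature then
>             return  -- the nginx worker is shutting down
>         end
>         -- connect to RabbitMQ and send msg here
>     end
>
>     ngx.say("queued")
>     ngx.eof()  -- the ngx.eof() trick: finish the response early
>
>     local ok, err = ngx.timer.at(0, publish, msg)
>     if not ok then
>         ngx.log(ngx.ERR, "failed to create timer: ", err)
>     end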
>
> But always keep in mind that async processing can be a devil: jobs can
> accumulate really fast and eventually exhaust system resources if the
> backend cannot catch up with the frontend. This also makes for an
> effective DoS attack.
>
> Happy hacking!
>
> Best regards,
> -agentzh
>

