[rabbitmq-discuss] Fwd: Access control documentation

Anthony anthony-rabbitmq at hogan.id.au
Mon Sep 29 08:26:07 BST 2008


(Whoops.. sent this from the wrong address initially)

Hi all..

After Ben suggested that some discussion was perhaps warranted on this
task, I went away and spoke to my colleagues about our experiences. I
wrote up the following and ran it past them. Yes, I approach this very
much from the perspective of someone who doesn't code regularly, but
is often the one installing/configuring/maintaining things based upon
supplied requirements. This isn't intended to belt anyone over the
head or kick up a stink, but to demonstrate that there are some
real-world needs for finer-grained ACLs. Hopefully this inspires some
thought/commentary on what an ACL system might include.


AMQP, and specifically its implementation in RabbitMQ, provides a
flexible, fast, reliable link between multiple front-end and back-end
systems, reducing the complexity and number of connections the
front-end software needs to make. This is a good thing, and the
developers I work with have already noticed significant speed
improvements in the tests they've done.

Moving forward, however, it would seem that without some sort of
finer-grained security, the system places far too much reliance on the
front end being in a secure environment, and more onus on the back end
to check every request. Not every client is going to be sitting on a
server under one's complete control, feeding into a "2.0" app with
which a user then interacts. In some cases, for example, a remote site
may have a RabbitMQ server as a concentrator, clustered in, and
clients at that remote site may connect either to the remote site's
server or, potentially, to a central fallback server. Of course, if a
remote site has a server clustered in, then that site would need a
copy of the Erlang cookie, which would grant access to "the channel"
between the sites, but it is a little easier to secure a server
machine than each client machine.

An "all or nothing" approach here places a LOT more onus on making
sure each site is "safe" and also means that should something go
wrong, integrity of data going over the common channel could be
compromised.

Take, for example, financial market information streams - information
multicast from a service provider:

* Not every subscribing front-end client will need or be authorised to
receive every stream put out - but often there will be overlap between
these streams in terms of client need
* Rarely, in this one-to-many or fixed group-to-many arrangement, will
the "many" need any rights beyond "read" on these informational
streams - and indeed they should have no chance of altering or
injecting false data into them
* It is extremely unlikely that a simple information subscriber should
have privileges to publish their own arbitrary streams

Let's look at the other side of a financial market - action requests
to a service provider (a rough sketch of the kind of rules these
examples imply follows this list):

* Not everyone will have something to buy or sell
* Not everyone will be authorised to use every market (they might have
privileges to buy and sell Widget Futures, but not standard Widget
manufacturer stocks)
* Not everyone will be authorised to provide services to the bus
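
To make the distinction concrete, here is a purely hypothetical sketch
of the kind of per-identity rules the two examples above imply,
written as Python data with a small check function. None of the names,
verbs or exchange labels come from RabbitMQ; they are invented for
illustration only.

```python
# Purely hypothetical ACL sketch -- none of these names or rules are
# RabbitMQ's own; they just illustrate the distinctions described above.

# Per-identity rules: which exchanges a client may consume from or publish to.
ACL = {
    "branch-client-17": {
        "consume": {"marketdata.widget-futures"},   # read-only stream access
        "publish": set(),                           # no publishing at all
    },
    "trading-desk-3": {
        "consume": {"marketdata.widget-futures", "marketdata.widget-stocks"},
        "publish": {"orders.widget-futures"},       # may trade futures only
    },
}

def is_allowed(identity: str, action: str, exchange: str) -> bool:
    """Return True if `identity` may perform `action` ('consume' or
    'publish') against `exchange`, per the hypothetical table above."""
    rules = ACL.get(identity)
    return bool(rules) and exchange in rules.get(action, set())

# The branch client can read market data but cannot inject orders.
assert is_allowed("branch-client-17", "consume", "marketdata.widget-futures")
assert not is_allowed("branch-client-17", "publish", "orders.widget-futures")
```

The point is simply that "read this stream" and "publish to that
market" are separate grants, per identity, enforced at the broker.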

Now, arguably, signing messages with private keys, for verification
against known public keys at each end, could prove
authenticity/authorisation - but this only moves the overhead out of
the AMQP layer and multiplies it many times over - for every message -
at the front-end and back-end layers. Arguably, if a person is
identified by an access credential at their point of entry to the AMQP
bus, and we know what they're allowed to do at that point, then it
should be enforced at that point (not to say that some additional
checking shouldn't be done for high-value transactions, but an "open
pipe" to multiple sites that then have to spend time checking keys is
perhaps something best avoided).
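
For comparison, here is a rough sketch of the per-message signing
approach described above, using Python's cryptography package (an
arbitrary choice to make the sketch concrete; the key handling and
message format are illustrative only). Every message pays a sign at
the publisher and a verify at each consumer - the cost the paragraph
argues is better paid once, at the point of entry.

```python
# Sketch: sign every message at the publisher, verify at the consumer.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In practice the private key stays with the publisher and only the
# public key is distributed; generated inline here to keep it runnable.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def sign(body: bytes) -> bytes:
    # Done once per published message -- this is the cost being multiplied.
    return private_key.sign(body, PSS, hashes.SHA256())

def verify(body: bytes, signature: bytes) -> bool:
    # Done once per received message, at every consuming site.
    try:
        public_key.verify(signature, body, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

msg = b"SELL 100 WIDGET-FUTURES @ 4.20"
assert verify(msg, sign(msg))
assert not verify(b"tampered message", sign(msg))
```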

Coming from Apache httpd, when I think of virtual hosts I know that
resources can be mapped in from other places - indeed shared between
vhosts (common CGI script folders, for example). The AMQP model of a
vhost allows no "common elements" between virtual hosts unless the
source of those common elements, or some intermediate service,
re-feeds those streams into the individual vhosts (needing a
connection to both the service's and the clients' vhosts). Then you
start to get away from "multicast" and end up with a vhost per
individual group of needs. Vhosts seem exclusively focussed on sharing
infrastructure between businesses for wholly separate purposes -
especially given the lack of ability to publish data to multiple
vhosts, and the fact that one must open multiple connections to access
multiple vhosts concurrently (increasing the complexity of the
connecting application).
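
To illustrate that last point: under the AMQP model, a client that
needs data from two vhosts has to hold two separate connections, one
per vhost. A minimal sketch with the pika Python client, assuming a
broker on localhost; the vhost names and credentials are made up:

```python
# Minimal sketch: one connection per vhost is required to use both.
import pika

def connect(vhost: str) -> pika.BlockingConnection:
    credentials = pika.PlainCredentials("branch-client-17", "secret")
    params = pika.ConnectionParameters(host="localhost",
                                       virtual_host=vhost,
                                       credentials=credentials)
    return pika.BlockingConnection(params)

# Two vhosts means two TCP connections (and two sets of channels) to
# manage, even though this is one application with one identity.
conn_market = connect("/marketdata")
conn_orders = connect("/orders")

market_channel = conn_market.channel()
orders_channel = conn_orders.channel()

# ... declare queues / consume on each channel as needed ...

conn_market.close()
conn_orders.close()
```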



