[rabbitmq-discuss] Publish won't work without transaction?

tsuraan tsuraan at gmail.com
Wed Sep 24 04:03:46 BST 2008


> BTW, you will most likely get better results with a consumer as
> opposed to polling the queue.

Yeah, it seemed like that's the intended way to use the system.  I
assume leaving no_ack off (i.e. False) in basic_consume will let me
acknowledge the consumed message and create new ones in a transaction.
Where are things like the no_ack option on py-amqplib's basic_consume
documented?
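
To make sure I'm reading the API right, here's roughly what I had in
mind (a rough py-amqplib sketch; the connection parameters and the
queue/exchange names are made up, and I'm guessing at how
tx_select/tx_commit interact with acks, so please correct me if I've
got it wrong):

    from amqplib import client_0_8 as amqp

    conn = amqp.Connection(host='localhost:5672', userid='guest',
                           password='guest', virtual_host='/')
    chan = conn.channel()

    chan.queue_declare(queue='work', durable=True, auto_delete=False)
    chan.exchange_declare(exchange='work.ex', type='direct',
                          durable=True, auto_delete=False)
    chan.queue_bind(queue='work', exchange='work.ex', routing_key='work')

    # Put the channel in transactional mode so the ack and any new
    # publishes commit (or roll back) together.
    chan.tx_select()

    def handle(msg):
        # ... do the real work on msg.body here ...
        followup = amqp.Message('next job', delivery_mode=2)
        chan.basic_publish(followup, exchange='work.ex', routing_key='work')
        chan.basic_ack(msg.delivery_info['delivery_tag'])
        chan.tx_commit()

    # no_ack=False: the broker keeps the message until we basic_ack it.
    chan.basic_consume(queue='work', no_ack=False, callback=handle)

    while True:
        chan.wait()   # blocks and dispatches deliveries to the callback

i.e. the consumer does its processing in the callback rather than
polling with basic_get.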

> Almost. The queue process will try to deliver the message to a waiting
> consumer before it decides to buffer it in a message queue.
>
> This is why it is cooler to have consumers rather than polling the queue.

Ok, that makes sense.

> Well it does deal with backlogs and it won't accept responsibility for
> a message it can't deliver (if you ask it to behave in this way).
>
> I just said that it is currently memory bound and doesn't overflow to disk
> yet.
>
> But what do you do when your disk or SAN fills up?
>
> Every resource is finite.

That is true, but with a 32-bit process, memory is a ton more
constrained than drive space, especially on the machines we use :-)

> Why not process it during the day as well?

Well, it does process all day as well, but some machines need all
night to work on their backlogs.  Some machines are very busy...

> Pluggable queues will allow you to implement custom queue behaviour
> without having to hack on the server.
>
> ...
>
> We want pluggable queues for a variety of use cases, and one
> particular one is to be able to cleanly implement disk overflow
> without hacking too much on the core of Rabbit. We try to reduce the
> server codebase to a minimum.
>
> If somebody shouts loud enough, we might shift it up the list of
> things to do, but remember, this is open source software. Either wait,
> ask very nicely, fix it yourself or pay somebody to implement it for
> you :-)

Yup, those are the options :-)  Is there a plan for pluggable queues,
or is it more of a dive-in-and-write-it sort of thing?  It looks like
all the behaviour of a queue lives in rabbit_amqqueue_process right
now, and the persistence of queues in rabbit_persister.  Would simply
replacing the queue (message_buffer) in rabbit_amqqueue_process with a
file-backed object work for what I'm doing?  Persistence would still
be handled by rabbit_persister for when things die, but a file-backed
queue wouldn't have to use so much system memory.  Is there a problem
with doing it that way?
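
For what it's worth, by "file-backed object" I mean something like the
toy sketch below (in Python because that's what I know, not Erlang;
it's only meant to show the idea of keeping buffered messages on disk
with only the read offset in memory):

    import pickle

    class FileBackedQueue(object):
        # Toy FIFO that keeps buffered messages in a spool file instead
        # of process memory; only the current read offset lives in RAM.

        def __init__(self, path):
            self.path = path
            self.read_pos = 0
            open(self.path, 'wb').close()       # start with an empty spool

        def enqueue(self, message):
            spool = open(self.path, 'ab')
            pickle.dump(message, spool)
            spool.close()

        def dequeue(self):
            spool = open(self.path, 'rb')
            spool.seek(self.read_pos)
            try:
                message = pickle.load(spool)
            except EOFError:
                spool.close()
                return None                     # nothing buffered
            self.read_pos = spool.tell()
            spool.close()
            return message

Obviously the real thing would live in the Erlang queue process and
have to worry about truncating the spool, crashes and so on; the sketch
is just to show where the memory saving would come from.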



