[rabbitmq-discuss] Publish won't work without transaction?
Ben Hood
0x6e6562 at gmail.com
Wed Sep 24 08:06:19 BST 2008
Tsuraan,
On Wed, Sep 24, 2008 at 4:03 AM, tsuraan <tsuraan at gmail.com> wrote:
> Where are
> things like the no_ack option on py-amqplib's basic_consume
> documented?
I think Barry could answer this one. In the meantime, just try to add
the no_ack flag as an argument to the consume function.
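With py-amqplib's 0.8 client that would look roughly like the sketch below. The queue name, broker address, and credentials are placeholders, and the import is deferred into the function so the snippet loads even without the library installed:

```python
def consume_no_ack():
    # Sketch only: assumes a broker on localhost and py-amqplib's
    # 0.8-era API (from amqplib import client_0_8).
    from amqplib import client_0_8 as amqp

    conn = amqp.Connection(host='localhost:5672',
                           userid='guest', password='guest')
    ch = conn.channel()

    def on_message(msg):
        # No basic_ack call here: no_ack=True below means the broker
        # treats messages as delivered as soon as it sends them.
        print(msg.body)

    ch.basic_consume(queue='test-queue', callback=on_message,
                     no_ack=True)
    while True:
        ch.wait()  # blocks, dispatching deliveries to the callback
```

The trade-off is the usual one: with no_ack set you lose redelivery of messages that were in flight when a consumer dies.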
> Yup, those are the options :-) Is there a plan for pluggable queues,
> or is it just a dive in and write it sort of thing? It looks like all
> the behaviour of a queue is in rabbit_amqqueue_process right now, and
> the persistence of queues is in rabbit_persister. Would just
> replacing the queue (message_buffer) in rabbit_amqqueue_process with a
> file-backed object work for what I'm doing? Persistence is still
> handled by rabbit_persister for when things die, but a file-backed
> queue wouldn't have to use so much system memory. Is there a problem
> with doing it that way?
Not especially. You could just patch rabbit_amqqueue_process without
bothering about pluggability. The latter is just about having more
flexibility and better factoring.
One crude way to do this is to set a hard limit on queue depth and
then after that just overflow to disk rather than putting new messages
into the memory queue. Keep the persister the same, because it logs to
disk anyway. Sounds simple, but it may get a bit fiddly when you try
to do things properly. Which is why we're not just jumping in and
hacking the functionality together to defuse this whole discussion -
it takes effort to get things right and to think through all of the
knock-on effects of what you are doing.
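Abstracting away from the Erlang internals, the idea is just a FIFO queue with a hard limit on in-memory depth that spills the rest to disk. A minimal sketch in Python (the class name and limits are illustrative, not anything in rabbit_amqqueue_process):

```python
import collections
import os
import pickle
import tempfile


class OverflowQueue:
    """FIFO queue keeping at most `limit` messages in memory;
    anything beyond that is spooled to a temporary file on disk."""

    def __init__(self, limit):
        self.limit = limit
        self.mem = collections.deque()
        self.spool = tempfile.TemporaryFile()  # disk overflow, FIFO
        self.read_pos = 0                      # next unread spool offset
        self.disk_count = 0

    def put(self, msg):
        # Once anything sits on disk, new messages must also go to
        # disk, otherwise FIFO ordering would be violated.
        if self.disk_count or len(self.mem) >= self.limit:
            self.spool.seek(0, os.SEEK_END)
            pickle.dump(msg, self.spool)
            self.disk_count += 1
        else:
            self.mem.append(msg)

    def get(self):
        msg = self.mem.popleft()  # raises IndexError when empty
        # Refill the memory queue from the spool, oldest first.
        while self.disk_count and len(self.mem) < self.limit:
            self.spool.seek(self.read_pos)
            self.mem.append(pickle.load(self.spool))
            self.read_pos = self.spool.tell()
            self.disk_count -= 1
        return msg

    def __len__(self):
        return len(self.mem) + self.disk_count
```

This keeps memory bounded by `limit` regardless of queue depth; it deliberately says nothing about crash recovery, which in the scheme above stays the persister's job.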
Having said this, pluggable queues are no silver bullet either - you
still need to implement them somehow :-)
There were some other options discussed on this previous thread:
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2008-September/001849.html
Ben