[rabbitmq-discuss] Publish won't work without transaction?
0x6e6562 at gmail.com
Wed Sep 24 01:18:01 BST 2008
On Tue, Sep 23, 2008 at 8:28 PM, tsuraan <tsuraan at gmail.com> wrote:
> The response from the passive declaration gives how many messages are
> in the queue. That's how I'm checking the number of messages that
> exist. There isn't a consumer yet; I've just been trying to make
> message insertion work, or at least understand how it's supposed to
> work. Maybe I'm not making sense here - see below.
This is the definition of the message-count field from the spec:
"This field reports the number of messages pending on the queue,
excluding the message being delivered. Note that this figure is
indicative, not reliable, and can change arbitrarily as messages are
added to the queue and removed by other clients."
As much as anything, it depends on the point in time at which you poll the queue.
BTW, you will most likely get better results with a consumer as
opposed to polling the queue.
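To make the "indicative, not reliable" caveat concrete, here is a toy
model in plain Python (not the real client API or broker): the count a
passive declare reports is only a snapshot, and another client can
change the queue between your poll and anything you do with the figure.

```python
from collections import deque

class ToyQueue:
    """Minimal stand-in for a broker queue (illustration only)."""
    def __init__(self):
        self.messages = deque()

    def declare_passive(self):
        # Like AMQP queue.declare(passive=True): reports the number
        # of messages pending at this instant.
        return len(self.messages)

    def publish(self, body):
        self.messages.append(body)

    def get(self):
        return self.messages.popleft() if self.messages else None

q = ToyQueue()
q.publish("job-1")
q.publish("job-2")

count = q.declare_passive()   # snapshot says 2
q.get()                       # meanwhile another client dequeues one

print(count, q.declare_passive())  # the old figure is already stale
```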
> Ok, I'm definitely having terminology issues, and I can't find the
> definitions in the AMQP-0.8 spec. A producer publishes a message to
> an exchange. An exchange routes a message into a queue (or many
> queues). A consumer gets a message from a queue, and once it has
> acknowledged that message, then the message is delivered. Is that
Almost. The queue process will try to deliver the message to a waiting
consumer before it decides to buffer it in a message queue.
This is why it is cooler to have consumers than to poll the queue.
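The deliver-before-buffer behaviour can be sketched as a toy dispatch
loop (plain Python, nothing like Rabbit's actual Erlang internals): an
incoming message goes straight to a waiting consumer if there is one,
and only lands in the message queue otherwise.

```python
from collections import deque

class ToyQueueProcess:
    """Illustrative model of a queue process (not the real thing)."""
    def __init__(self):
        self.buffer = deque()    # messages with no consumer waiting
        self.waiting = deque()   # consumer callbacks waiting for work

    def add_consumer(self, callback):
        # Drain any backlog first, then park the consumer as waiting.
        while self.buffer:
            callback(self.buffer.popleft())
        self.waiting.append(callback)

    def publish(self, body):
        if self.waiting:
            # Deliver directly to a waiting consumer...
            self.waiting[0](body)
        else:
            # ...otherwise buffer it in the message queue.
            self.buffer.append(body)

received = []
qp = ToyQueueProcess()
qp.publish("early")               # no consumer yet: buffered
qp.add_consumer(received.append)  # backlog drained on subscribe
qp.publish("late")                # delivered directly, never buffered
print(received, len(qp.buffer))   # ['early', 'late'] 0
```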
> If all consumers are using basic_get instead of basic_consume, will
> immediate delivery always fail? In other words, if a queue has no
> "consumer" on it, will all messages marked for immediate delivery be
Pretty much, although I don't know what the exact semantics are from
the protocol perspective.
For all intents and purposes though, a get will dequeue a buffered
message and report the resulting length of the queue at the time that
that one particular message was read.
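In toy form (again just an illustration, not the protocol
implementation), a get is a dequeue that also reports the length it
leaves behind, roughly as basic.get-ok carries a message-count field:

```python
from collections import deque

messages = deque(["a", "b", "c"])

def basic_get(queue):
    """Dequeue one buffered message and report the remaining length."""
    if not queue:
        return None, 0        # analogous to basic.get-empty
    body = queue.popleft()
    return body, len(queue)   # length at the moment this one was read

body, remaining = basic_get(messages)
print(body, remaining)  # a 2
```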
> Yeah, the backlog is exactly what I need to manage though, because my
> backlog management isn't the best. That's essentially the problem I
> was hoping that RabbitMQ would solve; apparently it doesn't do that
> yet, but it's still much better designed than my way, and I'm guessing
> it's probably higher performance. I have a bit of Erlang experience;
> how difficult do you think it would be to implement a custom queue
> that allows for ludicrous backlogs, once the upcoming RabbitMQ version
> with pluggable queues is done?
Well, it does deal with backlogs, and it won't accept responsibility for
a message it can't deliver (if you ask it to behave in this way).
I just said that it is currently memory-bound and doesn't overflow to disk yet.
But what do you do when your disk or SAN fills up?
Every resource is finite.
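That refusal of responsibility corresponds to publishing with the
immediate flag in AMQP 0-8. A toy version (not the real broker logic)
bounces the message back to the publisher when no consumer is ready,
instead of quietly buffering it:

```python
from collections import deque

class ToyBroker:
    """Illustrative sketch of immediate-flag handling, not Rabbit's code."""
    def __init__(self):
        self.consumers = []
        self.buffer = deque()
        self.returned = []   # messages bounced back to the publisher

    def publish(self, body, immediate=False):
        if self.consumers:
            self.consumers[0](body)
        elif immediate:
            # No consumer ready: refuse responsibility, basic.return-style.
            self.returned.append(body)
        else:
            self.buffer.append(body)

b = ToyBroker()
b.publish("task", immediate=True)
print(b.returned)  # ['task'] bounced rather than buffered
```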
> I can probably hack my way around this, but the tasks my system gets
> aren't requested; it just gets tasks as they are generated. Often, a
> system will get a lot of work during the day, and then just process it
> all night.
Why not process it during the day as well?
> Sometimes, a system will just get too much data, and never
> be able to process it all. Then, the customer calls tech support, but
> no data is lost. The pluggable queues would probably be the right
> answer for this, I guess.
Pluggable queues will allow you to implement custom queue behaviour
without having to hack on the server.
>> There is a roadmap item to address this - but no ETA.
> Is that item the pluggable queues, or an actual implementation of
> queues that can grow indefinitely?
We want pluggable queues for a variety of use cases, and one
particular one is to be able to cleanly implement disk overflow
without hacking too much on the core of Rabbit. We try to reduce the
server codebase to a minimum.
If somebody shouts loud enough, we might shift it up the list of
things to do, but remember, this is open source software. Either wait,
ask very nicely, fix it yourself, or pay somebody to implement it for you.