[rabbitmq-discuss] Running StressPersister.java client against RabbitMQ and finding the stress_persister.log file is being "beaten to death"
Matthew Sackman
matthew at lshift.net
Tue Jan 19 14:03:22 GMT 2010
Hi John,
On Tue, Jan 19, 2010 at 10:22:29AM +0100, John Apps wrote:
> What parameters should one tweak in order to reduce the I/O to the log file
> in the StressPersister test? And others with similar behaviour, of course.
A quick look at the code suggests that increasing the sample interval
(the -s option) should do the trick.
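For example (the -s spelling is inferred from the option name above and
from the s000100 field in the output filename in the run shown below, so
treat the exact flag as an assumption), a larger sample interval means
proportionally fewer lines written to the log:

```shell
cd rabbitmq-java-client/build/dist
# Sample every 1000 messages instead of the default 100, which should
# cut the logging traffic by roughly 10x (-s flag assumed).
sh runjava.sh com/rabbitmq/examples/StressPersister -s 1000 -C blah
```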
> The file grows to about 140MB using the default StressPersister parameters,
> only to be renamed to stress_persister.log.previous and the whole cycle
> starts again.
> With smaller parameters than the default, i.e., smaller message size, lower
> backlog etc., the pattern is the same, but the frequency with which the log
> is renamed and recreated is faster.
That's very odd. I must confess to being totally unfamiliar with this
example - it's not one of the tests I've written to stress the new
persister.
Just running it, I get rather different results to you:
rabbitmq-java-client/build/dist$ sh runjava.sh com/rabbitmq/examples/StressPersister -C blah
... things happen ...
rabbitmq-java-client/build/dist$ ls -l
total 756
... stuff ...
-rw-r--r-- 1 matthew matthew 5077 Jan 19 13:54 stress-persister-b00005000-B0000016384-c00025000-s000100-blah.out
The results there are in a format suitable for plotting with gnuplot or
similar. Digging back through our hg repo, Tony wrote this test in order
to replicate some of the results I was seeing using my Erlang-client
tests with the new persister.
> What would be ideal would be to be able to specify an initial (large)
> allocation for the file (so it does not extend it all the time) along with
> an extension quantity, should it have to be extended. In addition, if one
> could specify how much should be retained in memory prior to being
> flushed to disk, that would also make things move along.
Not really. Most modern file systems support holes in files, so an
initial preallocation is virtually a no-op and buys you nothing later
on. In our experience, appending to a file is by far the best way to
utilise disk bandwidth. The OS is in any case highly likely to be
buffering pages of outstanding writes to disk.
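To illustrate why preallocation is essentially free on such file
systems: extending a file's logical length doesn't write any data
blocks, so on a filesystem with sparse-file support almost no disk
space is actually consumed. A minimal sketch (the file name and the
140 MB figure are arbitrary, the latter just echoing the log size
mentioned above):

```java
import java.io.File;
import java.io.RandomAccessFile;

public class SparseDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("sparse-demo", ".dat");
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        // "Preallocate" 140 MB: this only extends the logical length.
        // On filesystems with sparse-file support, no data blocks are
        // written, so the on-disk footprint stays close to zero.
        raf.setLength(140L * 1024 * 1024);
        raf.close();
        // The logical size is 140 MB even though (on most Unix
        // filesystems) almost nothing has been allocated on disk.
        System.out.println(f.length());
        f.delete();
    }
}
```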
Wrt buffering, the code uses a Java PrintWriter and calls println on
it. It does call flush appropriately, rather than on every line.
Hopefully the Java libraries underneath understand that they're writing
to a file rather than a terminal, and thus aren't flushing on every
line themselves, but I'm not 100% sure about that.
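For what it's worth, whether println flushes is controlled by the
autoFlush argument to the PrintWriter constructor, not by what it's
writing to. A minimal sketch (file name arbitrary) showing lines
accumulating in a BufferedWriter until an explicit flush:

```java
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.PrintWriter;

public class FlushDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("flush-demo", ".log");
        // autoFlush = false: println does NOT flush; lines sit in the
        // BufferedWriter (8 KB by default) until an explicit flush().
        PrintWriter out = new PrintWriter(
                new BufferedWriter(new FileWriter(f)), false);
        for (int i = 0; i < 10; i++) {
            out.println("sample " + i);
        }
        System.out.println("before flush: " + f.length()); // still 0
        out.flush(); // one explicit flush writes the whole buffer
        System.out.println("after flush:  " + (f.length() > 0));
        out.close();
        f.delete();
    }
}
```

With autoFlush set to true, println would flush after every line, which
is exactly the per-line I/O pattern you'd want to avoid here.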
I'm afraid I just can't replicate what you're seeing. How are you
running the example?
Best wishes,
Matthew