Resource behavior questions, and overall comments
WECaputo at thoughtworks.COM
Fri Mar 1 19:16:03 UTC 2002
Hi,
A couple of my co-workers here at ThoughtWorks and I have been
evaluating and trying out QMTest since the 1.0 release.
First off, a big thumbs up on the design. We have been writing our own
testing frameworks for Extreme Programming style Story (acceptance) testing
for some time now, and QMTest's feature set and design is very close to (and
in some ways ahead of) where the emerging framework of our various testers
was heading.
So, naturally, we were excited to see a tool like QMTest go open source, as
duplicate effort is wasteful and standards are useful.
First, the good news: we have been able to get QMTest working nicely with
the open source continuous integration tool CruiseControl that ThoughtWorks
released last year (cruisecontrol.sourceforge.net). In particular, QMTest's
XML output was very simple to adapt to our reporting XSL, and I am even
considering recommending that we make this a standard format for acceptance
test output in the reporting XSL distributed with CruiseControl.
Now the not-so-good news, which in turn leads to my subject question:
we have found the behavior of Resources in QMTest to be problematic.
According to the docs, tests associated with a resource will have the
following behavior (sketched in Python after the sequence):
R1.setUp
t_1.run
t_2.run
t_3.run
...
t_n.run
R1.cleanUp
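To make sure we're reading the docs the same way, here is a minimal Python
sketch of those semantics as we understand them (the names are ours for
illustration, not QMTest's actual API):

    # Set the resource up once, run every associated test, then clean up.
    def run_with_resource(resource, tests):
        resource.setUp()
        try:
            for test in tests:
                test.run()
        finally:
            resource.cleanUp()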
Presumably, if a second resource (R2) existed and was associated with a
different set of tests, that resource would be called *after* the cleanup of
R1:
R1.cleanUp
R2.setUp
t_n+1.run
t_n+2.run
t_n+3.run
...
t_n+n.run
R2.cleanUp
But this isn't what happens; it seems that instead we see something like
this:
R1.setUp
R2.setUp
t_1.run
t_2.run
t_3.run
...
t_n.run
t_n+1.run
t_n+2.run
t_n+3.run
...
t_n+n.run
R1.cleanUp <-- note the order
R2.cleanUp
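That cleanup order is more than a cosmetic issue, since resources routinely
depend on one another. A contrived but runnable Python illustration of the
hazard (these resources are ours, just to show the point): if R2 creates a
file inside a directory owned by R1, cleaning R1 up first fails outright:

    import os, tempfile

    class ScratchDir:                 # plays the role of R1
        def setUp(self):
            self.path = tempfile.mkdtemp()
        def cleanUp(self):
            os.rmdir(self.path)       # OSError if anything is left inside

    class ScratchFile:                # plays the role of R2; depends on R1
        def __init__(self, dir_resource):
            self.dir = dir_resource
        def setUp(self):
            self.path = os.path.join(self.dir.path, "data")
            open(self.path, "w").close()
        def cleanUp(self):
            os.remove(self.path)

    r1 = ScratchDir()
    r2 = ScratchFile(r1)
    r1.setUp()
    r2.setUp()
    # Cleaning r1 up first (the observed order) raises OSError, because
    # r2's file is still inside the directory; the LIFO order works:
    r2.cleanUp()
    r1.cleanUp()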
This also happens when R1 and R2 are both depended upon by the same tests.
In other words, isolating tests in an environment isn't possible with QMTest
the way it is in, say, JUnit (with fixtures).
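For comparison, Python's own unittest module (PyUnit, the same model as
JUnit) brackets every individual test method with setUp and tearDown, so no
test sees another test's leftovers:

    import unittest

    class QueueTest(unittest.TestCase):
        # setUp and tearDown run around *each* test method.
        def setUp(self):
            self.queue = []
        def tearDown(self):
            self.queue = None
        def test_starts_empty(self):
            self.assertEqual(self.queue, [])
        def test_append(self):
            self.queue.append(1)
            self.assertEqual(self.queue, [1])

    if __name__ == "__main__":
        unittest.main()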
What we find we need for all but the simplest testing is nesting and
atomic resource execution, along with the option to have a resource isolate
each test in a set (a rough sketch of all three follows the sequences below):
Atomic execution:

R1.setUp
test
test
test
R1.cleanUp

Nesting:

R1.setUp
R2.setUp
test
test
test
R2.cleanUp <-- order is important
R1.cleanUp

Per-test isolation:

R1.setUp
test
R1.cleanUp
R1.setUp
test
R1.cleanUp
R1.setUp
test
R1.cleanUp
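To be concrete about what we're asking for, here is a rough Python sketch of
a scheduler with those semantics -- entirely hypothetical, not a patch
against QMTest's actual classes:

    # Hypothetical scheduler: resources nest like a stack, cleanups run in
    # reverse (LIFO) order, and per_test=True brackets each individual test
    # with the full setUp/cleanUp cycle.
    def run_suite(resources, tests, per_test=False):
        if per_test:
            for test in tests:
                run_suite(resources, [test])
            return
        active = []
        try:
            for resource in resources:        # outermost first: R1, then R2
                resource.setUp()
                active.append(resource)
            for test in tests:
                test.run()
        finally:
            for resource in reversed(active): # innermost first: R2, then R1
                resource.cleanUp()

Calling run_suite([R1], tests) gives the first sequence above,
run_suite([R1, R2], tests) the second, and run_suite([R1], tests,
per_test=True) the third.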
So my questions are: first, is this by design, or is it a defect? Second,
are you aware of this behavior? Third, are there any plans to change it?
A bigger issue for us is in regard to change. The structure of the
resource-calling code in QMTest is very difficult to change, because the
ordering seems tied to the database class and the loading of the resource,
which in turn is tied to the directed-graph architecture of the engine
itself. We really can't plan around a tool that is hard to modify.
This is complicated by the lack of tests for the engine itself. "Eating our
own dogfood" has been a big part of the usefulness of the testing rigs
we've written before, even if you don't build them test-first. We really
can't rely on an engine that does not have tests.
Truthfully, we would love to contribute code to this project, and to the
use of QMTest in general, but at this point we are finding it easier to
modify our existing rigs than to modify QMTest for our needs, and the above
behavior makes most of the testing we do very difficult. If I had to start
a new project tomorrow, I would probably roll a simple test rig, adopt your
output standard (a truly excellent fit with our CI app) and possibly your
test and resource import standard (also well done), and otherwise stop
using QMTest, because of the above-mentioned issues.
Thus, any code we wrote wouldn't be patches to QMTest, but more probably a
simplified version of a testing framework that emphasized ease of
modification. Given our company's support for open sourcing, we would most
likely be able to release such code. But that is more like a fork than a
patch, and I wouldn't want that to happen; so if we *do* open source any
more of our testing code (we released a non-xUnit-like C++ unit test
framework a while back), it would most likely be a separate project from
QMTest at this time (with due credit given for the input/output formats, if
we used them, of course).
One thing I will say: I am hooked on Python for tool writing. It blows Java
away, and while I really like Ruby (we have used it a lot for tool writing),
its acceptance is not as wide, and some clients like having tools in
languages friendly to their platform (i.e., even though Ruby is fine on
Windows, MS shops seem to trust Python more, while the Java world is more
agnostic).
So as to end this on an upbeat note, I have to commend you on the speed of
the fixes you are producing -- the stability of the tool (already good when
we first tried it) is significantly better with each release. Your recent
mention of improvements to the GUI is also good news. We stopped using it
after a couple of days because it was easier to modify the XML directly,
but we find a GUI to be a good idea when we are utilizing our Customer
directly for test writing (and we often do write front ends to our testing
rigs for that reason).
Sorry for providing so much critical feedback; I hope this doesn't sound
too down on you. Overall I would definitely give you guys a thumbs up, and
if it weren't for those issues, we would definitely be evangelizing this
tool through the XP community, as we in the XP and agile world could use a
good AT framework. We will keep watching QMTest with interest.
Best,
Bill