[qmtest] The state of GUI
Vladimir Prus
ghost at cs.msu.su
Tue Apr 5 06:35:10 UTC 2005
Hi Stefan,
> Vladimir Prus wrote:
> > Most of the above are glitches, not terrible bugs. But, given that they
> > have existed for quite some time, I am starting to wonder how the QMTest
> > GUI is supposed to be used. Maybe the idea is that most users will create
> > custom databases and use them, creating tests directly in the
> > filesystem/DB/whatever? And the GUI is just for running tests?
>
> I don't know about the original intent of the various access points, but as
> far as I am concerned as a user, I have indeed mostly been working with
> custom databases that are non-modifiable. I've only recently started to use
> the GUI.
>
> I'd be interested in whether many QMTest users use the interactive GUI
> mode to modify test databases, too, as that would give us an indication
> of which features to work on.
Fair enough. I, too, am not entirely sure whether it's best to improve the
existing GUI, use custom databases, or just write my own GUI with PyQt.
I tend to like the GUI for running tests, at least when developing (the
nightly build surely uses "run + summarize"). However, creating tests is less
convenient. Say I'd like to list a set of source files, automatically compute
the expected results, and create a test for each. The question, though, is
whether it's better to use a custom database and a shell script, or the
standard XML database with a Python script that uses the QMTest API.
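For illustration, here is a minimal sketch of the second approach: a
standalone Python script that scans source files and emits one XML test
description per file. The element and attribute names ("test", "argument",
"command.ExecTest") and the expected-result computation are placeholders
invented for this example, not QMTest's actual schema or API; a real script
would go through QMTest's database interface instead.

```python
import glob
import xml.etree.ElementTree as ET

def compute_expected(source):
    # Stand-in for whatever tool actually computes the expected result
    # for a given source file (an assumption for this sketch).
    return "output-for-" + source

def make_test(source, expected):
    # Build a hypothetical XML test description; the tag and class names
    # here are placeholders, not QMTest's real .qmt schema.
    test = ET.Element("test", {"class": "command.ExecTest"})
    src = ET.SubElement(test, "argument", name="source")
    src.text = source
    exp = ET.SubElement(test, "argument", name="expected_output")
    exp.text = expected
    return ET.tostring(test, encoding="unicode")

# Generate one test description per source file found.
for path in sorted(glob.glob("tests/*.cpp")):
    print(make_test(path, compute_expected(path)))
```

The same loop could just as easily write each description to its own file
inside an XML test database directory.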
> I agree that the GUI can be enhanced a lot.
>
> I'm currently looking into a 'qmtest report' command that reads multiple
> result files (potentially from runs on different platforms) and generates
> an xml report. In this context I'm interested to know what information
> should be available in the report to make it usable. Your suggested
> expectation annotations seem to be a good candidate here.
Did you look at the Boost results format? See:
http://www.meta-comm.com/engineering/boost-regression/cvs-head/developer/program_options.html
Essentially, several test results are merged into one big table. Each failure
can have "notes" -- that's my proposed failure annotation. As I've mentioned
somewhere in the tracker, the key point is that there is one expectation file
for all toolsets.
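As a sketch of that merge step (the data shapes and field names below are
assumptions for illustration, not the actual Boost or QMTest file formats):
each toolset contributes a mapping from test name to outcome, and a single
expectation table supplies the per-failure notes.

```python
def merge_results(runs, expectations):
    """Merge per-toolset results into one table.

    runs:         {toolset: {test_name: "PASS" | "FAIL"}}
    expectations: {test_name: note} -- one table for all toolsets.
    Returns:      {test_name: {toolset: annotated outcome}}
    """
    table = {}
    for toolset, results in runs.items():
        for test, outcome in results.items():
            cell = outcome
            if outcome == "FAIL" and test in expectations:
                # Annotate expected failures with their note.
                cell = "FAIL (%s)" % expectations[test]
            table.setdefault(test, {})[toolset] = cell
    return table

# Hypothetical input: two toolsets, one known failure.
runs = {
    "gcc":  {"t1": "PASS", "t2": "FAIL"},
    "msvc": {"t1": "FAIL", "t2": "PASS"},
}
notes = {"t2": "known compiler bug"}
print(merge_results(runs, notes))
```

Because the notes live in a single expectation table rather than one file per
toolset, an annotation written once applies to every column of the report.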
As a related note -- a GUI might make perfect sense for such an expectation
file. From time to time somebody commits an ill-formed XML expectation file
to Boost, and all test reporting breaks ;-)
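One cheap guard against that, independent of any GUI, is a well-formedness
check run before commit. A minimal sketch using only Python's standard
library (the file name in the usage comment is hypothetical):

```python
import sys
import xml.etree.ElementTree as ET

def is_well_formed(path):
    # Parse the file; any XML syntax error means it is ill-formed.
    try:
        ET.parse(path)
        return True
    except ET.ParseError as e:
        print("ill-formed XML in %s: %s" % (path, e), file=sys.stderr)
        return False

# Usage, e.g. in a pre-commit hook:
#   if not is_well_formed("expected_results.xml"):
#       sys.exit(1)
```

This only catches syntax errors, not semantic mistakes in the expectations,
but it would stop the "whole report breaks" failure mode.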
- Volodya