QMTest HPCS extensions

Andrew Funk afunk at ll.mit.edu
Tue Nov 30 19:30:38 UTC 2004


Hi Mark,

See responses below:

> -----Original Message-----
> From: Mark Mitchell [mailto:mark at codesourcery.com] 
> Sent: Tuesday, November 30, 2004 1:14 AM
> To: Andrew Funk
> Subject: Re: QMTest HPCS extensions
> 
> 
> Andrew Funk wrote:
> > Hi Mark,
> > 
> > I would like to begin a discussion to determine the best way to
> > transition over to CodeSourcery the development of QMTest features
> > in support of HPCS.  Here are the latest versions of the extension
> > classes and documentation.
> 
> Andy --
> 
> First, I apologize for taking so long to make progress here.
> 
> Second, do you have any objection to moving this discussion to the
> QMTest mailing list?  I'd like to do that so that our discussions are
> recorded for posterity, and for Stefan Seefeld, who will be starting
> with us January 1st.  Stefan will be working on the HPCS QMTest stuff,
> so it would help to have the discussion on the lists where he can see
> it, even before he starts.  I'm very excited about Stefan joining us
> because I think it will give us a chance to make progress a lot more
> quickly; I'm a bottleneck.  If you give me your permission, I'll resend
> your message and mine to the mailing lists.
> 
> Third, I've reviewed your work.  I think you've done a great job
> understanding QMTest.  I'd like to start by figuring out how to clean
> up ParameterDatabase and make it a part of the standard QMTest
> distribution.  Does that sound like a good plan to you?
> 

Agreed.  I am looking at the extensions I have written as a functional
prototype that will help us (the development team) understand the nature
of the functionality we (HPCS) want to get out of QMTest.  I would like
to leave it up to you and Stefan to decide how best to implement that
functionality and incorporate it into QMTest.  Of course I will be glad
to help out, and especially to explain what I was trying to do with my
code wherever that is useful.

> I'd like to make a few changes.  First, I think there's a conceptual
> issue.  In particular, there are two kinds of replication in play.
> There is replication on the part of the testsuite designer and on the
> part of the testsuite executor.  For example, the former might say
> "this same code should be run through Sloccount and through a
> Cyclomatic complexity tool".  The latter might say "all tests should
> be run with one, two, and four processors".  Conceptually, the first
> kind of replication should be part of the testsuite; the latter should
> be part of the context.  I say that because the testsuite designer
> cannot know how many processors are available.  Do you agree?  I think
> we could handle the test-executor replication by using a variant of
> MountDatabase; we would replicate the ParameterDatabase N times as
> required to deal with the replication requested by the test-executor.
> 

That's a good point.  Perhaps it would be a good idea to separate out
the number of processors, and maybe some other platform-dependent
settings like the compiler name and options.  For the NAS, there are
actually a lot of compiler settings in the makefiles that have to be
edited separately for each platform.  I thought about trying to pull
this information into a context, but I haven't gotten around to it yet.
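
To make that concrete, here is roughly what I picture for the
executor-level settings.  The key names below are placeholders I made up
for this message (they are not what my prototype uses), and I am not
assuming any particular QMTest syntax -- only that the context amounts
to a flat set of name=value settings supplied by whoever runs the suite:

    # Supplied by the person running the testsuite, not by the
    # testsuite designer.  Key names are illustrative only.
    hpcs.processors=4
    hpcs.compiler=mpif77
    hpcs.compiler_flags=-O3

The test classes would then read these values from the context rather
than from the test database, which is exactly the designer/executor
split you describe.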

The only downside I can see of separating the concerns is that it might
make test specification more difficult (having to specify parameters in
two places).  Another feature I would like to explore is a more
user-friendly alternative to editing the configuration file by hand
(e.g. making it part of the GUI).  Improving the method of input might
eliminate any downside of separating the concerns.  So I would be
interested to hear your thoughts on that as well.

> I'm a little confused about exactly how much parameterization we need.
> For example, is the first kind of parameterization (the part done by
> the testsuite designer) the same across all tests in the database?
> For example, do we want to run *all* tests through Sloccount and a
> cyclomatic complexity tool?  Or is that kind of parameterization
> different for different tests?
> 

I find it helpful to use the NAS configuration as a reference when
thinking about specific test cases, but I also want to be careful not to
tailor this solution just to the NAS.  With any luck this framework will
be able to test new benchmarks without changes or customization.

Having said that, let me answer this question using the NAS as an
example.  Conceptually, we want to run the same suite of tests (e.g.
sloccount, complexity, compile and execute) on all available
implementations of the NAS.  In practice, we need to use different tools
and settings to get this data for the different implementations.  So
with my current method of input, this requires several independent sets
of parameters.  If we can separate out the platform- and
implementation-specific parameters into contexts, that may make the test
specification cleaner and more intuitive.
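
To illustrate the shape of that split, here is a throwaway sketch (this
is not my prototype's code; the names SUITE_STEPS, IMPLEMENTATIONS, and
the context keys are all invented for this message):

    # Designer-level replication: every implementation gets the same
    # suite of steps.  Executor-level settings come from the context.
    SUITE_STEPS = ["sloccount", "complexity", "compile", "execute"]

    IMPLEMENTATIONS = {
        "serial": {"source_dir": "NPB/SER"},
        "mpi":    {"source_dir": "NPB/MPI"},
        "openmp": {"source_dir": "NPB/OMP"},
    }

    def expand_tests(implementations, steps):
        """Return one (test_id, parameters) pair per implementation/step."""
        tests = []
        for impl, params in implementations.items():
            for step in steps:
                tests.append(("%s.%s" % (impl, step), dict(params, step=step)))
        return tests

    def run_test(test_id, parameters, context):
        """Platform-specific settings (processor count, compiler) come
        from the context, so the test database never hard-codes them."""
        processors = int(context.get("processors", "1"))
        compiler = context.get("compiler", "cc")
        print("%s: step=%s dir=%s np=%d cc=%s"
              % (test_id, parameters["step"], parameters["source_dir"],
                 processors, compiler))

    if __name__ == "__main__":
        context = {"processors": "4", "compiler": "mpif77"}
        for test_id, parameters in expand_tests(IMPLEMENTATIONS, SUITE_STEPS):
            run_test(test_id, parameters, context)

The point is just that the implementation/step cross product is fixed by
the testsuite designer, while the processor count and compiler are bound
late through the context.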

These are my high-level thoughts.  I think there are a lot of specific
issues that we may want to focus on one at a time.  So let me know where
you think we might want to start, and I can try to give more specific
information about exactly what we want.

Thanks,
Andy 



