[qmtest] passing data from one test to another

Scott Lowrey slowrey at nextone.com
Mon Jul 19 19:41:09 UTC 2004


Mark Mitchell wrote:

> For us, a canonical example comes from testing a compiler.  Our 
> canonical test is a single file which must be compiled, linked, and 
> run.   Success means doing all of that and exiting with exit code 
> zero.  We could look at that as three separate tests (a compile test, 
> a link test, and a run test) with a data dependency -- but we don't 
> because users of compilers don't think of it that way.  Either the 
> compiler works for that program or it doesn't.  So, we have just one 
> test, but the annotations indicate what kind of failure occurred.
>
> Perhaps you could explain your use case and we could see if there's a 
> way to do it in QMTest that makes sense?  If not, then we can think 
> about what we could do to QMTest to improve it.
>
OK, here's a case that I'm dealing with today.

Our tests exercise a remote device that handles telecom traffic.  There 
are several processes running on it - it's a Unix box - and we hit those 
processes with network traffic generated on the local host.  One of 
those processes is highly concurrent and real-time; it is crucial to the 
operation of the system under test.  Although it should never crash or 
hang, it's written in C, so it might.  This process runs for the 
duration of a test session.

If a test results in a crash or lockup, we need to know immediately and 
stop testing.  The test classes themselves could do the "health check" 
but, even if they did, they could not stop the QMTest execution engine 
from marching on; not without making every single test dependent on the 
one before it, which is not an option for us because the tests are not 
sequential in nature and may even be randomly shuffled.
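
For concreteness, the kind of health check I mean is just a liveness 
probe against that critical process.  Something like the sketch below, 
where the host name, process name, port, and the ssh/pgrep approach are 
all made up for the example:

    import os
    import socket

    SUT_HOST = "sut.example.com"     # made-up host name for the device
    CRITICAL_PROC = "rtproc"         # made-up name of the real-time C process
    CONTROL_PORT = 5060              # made-up port that process listens on

    def sut_is_healthy(timeout=5):
        """Return true if the critical process is running and answering."""
        # Is the process still alive?  (Assumes passwordless ssh to the SUT
        # and pgrep on the remote box.)
        status = os.system("ssh %s pgrep %s >/dev/null"
                           % (SUT_HOST, CRITICAL_PROC))
        if status != 0:
            return False
        # Is it still answering, or has it locked up?
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            try:
                sock.connect((SUT_HOST, CONTROL_PORT))
            except (socket.error, socket.timeout):
                return False
        finally:
            sock.close()
        return True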

I'm toying with the idea of writing a custom execution engine that 
watches the SUT and does all the necessary monitoring and data gathering 
in the event of a catastrophic failure.  The same engine could also 
terminate the test loop when such a failure occurs.  We've already 
tweaked cmdline.py a tiny bit to gather some SUT information prior to 
testing, so it looks like we're headed down that path...
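
In rough pseudo-Python, the loop I'm imagining looks something like 
this.  It isn't real QMTest engine code; the dispatch, health-check, and 
data-gathering routines are just stand-ins passed in as callables:

    def run_until_catastrophe(test_ids, run_one_test,
                              sut_is_healthy, gather_crash_data):
        """Run tests in order, but stop the session if the SUT dies.

        run_one_test(test_id) dispatches a single test and returns its
        result; sut_is_healthy() and gather_crash_data() stand in for
        the monitoring and data-gathering pieces.
        """
        results = []
        for test_id in test_ids:
            results.append(run_one_test(test_id))
            if not sut_is_healthy():
                # Catastrophic failure: grab whatever state we can from
                # the SUT (cores, logs, process tables) and end the run.
                gather_crash_data()
                break
        return results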

Here's another one, and probably more relevant:

We have hundreds of tests that simulate telephone calls.  They are quite 
self-contained except for a piece of underlying software that requires 
us to increment a UDP port number for each test.  The new port number 
must be the number used in the preceding test plus two.

Because our custom test class uses the same model as python.ExecTest, 
each test is exec'd.  Suppose we kept the port number in the context: we 
could pass the port number *to* the test, but because of the exec, the 
test can't increment the number and pass it *back* for the next test.  
So we cheated by defining a global variable in our custom test class's 
module and adding a reference to it to the global namespace that is 
passed to the test.
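
Stripped to its essentials, the cheat looks roughly like this.  The 
names are made up and the class is simplified (the real one follows 
QMTest's Test interface); the important part is that the counter lives 
in a mutable object, so a change made inside the exec'd code is visible 
to the next test:

    # Module-level ("global") state shared by every test run in this
    # process.  A plain integer rebound inside the exec'd code would not
    # be visible out here afterwards, so we share a mutable dictionary.
    _port_state = {"udp_port": 20000}    # made-up starting port

    class CallTest:
        """Simplified sketch of our ExecTest-style test class."""

        def __init__(self, source):
            self.source = source   # the test's Python code, as in python.ExecTest

        def Run(self, context, result):
            test_globals = {"port_state": _port_state}   # hand in the reference
            exec(self.source, test_globals)

    # And inside an individual test's source:
    #
    #     my_port = port_state["udp_port"]
    #     ... place the simulated call using my_port ...
    #     port_state["udp_port"] = my_port + 2    # next test uses port + 2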

Are these approaches flawed?  

-- 
Scott Lowrey
NexTone Communications
Germantown, Maryland

(240) 912-1369
NexTone.com <http://nextone.com>



