Regression testing is easy to understand, but difficult to implement if you do not have an automated tool that takes care of most of the details. TestComplete is one such automated test management tool.
This topic provides general information about regression testing and describes how to perform it with TestComplete.
Regression testing means “repeating a test that has already run successfully, and comparing the new results with the earlier, validated results”. This process is useful when you run a test on your project and then correct the project code. The process gives you two things: a test, and a standard for acceptance. Regression testing is based on the idea of reusing a test and its acceptance standard, rather than discarding them once the test succeeds.
In true regression testing, all tests of all sizes and their results accumulate, and nothing is thrown away. On each iteration, all existing, validated tests are run, and the new results are compared to the already-achieved standards. Normally, one or more additional tests are also run, debugged and rerun until the project passes them. Obviously, by this point some degree of automation is essential: it is humanly impossible to reliably check hundreds of test results to see whether they match the old results.
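The automated comparison described above can be sketched in a few lines of plain Python (this is an illustration of the concept, not TestComplete's own API; the file names and result format are hypothetical):

```python
import json
from pathlib import Path

def check_against_baselines(results: dict, baseline_file: str) -> list:
    """Compare the current run's results to the stored, validated baselines.

    Returns the names of tests whose output no longer matches the
    previously accepted result.
    """
    baselines = json.loads(Path(baseline_file).read_text())
    return [name for name, output in results.items()
            if baselines.get(name) != output]

# Hypothetical baseline captured from an earlier, validated run.
Path("baselines.json").write_text(
    json.dumps({"login": "ok", "search": "3 rows", "export": "csv"}))

# Results from the current build: one test's output has drifted.
current = {"login": "ok", "search": "2 rows", "export": "csv"}
print(check_against_baselines(current, "baselines.json"))  # → ['search']
```

A test management tool does the same thing at scale: every stored result is a baseline, and only mismatches need human attention.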
Regression tests begin as soon as there is anything to test at all. The regression test suite grows as the project moves ahead and acquires new or rewritten code. Soon it may contain thousands of small tests, which can only be run in sequence with the help of automated software.
This form of test development inspired the design of TestComplete's central feature, the Test Log.
We can run full regression tests several times a day with the help of powerful machines and sophisticated test management tools. Any time a functional element is added to the application code, the test for it is written before the code itself, as a way of stating the specification. As soon as the code is completed, it is added to a test build, and the whole regression suite is run on the build. Results for the suite are checked automatically, results for the new test are checked by hand, and the code is corrected and re-tested in full until it passes its test. The test and its validated results are then added to the suite, just as the code is added to the main build.
The advantage to this procedure is that if there is a malfunction in one of the regression tests, you know it resulted from a code edit made since the last run.
This use of regression testing is at the center of the Extreme Programming approach. (See Extreme Programming, Kent Beck, Addison-Wesley, or http://www.extremeprogramming.org/). SmartBear has no relationship with Extreme Programming (XP) or its promoters, but XP depends on the availability of test automation and management software similar to TestComplete. TestComplete is also remarkably well adapted to other XP practices.
Regression Testing With TestComplete
The general procedure of performing regression testing in TestComplete includes the following steps:
- Test and debug the application.
- Add something to the tested application.
- Design a test for features added to the new build.
- Run both the old and the new tests over the new build.
- Fix and rerun until everything is clean.
- And so on.
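The steps above can be sketched as a small Python loop (a minimal illustration of the process, not TestComplete's API; the suite structure and build representation are hypothetical):

```python
# The suite grows with the project; nothing is thrown away.
suite = {}

def run_suite(tests: dict, build: dict) -> list:
    """Run every accumulated test against a build; return failing test names."""
    return [name for name, test in tests.items() if not test(build)]

def accept_build(build: dict, new_tests: dict) -> list:
    suite.update(new_tests)             # add tests for the new features first
    failures = run_suite(suite, build)  # run both old and new tests on the build
    return failures                     # fix and rerun until this list is empty

# Build 1 introduced a login feature and its test.
suite["login"] = lambda b: b.get("login") == "ok"

# Build 2 adds a report feature; its test joins the suite, and all
# earlier tests still run against the new build.
build2 = {"login": "ok", "report": "done"}
print(accept_build(build2, {"report": lambda b: b.get("report") == "done"}))
# → []  (both the old and the new test pass)
```

The key property is that `suite` only ever grows, so every build is checked against the project's entire testing history.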
This means that a new test or tests (for example, one or several script routines) are added to your test project or project suite for each new build, or for each new feature in a build. These tests are then added to the test sequence of the project.
You can use whatever numbering system you want for the builds. You should edit the builds until both the old and the new tests run without errors.
You can use the compare methods of TestComplete objects to verify that a script ran successfully. It is also possible to compare object properties, images and files. For instance, if your application outputs a list of customers, you can obtain the list in one of the first builds and compare this file with the lists generated by subsequent builds. For more information on results comparison, see the Checkpoints section of the TestComplete help file.
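The customer-list example can be illustrated with plain Python (this sketch uses the standard `filecmp` module rather than TestComplete's checkpoint API; the file names are hypothetical):

```python
import filecmp
from pathlib import Path

# Baseline captured from an early, validated build.
Path("customers_baseline.txt").write_text("Alice\nBob\nCarol\n")

# Output produced by the current build.
Path("customers_build7.txt").write_text("Alice\nBob\nCarol\n")

# Byte-for-byte comparison (shallow=False forces a content check,
# not just a comparison of file metadata).
same = filecmp.cmp("customers_baseline.txt", "customers_build7.txt",
                   shallow=False)
print("match" if same else "regression: output changed")  # → match
```

In TestComplete itself, file checkpoints play the role of the baseline file here: the validated output is stored once and every later build's output is compared against it automatically.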