Regression Testing

Author: SmartBear Software
Applies to: TestComplete 4 - 10

Steve McConnell says it in a nutshell (Code Complete, p. 618, italics added) -

Suppose that you've tested a product thoroughly and found no errors. Suppose that the product is then changed in one area and you want to be sure that it still passes all the tests it did before the change - that the change didn't introduce any new defects. Testing to make sure the software hasn't taken a step backwards, or "regressed," is called "regression testing".

[...] If you run different tests after each change, you have no way of knowing for sure that no new defects were introduced. Consequently, regression testing must run the same tests each time. Sometimes new tests are added as the product matures, but the old tests are kept too.

The only practical way to manage regression testing is to automate it. People become numb from running the same manual tests many times and seeing the same test results all the time. It becomes too easy to overlook errors, which defeats the purpose of regression testing.

The main job of the software tools that support automated testing is to generate input, capture output, and compare actual output with expected output.

Now, suppose we have automated testing software that takes care of these tasks, so that we can define the concept rigorously, without worrying about non-automated approximations.
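To make these three tasks concrete, here is a minimal, generic sketch in Python. It is not TestComplete's API; the program name myapp and its command-line behavior are invented for illustration.

```python
# A minimal, generic sketch of the three core tasks: generate input, capture
# output, and compare actual output with expected output.
# NOTE: "myapp" and its command-line behavior are hypothetical.
import subprocess

def run_case(case_name: str, input_text: str, expected_output: str) -> bool:
    """Feed input to the program, capture its output, and compare it with the standard."""
    result = subprocess.run(
        ["myapp", "--batch"],   # hypothetical program under test
        input=input_text,
        capture_output=True,
        text=True,
    )
    actual_output = result.stdout
    ok = (actual_output == expected_output)
    print(f"{case_name}: {'ok' if ok else 'NOT ok'}")
    return ok

if __name__ == "__main__":
    run_case("adds-two-numbers", "2 + 2\n", "4\n")
```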

From the start of the software project, every new capability is accompanied by a short test battery. This battery tests the new capability as thoroughly as the designers want. It is easy to create because the only concern is the new capability, and that capability is freshly coded or, better yet, not yet coded.

As a test battery is applied, needless tests are weeded out and new ones added for forgotten corners.

Once the battery looks good (normally, this takes less than an hour), and once the software meets all of its requirements (this may take longer, but fixing things is easiest when the code is freshest), correct results are gathered for all tests and stored as files (text, data, screen images, etc.).

Anytime a new capability is added, with its new test battery, all previous, validated tests are run, and the results are compared with the standard results already stored on file. This is what is called a regression test. Computer time is the cheapest resource around. Anything that goes wrong with the old tests can be traced to something done between the last time the regression test was run and the time the latest one was run. Normally, that would be twenty-four hours.

This truly narrows down the time spent searching for a bug.
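As a hedged sketch of the "standard results on file" idea, the snippet below records each test's validated output as a baseline file on its first run and compares against that file on every later run. The baselines/ folder and the capture_output() helper are assumptions made for illustration.

```python
# A sketch of the "standard results on file" idea: record a baseline on the
# first run, compare against it on every later run.
# NOTE: the baselines/ folder and capture_output() are illustrative assumptions.
from pathlib import Path

BASELINE_DIR = Path("baselines")

def capture_output(test_name: str) -> str:
    # Placeholder for whatever really produces the test's output
    # (driving the UI, calling a library, reading a result file, ...).
    return f"output of {test_name}"

def check_against_baseline(test_name: str) -> bool:
    baseline = BASELINE_DIR / f"{test_name}.txt"
    actual = capture_output(test_name)
    if not baseline.exists():
        # First run: store the validated result as the standard.
        BASELINE_DIR.mkdir(exist_ok=True)
        baseline.write_text(actual)
        print(f"{test_name}: baseline recorded")
        return True
    ok = (actual == baseline.read_text())
    print(f"{test_name}: {'ok' if ok else 'NOT ok - differs from stored standard'}")
    return ok
```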

The same full regression test is run whenever the implementation is changed, even if no new capability is introduced.

You can quickly write small applications simply to test a portion of your project, for example, to test one specific dialog (perhaps using internal variables as "output") or to run through a specific sequence of operations.

If this is done every day (perhaps in the evening), then any "unintended results" can be tracked down quickly (say, at the start of the next day), fixed and re-tested (with a full regression test, as always). At that point, you know that your application, in its current state, passes every single test you ever thought up for it and found to be useful. All of these little tests have been written quickly, each to try one aspect or feature of your software's capability. It is the sum of them that makes up the overall regression test. By the time the project is into its third month, tens of thousands of boring, time-consuming verifications will have been run by the automated testing software, with complete reliability. You can probably already see why regression testing is so important in software development.

There is a programming method that takes this one step further - the complete regression test runs several times a day. The method is called Extreme Programming. See Extreme Programming Explained, Kent Beck, Addison-Wesley, 2000 - short, well thought-out and well-written.

Automated Testing Software and Regression Testing

So, what should automated testing software do to support regression testing? It's a duh-point that it should record macros, both for mouse and for keyboard. More importantly, it should record them by default as Windows input commands (toggling a check box, modifying an edit box, etc.), not as absolute, blind, screen-relative actions. An automated test should not break because the user interface is tweaked! In fact, not only should the default recording be relative to controls, it should also locate them by the window they belong to, identifying that window by its window class, instance number and, optionally, its caption - all of this automatically, of course. Then, there should be the option to record blind, with absolute screen positions, precisely to test whether the UI has changed accidentally.
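The contrast between the two recording styles can be sketched like this. The classes below are invented for illustration; real recorders, TestComplete included, use richer object models.

```python
# Two ways the same user gesture can be recorded. The classes are invented for
# this sketch; real recorders (TestComplete included) use richer object models.
from dataclasses import dataclass

@dataclass
class ControlAction:
    """Recorded relative to a control: survives layout tweaks."""
    window_class: str      # e.g. "#32770", the standard Windows dialog class
    window_instance: int   # distinguishes several windows of the same class
    window_caption: str    # optional, e.g. "Options"
    control_name: str      # e.g. "chkAutoSave"
    action: str            # e.g. "check", "set_text"
    value: object = None

@dataclass
class ScreenAction:
    """Recorded blind: breaks as soon as the layout changes, which is exactly
    what makes it useful for detecting accidental UI changes."""
    x: int
    y: int
    action: str            # e.g. "click"

# The same check-box toggle, recorded two ways:
robust = ControlAction("#32770", 0, "Options", "chkAutoSave", "check", True)
brittle = ScreenAction(412, 287, "click")
```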

This automatic recording should output a human-friendly automated test that testers can modify later. It can be an automated test script in a standard scripting language, or a sequence of test commands that can be edited visually, as in keyword-driven testing. The second approach may be preferable because it does not require any scripting skills. This is essential to the three requirements that follow.

Most automated tests are not functional (user-level) tests; they are unit tests. They test the interfaces of libraries before those libraries are integrated into the code base. Unit tests use small applets that make the required library calls, plus test-data files. Most often, their one human input is "go". Within a framework of automated support for many small tests, the automated testing software will not be doing half its job for input if the best it can do for a unit test is click "go". Therefore, its test harnesses should support an interfacing library that allows the tests to "look into" library interfaces and call them directly. The automated testing software should at least allow its tests to do most of the "harness" work. This, of course, isn't done by recording, but by creating tests manually.
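A minimal sketch of such a unit-test "applet" might look like the following. The pricing library and its calculate_total() function are hypothetical; the point is that the only human input is "go".

```python
# A unit-test "applet" whose only human input is "go": it calls a library
# interface directly on test-data files and reports ok / not ok.
# NOTE: the pricing library and calculate_total() are hypothetical.
import json
from pathlib import Path

import pricing  # hypothetical library under test

def run_unit_tests(data_dir: str = "unit_cases") -> bool:
    all_ok = True
    for case_file in sorted(Path(data_dir).glob("*.json")):
        case = json.loads(case_file.read_text())
        actual = pricing.calculate_total(case["items"], case["tax_rate"])
        ok = abs(actual - case["expected_total"]) < 1e-9
        print(f"{case_file.stem}: {'ok' if ok else 'NOT ok'}")
        all_ok = all_ok and ok
    return all_ok

if __name__ == "__main__":
    raise SystemExit(0 if run_unit_tests() else 1)
```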

What we've said up to now for test input goes double for test output. The automated test should be able to read output off the screen in Windows terms (as well as in pixel terms for special cases). It should also be able to get and read output files. And, if it has support for "internal" access, then automated tests will of course be able to deal with the output of units as well as they deal with their input.

Once all of this is done, we still have not dealt with regression testing itself: the endless comparison of test output against stored standard output. What our automated test harnesses must also support, then, is automated output analysis. The automated test isn't done until it says "ok" or "not ok". The automated test harnesses must support comparisons, make decisions and signal their results to the automated testing software.
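One way to picture "the test isn't done until it says ok or not ok" is a harness that always returns a structured verdict the runner can act on, rather than leaving a human to eyeball the output. The Verdict shape below is an assumption made for this sketch.

```python
# "The test isn't done until it says ok or not ok": each harness returns a
# structured verdict that the runner can act on, with no human comparison.
# NOTE: the Verdict shape is an assumption made for this sketch.
from dataclasses import dataclass

@dataclass
class Verdict:
    test_name: str
    passed: bool
    details: str = ""

def compare_text(test_name: str, actual: str, expected: str) -> Verdict:
    if actual == expected:
        return Verdict(test_name, True)
    return Verdict(test_name, False, f"expected {expected!r}, got {actual!r}")

print(compare_text("greeting", "Hello", "Hello"))
print(compare_text("farewell", "Bye", "Goodbye"))
```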

Regression Testing: Managing Structure

Now we get to the automated testing software itself. Another duh-point: it must manage the automated test structure - know what tests to run and how to report the results. Practically, it must at least have a good set of optional filters, since after many months of testing, one regression test involves hundreds of small automated tests, and the "human overload" problem will occur if the automated testing software forces the user to look over all of the results to find the one "not ok" among thousands of "ok"s.
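A sketch of that filtering requirement, assuming a simple list of result records standing in for whatever log the tool produces:

```python
# The "filter" requirement: show only the failures instead of asking a human to
# scan thousands of "ok" lines. The results list stands in for the tool's log.
results = [
    {"test": "login-dialog", "status": "ok", "details": ""},
    {"test": "save-as-unicode", "status": "not ok", "details": "output differs"},
    {"test": "print-preview", "status": "ok", "details": ""},
]

failures = [r for r in results if r["status"] != "ok"]
print(f"{len(results)} tests run, {len(failures)} failed")
for failure in failures:
    print(f"  {failure['test']}: {failure['details']}")
```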

Another necessary part of automated test management is file management. Just as it keeps a record of all the automated tests needed, the automated testing software must also record all of the files needed, where they are, and which automated test needs them. It must also keep a record of all files, in whatever format, that are kept as standards, and compare against them in the second phase of each regression test.
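A toy version of this bookkeeping might look as follows; the layout and file names are invented for illustration, not a TestComplete format.

```python
# Toy bookkeeping of which files each test needs and which stored standards it
# is compared against. The layout and names are invented, not a TestComplete format.
from pathlib import Path

MANIFEST = {
    "export-to-csv": {
        "inputs": ["data/customers.db"],
        "baselines": ["baselines/export-to-csv.csv"],
    },
    "print-preview": {
        "inputs": ["data/report-template.xml"],
        "baselines": ["baselines/print-preview.png"],
    },
}

def missing_files(test_name: str) -> list:
    """Return the input files and standards that are absent for a test."""
    entry = MANIFEST[test_name]
    return [p for p in entry["inputs"] + entry["baselines"] if not Path(p).exists()]
```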

Finally, the automated testing software must be a good failure manager during regression testing. Whenever an executable or a library is not found, an input file appears to have changed without warning, or a comparison standard has gone missing, the tool must report this concisely, skip what needs to be skipped, and go on with the regression testing that can still be done. The last thing we need is automated testing software that forces us to spend time figuring out why the regression tests failed to run, rather than analyzing their results.
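A sketch of that behavior, with run_test() and the exception handling standing in for whatever the real tool does:

```python
# Graceful failure management: if a test's prerequisites are missing, report it
# concisely, skip that test, and keep running the rest of the regression suite.
# NOTE: run_test() is a placeholder for the real test execution.
def run_test(name: str) -> bool:
    ...  # drive the application or library and compare against the stored standard
    return True

def run_suite(test_names):
    skipped, failed = [], []
    for name in test_names:
        try:
            if not run_test(name):
                failed.append(name)
        except FileNotFoundError as error:
            # Missing executable, input file, or comparison standard:
            # note it once and move on instead of aborting the whole run.
            skipped.append((name, str(error)))
    print(f"failed: {failed or 'none'}")
    print(f"skipped (missing prerequisites): {skipped or 'none'}")

run_suite(["login-dialog", "export-to-csv", "print-preview"])
```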

Regression Testing with TestComplete

With TestComplete, you can create and run regression tests for any supported application. You just need to have the appropriate module installed: Desktop, Web or Mobile.

Regression testing typically includes the following steps:

  • First, test and debug your software project.
  • Next, add new features to the application.
  • Then, create tests for the added features.
  • Run both the old and the new automated tests against the new build.
  • Fix and rerun until all automated tests run without errors.
  • Continue to run all new and existing tests throughout the development of your software.

Creating new automated tests means adding new test items to the project’s test sequence. You can view and visually change this sequence using TestComplete’s Test Items edit page, which displays a tree-like structure of the automated tests to be executed during the project run.

Figure 1 – The Test Items page of the Project Editor.

Using this page you can do the following:

  • Add a new top-level test item at the end of the test items hierarchy or add a new item as a child of the selected item.
  • Modify the following test item properties:
    • A test item’s source element to be executed (a keyword test, a script project item, a script routine, a low-level procedure, and so on).
    • The number of times the test item should be executed. This property is useful if you want a test item to be executed in a loop.
    • The maximum execution time (in minutes) for the test item. This property can be used to prevent a test from hanging.
    • TestComplete’s behavior in case an error or unhandled exception occurs during a test run.
    • Description text that accompanies the test item.
  • Include or exclude test items from regression testing by selecting or clearing the checkbox next to the desired test item’s icon. Note that TestComplete executes only those test items that are selected on the page. If a parent test item is unselected, all of its child items will be skipped.
  • Change the execution order of the test items using the context menu or by dragging the desired item to the desired location within the tree. The test items are executed in the order they are located on the Test Items page.
  • Copy an existing test item.
  • Delete a test item.

To perform regression testing, you have to run both the old and the new automated tests against the new build. To do this with TestComplete, simply select all of the needed automated tests on the Test Items page and run the project. TestComplete will then execute the regression tests specified on the Test Items edit page.

Figure 2 – Testing both old and new builds.

For example, in Figure 2 you can see that the Build 1.0.0.2 and Build 1.0.0.3 test items and all of their child test items are included in automated testing (the checkboxes next to these test items’ icons are selected).

Results of all regression tests executed during the project run are included in the test log. Using the log, you can view the hierarchy of the executed test items, quickly find the regression tests that failed to execute successfully, and fix the errors in the application. You can then rerun your project and correct the application’s code until all regression tests run without errors.

Figure 3 – Regression test log.

To compare results generated by the latest build with results of earlier builds, you can use TestComplete’s features for comparing object properties, files, images, data stored in a database and much more.
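As a generic illustration of this kind of build-to-build comparison (not the TestComplete checkpoint API), file and property comparisons might look like this; image and database comparisons follow the same pattern.

```python
# A generic illustration of build-to-build comparison (not the TestComplete
# checkpoint API): byte-compare files and compare a saved property snapshot.
import filecmp
import json

def compare_files(baseline_path: str, actual_path: str) -> bool:
    """True if the new build's output file is byte-identical to the baseline."""
    return filecmp.cmp(baseline_path, actual_path, shallow=False)

def compare_properties(baseline_json_path: str, actual_props: dict) -> bool:
    """True if the captured object properties match the stored snapshot."""
    with open(baseline_json_path) as f:
        return json.load(f) == actual_props
```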

Conclusion

Regression testing is a kind of testing that helps developers make sure that no new defects appear after the application has been changed. This overview is designed to provide information about regression testing as a whole and about the automated testing software that helps make regression testing of desktop, web and mobile applications easier and more manageable. Try TestComplete today, and see for yourself how it can save time on your regression testing.