Data-Driven Test Design
Data-driven testing (DDT) is a test-design approach that keeps your test data (both input and expected output values) separate from the test cases themselves. The data lives in one or more centralized sources, such as local storage, Excel spreadsheets, XML files, or SQL databases. Because the same test can run against a different data set each time, you avoid designing and executing near-identical test cases over and over, and at the end of a test cycle you have a clear audit trail of what was and was not tested.
Test cases are typically designed with variables in the test steps that retrieve values from the data source rather than using fixed values. When a test case runs, each placeholder variable is replaced with the relevant test data. Adding a new test case is then as simple as adding a new row of data to the source. Because DDT keeps the test logic and the test data clearly separated, maintenance effort stays low: changes to the test steps do not disturb the test data, and vice versa.
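For readers who automate DDT in code rather than in Zephyr, the same principle looks roughly like the following Python/pytest sketch. The file name login_data.csv, its columns (username, password, expected), and the login() helper are hypothetical; this only illustrates keeping the test logic in code and the test data in an external source.

```python
import csv

import pytest


def load_rows(path):
    """Retrieve the test data rows from an external CSV source."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def login(username, password):
    """Stand-in for the system under test; a real test would drive the application."""
    return username == "alice" and password == "secret"


# The same test logic runs once per data row; adding a test case
# only requires adding a row to login_data.csv.
@pytest.mark.parametrize("row", load_rows("login_data.csv"))
def test_login(row):
    succeeded = login(row["username"], row["password"])
    assert succeeded == (row["expected"] == "success")
```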
In a nutshell, DDT can be visualized as follows:

When designing test cases with the DDT approach, focus on mapping the test data, including its positive and negative combinations and variations, to the business requirements so that test coverage is adequate. When the test cases are then executed in Zephyr, the variable placeholders in the test steps are replaced with the values available in the test-data table, so the test steps remain fully reusable.
DDT also works with manual test scenarios, allowing manual test scripts to run alongside their respective data sets. For each row of test data in the data source, DDT performs the following operations in a loop (see the sketch after this list):
Test data is retrieved.
Test data is entered into the system under test (SUT), and other actions are simulated.
Expected values are verified.
Test execution continues with the next set of test data.
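As an illustration only (this is not Zephyr's execution engine), the four operations above can be sketched in Python as a simple loop. The CSV path, the column names, and the exercise_sut() stub are hypothetical.

```python
import csv


def fetch_rows(path):
    """1. Retrieve test data from the data source (a CSV file in this sketch)."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)


def exercise_sut(row):
    """2. Input the test data into the SUT and simulate the other actions.
    This stub only echoes an outcome; a real test would drive the application."""
    return "success" if row.get("password") == "secret" else "failure"


def run_data_driven_test(data_path):
    """Run the same test logic once per row of test data."""
    outcomes = []
    for row in fetch_rows(data_path):
        actual = exercise_sut(row)
        outcomes.append(actual == row["expected"])  # 3. Verify the expected values.
        # 4. Execution continues with the next set of test data.
    return outcomes
```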
Working with test cases
Create a test case and enable the test data feature in the Test Script tab.
Add columns (variables) to the test-data table. In the steps, reference a column (variable) by typing an opening curly brace ({), which triggers a drop-down list of the available columns. When the test case is executed in the Test Player, the test steps are repeated for each row of data to ensure complete coverage.
In the Test Player screen, all steps appear unfolded in a flat step list. During test execution, each step is unfolded with its parameters replaced by the values passed from the main test case. Keep in mind that each row in the test-data table creates a new group of steps during the unfolding procedure, as shown in the image below.
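The unfolding behavior can also be pictured with a short Python sketch. The step texts, column names, and values below are hypothetical, and the code only imitates the idea of the flat step list; it is not Zephyr code.

```python
# Hypothetical steps that reference test-data columns via {placeholders}.
steps_template = [
    "Open the login page",
    "Enter {username} and {password}",
    "Verify that the message '{expected_message}' is shown",
]

# Hypothetical test-data table: one dict per row, keys match the column names.
test_data = [
    {"username": "alice", "password": "secret", "expected_message": "Welcome"},
    {"username": "alice", "password": "wrong", "expected_message": "Invalid credentials"},
]

# Each row produces its own group of concrete steps, mirroring the flat
# step list shown in the Test Player.
for iteration, row in enumerate(test_data, start=1):
    print(f"--- Iteration {iteration} ---")
    for step in steps_template:
        print(step.format(**row))
```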
Working with data sets
Data sets are reusable sets of test data. If you plan to use the same test data across multiple test cases, this is the right option.
To use data sets:
Create a data set in the configurations section of Zephyr and add some options.
Select the Test Data option for your test case.
Select the data set you want to use in your test case.
Add the data set as a column.
Set the options that you want to use in your test case.
Reference the test data column in the test case steps.
Save the test case.
When you view the test case in the Test Player, you will see a new group of steps for each variation (row) in the table.
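To see why a shared data set keeps maintenance low, here is a small Python sketch in the same spirit; the data-set values and step texts are hypothetical, not taken from Zephyr.

```python
# A reusable "data set" referenced by more than one test case (hypothetical values).
browsers = ["Chrome", "Firefox", "Edge"]

login_steps = ["Launch {browser}", "Open the login page", "Sign in as a standard user"]
search_steps = ["Launch {browser}", "Search for 'laptop'", "Verify that results are listed"]

# Each option in the data set produces a new group of steps for every test
# case that references it; changing the data set updates all of them at once.
for test_name, template in [("Login", login_steps), ("Search", search_steps)]:
    for browser in browsers:
        print(f"[{test_name} / {browser}]")
        for step in template:
            print("  " + step.format(browser=browser))
```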