By default, LoadNinja simulates script events by using hard-coded parameter values, the ones you specified during script recording or editing.
To get broader coverage and check how your web application handles different input values, associate a databank (a file containing the data you want to input) with a script. LoadNinja will read it row by row and run the test commands for each row in that databank. Such tests are called data-driven load tests.
The databank must be a CSV or TXT file, with either commas or tabs as a value delimiter. The first line may contain column headers.
Create a .csv or .txt file with the desired data, using either commas or tabs as value delimiters. Use any text editor to create the file. Many spreadsheet applications such as Microsoft Excel, Numbers, and LibreOffice Calc also export data to CSV.
Tip: We recommend that the first line contains descriptive column headers. In this case, the test will display the column names rather than indexes (1, 2, …).
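For example, a minimal comma-separated databank for a login form might look like this (the column names and values are purely illustrative):

```csv
username,password
alice,secret1
bob,secret2
```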
Open an existing script from your project on the Projects > [Project] > Web Tests tab.
If you do not have a script yet, record it. Replay the recorded test once to make sure it works correctly.
In the script recorder, select Add > Databank on the toolbar:
Click Browse and select one or multiple .csv or .txt files from your device you want to use as databanks.
To remove a file you added accidentally, click the remove icon next to it in the list.
Click Next: Define File Format to proceed.
In First row contains, specify whether the first row of each file you have attached should be treated as a header or a data row. In the Column delimiter column, select the symbol that separates columns within your databank.
Click Next: Review Import. LoadNinja will try reading data from your files.
LoadNinja will display a preview of the data. If the data does not look right, click Back and change the parsing settings.
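To see why the header and delimiter settings matter, here is a rough sketch of how such a file could be parsed. This is an illustrative Python model using the standard `csv` module, not LoadNinja code; the helper name and its parameters are made up:

```python
# Illustrative sketch (not LoadNinja code) of how the "First row contains"
# and "Column delimiter" settings affect databank parsing.
import csv
import io

def parse_databank(text, delimiter=",", first_row_is_header=True):
    """Parse databank text into (headers, rows)."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    rows = list(reader)
    if not rows:
        return [], []
    if first_row_is_header:
        return rows[0], rows[1:]
    # Without a header row, columns are referenced by index (1, 2, ...).
    headers = [str(i + 1) for i in range(len(rows[0]))]
    return headers, rows

headers, rows = parse_databank("username,password\nalice,secret1\nbob,secret2\n")
print(headers)  # ['username', 'password']
print(rows)     # [['alice', 'secret1'], ['bob', 'secret2']]
```

If you pick the wrong delimiter (for example, a comma for a tab-separated file), each row comes back as a single column, which is exactly the kind of problem the preview page lets you catch.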
If LoadNinja loaded data correctly, go to the next page by clicking Next: Map Data to Script.
On the next page, map column values to test commands in your script. LoadNinja will locate parameters within the test and list them.
To make LoadNinja replace parameter values with databank values, do the following for each input event:
In the Map file column, select the file to use as the databank for the parameter.
In the Map column column, select the databank column to take values from during the playback.
In either column, select No Mapping to skip the parameter and use the hard-coded recorded value during the playback.
Tip: To change these mappings later, click # databank(s) in the script recorder.
In the rightmost column, LoadNinja will show a preview of the value taken from the first row of the selected databank:
To learn how to map values in the script recorder, see below.
Click Next: Save Mapping.
LoadNinja will upload your data file to the cloud and update the test. Events that use values from a databank will have the corresponding label:
Note: Databanks are not shared among scripts. To use the same databank in another script, associate it with that script separately.
To create a data-driven load test in LoadNinja, attach databanks to individual scripts that your load scenario will run. Then, map databank columns to event parameters as explained above.
LoadNinja supports this type of parameterization for the Keyboard Input events, Select Input events, and web page URLs.
During the load test execution, each script will typically run several times. For each iteration, LoadNinja will pick a databank row and insert column values from this row into event parameters:
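The substitution step can be sketched as follows. This is an illustrative Python model, not LoadNinja's actual implementation; the event structure and field names are hypothetical:

```python
# Illustrative sketch (not LoadNinja code): substituting one databank row's
# values into mapped event parameters for a single iteration.

# One databank row, keyed by column header.
row = {"username": "alice", "password": "secret1"}

# Hypothetical recorded events whose values are mapped to databank columns.
events = [
    {"type": "Keyboard Input", "target": "#user", "value": "{username}"},
    {"type": "Keyboard Input", "target": "#pass", "value": "{password}"},
]

def apply_row(events, row):
    """Return a copy of the events with databank values filled in."""
    return [{**e, "value": e["value"].format(**row)} for e in events]

print(apply_row(events, row))
```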
By default, LoadNinja picks databank rows randomly. To change this behavior, go to the script recorder Settings and change the value of Run Databank Rows:
Possible options are:
Random. Default value. LoadNinja will pick a random databank row for each iteration of each virtual user. During the test run, multiple virtual users may use the same data at the same time. Depending on the number of iterations and the number of rows, some rows will be used more or less frequently than others; if the number of iterations is significantly larger than the number of rows, each row will be used approximately the same number of times.
Sequential. For each virtual user, LoadNinja will pick rows sequentially across iterations: a virtual user's first iteration will use the first databank row, the second iteration will use the second row, and so on. Each virtual user processes the databank independently, so multiple virtual users may use the same data at the same time.
Unique. At any given time, LoadNinja will pick unique data rows for virtual users, so they will never use the same data at the same time. The number of databank rows must be equal to or greater than the number of virtual users that will run this script.
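The three modes can be sketched as follows. This is an illustrative Python model, not LoadNinja's actual implementation; the function names and signatures are made up:

```python
# Illustrative sketch (not LoadNinja code) of the three Run Databank Rows
# modes: Random, Sequential, and Unique.
import random

rows = [["alice"], ["bob"], ["carol"], ["dave"]]

def pick_random(rows, iteration, vu_index):
    # Any virtual user may get any row; collisions between users are possible.
    return random.choice(rows)

def pick_sequential(rows, iteration, vu_index):
    # Each virtual user walks the databank from the top, wrapping around,
    # independently of the other virtual users.
    return rows[iteration % len(rows)]

def pick_unique(rows, iteration, vu_index):
    # Concurrent virtual users get distinct rows, so the databank must have
    # at least as many rows as there are virtual users.
    return rows[vu_index % len(rows)]

# Two virtual users running three iterations each in Sequential mode:
for vu in range(2):
    picks = [pick_sequential(rows, i, vu)[0] for i in range(3)]
    print(f"VU {vu}: {picks}")
```

Note how in Sequential mode both virtual users start from the same first row, which is why that mode does not guarantee unique data across users.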
To change the databank columns used for events, use the # databank(s) dialog or edit the event parameters directly.
Click # databank(s) on the toolbar.
Click Edit data mapping.
Configure the mapping settings as you do when adding a new databank.
Click Save mapping.
Currently, LoadNinja supports this feature only for Keyboard Input events. To parameterize URLs and Select Input events, create the mapping when you upload a databank.
In the list of test steps, locate the Keyboard Input event you want to parameterize.
Expand the event parameters and click the edit icon.
Select or clear the Use Databank check box as needed.
If Use Databank is selected, select the databank column to take the parameter value from.
LoadNinja stores databanks along with the scripts that use them. After you associate a databank with a script, the # databank(s) button appears on the toolbar.
To associate an extra databank with the script, do the following:
Click # databank(s) on the toolbar.
(Optional) Remove obsolete databanks by clicking the remove icon next to them.
Click Import another databank.
Click Browse..., locate the new databank, and click Next: Define File Format.
Configure the import settings as you did for the first databank. Click Next: Review Import.
Check the preview. Click Next: Map Data to Script.
If the new databank has a different set of columns, update the event-to-column mappings as needed. Click Next: Save Mapping.