Compare the results of multiple test runs to determine the common errors that occur when working with the tested website, to find out how the website behaves under various load conditions, and so on.
Click your project name in Projects.
Switch to the Test Runs tab.
On this tab, in the left column, select the check boxes of up to 4 reports you want to compare:
|Tip:||We recommend that you select the runs that are associated with the same web tests. It is possible to select runs for different web tests, but in this case, the comparison results will be of no use.|
Once ready, click Compare. The comparison report screen will appear:
Switch between the tabs on the comparison report screen to retrieve information on different comparison aspects:
Summary – Shows basic information about each test run included in the comparison, including data on separate web tests.
Charts – Shows customizable comparison charts.
Response times – Shows the response time metrics – average, minimum, maximum, standard deviation, and think times. It also includes navigation timings.
Errors – Shows the overall number of errors for each test and metric. The errors are grouped by their type.
To learn more about each tab, see below.
Click Download Report in the top right corner of the screen:
In the subsequent dialog, specify a name for the report file and select the preferred file format in Save Type:
|Tip:||LoadNinja supports saving reports in the PDF, PNG, and JPG formats.|
Once ready, click Download to generate a report and save it on your device.
In this section, learn about the metrics and fields LoadNinja shows in comparison reports.
This panel shows the following information on each test run:
|Test run name||The name of the scenario used for the test run.|
|Started test||The date and time the test run started.|
|Completed test||The date and time the test run ended.|
|Duration configured||The preconfigured duration of the test, in minutes. Used for duration-based tests.|
|Total run time||The total run time of the test, in minutes.|
|Ramp-up time||The “warm-up” period of the load test, in minutes.|
|Delay between iterations||The preconfigured pause between web test runs performed by a virtual user, in seconds.|
|Average response time||The amount of time it takes the website to return the results of a request to a virtual user.|
|VUs configured||The number of virtual users requested for the test run.|
The actual number of virtual users LoadNinja provided.
|Tip:||Usually, this number differs from the VUs configured value when a warm-up time is required (that is, the test starts with 1 virtual user, and the number of users increases over time) and some error causes the test to end prematurely.|
|Iterations configured||The preconfigured number of iterations (for iteration-based tests).|
|Iterations completed||The actual number of iterations completed by virtual users.|
This panel shows the following information on each web test:
|Scripts run||The web tests selected for the test run.|
|Number of scripts||The number of web tests associated with the test run.|
|Number of steps||The overall number of steps in all the completed iterations of the web test.|
This tab shows customizable charts of separate web tests or steps:
Click Add a chart.
Select the needed web tests or steps to include in the comparison:
|Tip:||Use the search box to find the needed item quickly.|
Optionally, change the chart’s name in Chart Title, then click Next: Select Metric.
Select the metrics you want to include in the comparison. Possible options include the number of virtual users, the total response time, think time, error counts, and navigation timings:
Once ready, click Add chart.
Select > Edit Chart next to the needed chart name on the Charts tab.
Use the subsequent dialog box to modify the chart. The available options are the same as when adding the chart.
Once ready, click Add a chart.
Select > Remove Chart next to the needed chart name on the Charts tab.
Confirm this action in the subsequent dialog.
To show only one graph, double-click its name in the chart legend.
To hide a graph, click its name in the chart legend once.
To zoom in on a segment of a chart, select the segment with the mouse.
To restore the default scaling of a chart, double-click its legend.
This tab shows the response time metrics and navigation timings. They are grouped by web tests and steps associated with the compared runs:
|Average Response Time||The average time it took the server that hosts the website to provide the response within the web test or step, in seconds.|
|Minimum Response Time||The minimum time it took the server that hosts the website to provide the response within the web test or step, in seconds.|
|Maximum Response Time||The maximum time it took the server that hosts the website to provide the response within the web test or step, in seconds.|
|Standard Deviation Response Time||The standard deviation of response times.|
|Think Time||The sum of the pauses between actions within the web test or step.|
|Total Time||The total time it took to run the web test or step.|
|Redirect Time||If the tested website initially responded with an HTTP 301 or 302 redirect, this value indicates the time the virtual users spent following the redirects until getting to the final URL.|
|DNS Time||The time spent on performing a DNS lookup, that is, on obtaining the IP address of the website from a DNS server.|
|Connect Time||The time spent on performing a TCP handshake, that is, on establishing a connection to the web server after the DNS lookup.|
|DOM Load Time||The total time it took to load and construct the DOM. The DOM is considered complete when the document has been fully parsed and the browser fires the DOMContentLoaded event.|
|Response Time||The total time passed from the moment the client sent a composed request until the moment the page’s HTML code was downloaded.|
|First Byte Time||The time spent waiting for the first byte of the response from the server. This includes the server processing the request, accessing its databases, and selecting and generating the response.|
|Event Time||The total time it took to fully load the page along with the required resources to the Document Object Model (DOM).|
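The response-time statistics in the table above are standard descriptive statistics. As a rough sketch of how such values could be derived (the sample numbers and variable names below are purely illustrative, not LoadNinja data):

```python
import statistics

# Hypothetical per-iteration response times, in seconds,
# for one web test step across a run.
response_times = [1.2, 0.9, 1.5, 1.1, 1.3]
think_times = [2.0, 2.0, 1.5, 2.5, 2.0]  # pauses between actions

avg_rt = statistics.mean(response_times)    # Average Response Time
min_rt = min(response_times)                # Minimum Response Time
max_rt = max(response_times)                # Maximum Response Time
std_rt = statistics.stdev(response_times)   # Standard Deviation
think_time = sum(think_times)               # Think Time (sum of pauses)

print(f"avg={avg_rt:.2f}s min={min_rt}s max={max_rt}s "
      f"stdev={std_rt:.2f}s think={think_time}s")
```

The navigation timings in the lower half of the table (Redirect, DNS, Connect, First Byte, DOM Load, and so on) correspond to the standard page-load phases that browsers report, as described by the W3C Navigation Timing specification.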
This tab shows the following error metrics grouped by web tests and steps associated with the compared runs:
|Total errors||The number of errors (of all types) that occurred during the run of this script or step.|
|Assertion errors||The number of errors triggered by validations.|
|Timeouts||The number of timeout errors that occurred during the run of this script or step.|
|Page errors||The number of page errors (for example, objects not found) that occurred during the run of this script or step.|
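To show how these totals relate to each other, here is a minimal sketch that tallies hypothetical error records by type (the record format and type names are assumptions for illustration, not LoadNinja's actual data model):

```python
from collections import Counter

# Hypothetical raw error records for one script or step.
errors = [
    {"type": "assertion", "step": "Login"},
    {"type": "timeout", "step": "Checkout"},
    {"type": "page", "step": "Login"},
    {"type": "assertion", "step": "Search"},
]

# Group the errors by type, as the Errors tab does.
by_type = Counter(e["type"] for e in errors)
total_errors = sum(by_type.values())  # Total errors (all types)

print(dict(by_type), total_errors)
```

Here, the Total errors value is simply the sum of the per-type counts.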