Use the results screen to analyze the results of your test runs, check virtual users during active test runs, debug errors, and gather statistics.
The toolbar contains:
In addition, check the test’s key metrics in a separate row:
—The time elapsed to run this test.
—The maximum number of virtual users that ran concurrently at any point during the test.
—The number of errors that occurred during the run.
—The average time it took the tested website to provide a response.
—The number of iterations completed during the test run.
The Summary tab shows a short overview of your test run:
In Goals, learn about the specific parameters that were tested during the test run.
In Key Findings, learn about the issues encountered during the test run and suggested improvements.
Click Edit in the top right corner of the screen.
In the dialog that opens, edit the fields you need.
This panel contains statistics on how successfully virtual users performed web tests.
| Column | Description |
| --- | --- |
| Web test | The name of the web test the virtual users performed. |
| Load Distribution | The share of the load you set for this web test in the scenario settings. |
| Expected Iterations | For iteration-based tests. The number of iterations each virtual user was supposed to run. |
| Total Successful Steps | The number of test steps that the virtual users completed successfully. |
| Failed | The number of test steps that the virtual users failed. |
This panel shows the scenario settings you specified for this load test run.
On the Charts tab, view charts that depict statistics on various aspects of the tested website.
LoadNinja offers the following charts:
Shows the average response time for each web test and step.
Shows the average time it took to move between the website’s pages and to locate and identify objects on them.
Shows the number of errors for each web test and step.
Shows the number of successful iterations.
On the Statistics tab, view metrics on timings and failure types. To get information on separate steps (URLs or synthetic steps) of a web test, click the test in the list.
The Response Times and Errors table shows metrics on how long the tested service took to process requests, and the number of failures of various types that occurred during the test run.
Low response timing values are preferable, since the faster your service responds, the more requests per minute it can process.
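The connection between response time and throughput can be illustrated with Little's Law, a standard queueing result (this is a general rule of thumb, not a value LoadNinja reports; the function and numbers below are illustrative):

```python
# Little's Law sketch: sustained throughput is roughly
# concurrency / average response time (ignoring think time and queuing).
def requests_per_minute(concurrent_users: int, avg_response_ms: float) -> float:
    """Approximate requests per minute the service can sustain."""
    return concurrent_users * 60_000 / avg_response_ms

# Hypothetical run: 50 virtual users, 250 ms average response time.
rpm = requests_per_minute(50, 250)  # 12000 requests per minute
```

Halving the average response time in this model doubles the achievable throughput at the same concurrency, which is why low response timings matter.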
The table includes the following columns:

| Column | Description |
| --- | --- |
| Web test / step | The name of the web test or step. |
| Last response time | The time it took the server that hosts the tested service to provide the last response in the load test or step, in milliseconds. |
| 90th percentile response time | The response time that 90% of responses did not exceed. |
| 95th percentile response time | The response time that 95% of responses did not exceed. |
| Average response time | The average time it took the server that hosts the tested service to provide a response in the load test or step, in milliseconds. |
| Minimum response time | The minimum time it took the server that hosts the tested service to provide a response in the load test or step, in milliseconds. |
| Maximum response time | The maximum time it took the server that hosts the tested service to provide a response in the load test or step, in milliseconds. |
| Standard deviation | The standard deviation of response times. |
| Total Iterations | The total number of iterations performed by all virtual users. |
| Total steps | The total number of steps performed in the web test by all virtual users. |
| Total failures | The total number of errors all virtual users encountered during the test run. |
| Total assertion failures | The number of validation failures that occurred during the run. |
| Total timeouts | The number of timeout errors that occurred during the run. |
| Total page errors | The number of page errors (for example, missing objects) that occurred during the test run. |
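The timing metrics above can be reproduced from a list of raw response times. The sketch below uses the nearest-rank percentile definition, which is one common convention; LoadNinja's exact method is not documented here, and the sample data is hypothetical:

```python
import math
import statistics

def percentile(values, pct):
    """Nearest-rank percentile: the smallest sample that at least
    pct% of all samples do not exceed."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct * len(ordered) / 100))
    return ordered[rank - 1]

# Hypothetical response times, in milliseconds, for one web test step.
response_times = [120, 135, 150, 180, 210, 250, 300, 420, 650, 900]

metrics = {
    "average": statistics.mean(response_times),  # 331.5
    "minimum": min(response_times),              # 120
    "maximum": max(response_times),              # 900
    "p90": percentile(response_times, 90),       # 650: 90% of responses were at or below this
    "p95": percentile(response_times, 95),       # 900
    "stdev": statistics.stdev(response_times),   # sample standard deviation
}
```

A large gap between the average and the 95th percentile, as in this sample, usually signals a long tail of slow responses that the average alone would hide.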
The Debugger tab shows the error log and information on errors. Use this tab to identify the source of an issue, view network traces, perform debugging, and so on. For more information, see Debugging.
The VU Inspector tab shows the desktops of the remote cloud machines where virtual users are working.
Each frame contains the virtual user ID, the name of the web test it runs, and the URL the user is on.
The Loadgen Health tab displays the performance metrics received from LoadNinja servers. It helps you identify cases where the test infrastructure is the cause of errors that appear in test results.
To learn more about the tab, see Loadgen Health Tab.