Load Page

During a test run, ReadyAPI collects data and displays it on the Load page.

Important

This data is gathered in real time, so it may be slightly inaccurate due to the time needed to transfer messages. For accurate information, use the Statistics page.

You can also view recent results of load test runs on the Load Test Results tile on the Dashboard. If you do not have this tile on the Dashboard, you can add it at any time.

Global Metrics for Test Case

Global metrics are collected both for the entire test case and for each scenario. These metrics and the relevant test assertions are displayed on a graph.

By default, the metrics are displayed for the entire test case. To display the metrics for each individual scenario, select the one you need from the drop-down list.

[Image: API load testing with ReadyAPI: The Load Page]

Metrics are displayed as solid lines of different colors. The horizontal scale measures the time elapsed since the start of the test, in seconds. The vertical scales differ for each metric. Hover over a data point to see its value, the time since the start of the test, and the minimum and maximum values of the corresponding statistic.

[Image: API load testing with ReadyAPI: Global Metrics Details]

Assertions are displayed as dashed lines. These lines are horizontal and show the maximum acceptable values for the metrics of the same color. If a metric rises above its line, the assertion logs an error.

Important

Assertions are only displayed if the relevant metric is on the graph.
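
To make the threshold behavior concrete, here is a minimal sketch of a check of this kind: each per-second metric sample is compared against a maximum acceptable value, and an error is logged whenever the sample crosses the line. All class, method, and variable names below are illustrative assumptions, not part of the ReadyAPI API.

```java
// A minimal sketch of a threshold assertion of this kind.
// None of these names come from ReadyAPI; they are illustrative only.
import java.util.List;

public class ThresholdAssertionSketch {
    /** Prints an error for each second at which the metric exceeded the limit. */
    static void checkSamples(String metric, List<Double> perSecondValues, double maxAcceptable) {
        for (int second = 0; second < perSecondValues.size(); second++) {
            double value = perSecondValues.get(second);
            if (value > maxAcceptable) {
                // In ReadyAPI, crossing the assertion line logs an error;
                // here we just print one.
                System.out.printf("ASSERTION FAILED: %s = %.1f > %.1f at t=%ds%n",
                        metric, value, maxAcceptable, second);
            }
        }
    }

    public static void main(String[] args) {
        // Hypothetical "Time taken" samples in milliseconds, one per second of the run.
        List<Double> timeTakenMs = List.of(120.0, 180.0, 260.0, 140.0);
        checkSamples("Time taken", timeTakenMs, 250.0); // fails at t=2s
    }
}
```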

ReadyAPI displays the following metrics and assertions on the graph:

VUs/s: The number of virtual users added to the test each second. This metric is available only for the Rate load type.

VUs: The number of virtual users simulated each second. This metric is available only for the VUs load type.

Time taken: The time it takes to complete a test case. This metric collects test case completion times for one second and calculates a value for that second only.

Avg: The average time it takes to complete a test case. This metric collects test case completion times during the entire scenario execution and updates its value every second.

Min: The minimum time it takes to complete a test case. This metric collects test case completion times during the entire scenario execution and updates its value every second.

Max: The maximum time it takes to complete a test case. This metric collects test case completion times during the entire scenario execution and updates its value every second.

Failures: The total number of failed test cases.

TPS: The number of transactions per second.

BPS: The number of bits transmitted per second.

Queued: The number of queued requests.

Failures/s: The number of failed test cases per second.
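
The distinction between the per-second Time taken value and the cumulative Avg, Min, and Max values can be illustrated with a short sketch. The data and names below are hypothetical; this is not ReadyAPI code, only a model of how the descriptions above fit together (assuming Time taken averages the completion times within each second):

```java
// Illustrates per-second "Time taken" vs. cumulative Avg/Min/Max.
// Hypothetical data; not ReadyAPI code.
import java.util.ArrayList;
import java.util.List;

public class GlobalMetricsSketch {
    public static void main(String[] args) {
        // Completion times (ms) grouped by the second in which they finished.
        double[][] perSecond = { {100, 140}, {300}, {90, 110, 130} };

        List<Double> all = new ArrayList<>();
        for (int s = 0; s < perSecond.length; s++) {
            double sum = 0;
            for (double t : perSecond[s]) { all.add(t); sum += t; }

            // "Time taken": computed from this second's samples only.
            double timeTaken = sum / perSecond[s].length;

            // Avg/Min/Max: computed over every sample seen so far, refreshed each second.
            double avg = all.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double min = all.stream().mapToDouble(Double::doubleValue).min().orElse(0);
            double max = all.stream().mapToDouble(Double::doubleValue).max().orElse(0);

            System.out.printf("t=%ds  Time taken=%.0f  Avg=%.0f  Min=%.0f  Max=%.0f%n",
                    s, timeTaken, avg, min, max);
        }
    }
}
```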

Examine Test Failures

If an error occurs during the test run, a notification appears below the Global Metrics graph. Click the notification to open the Performance Log tab with detailed information on the error.

[Image: Load testing: view test errors]

Individual Test Step Metrics

Individual metrics are collected for each scenario. These results are shown in the Test Step Metrics table:

[Image: Test Step Metrics]

The table displays metrics for one test scenario at a time. If you run multiple scenarios, select the scenario whose statistics you want to see from the drop-down list.

Note

The chart and table are intended for real-time monitoring rather than detailed analysis. To examine the results, use a printable report or the Statistics page.

Important

The test engine updates the chart and the table approximately once a second, so the metrics reflect the values collected for scenarios and requests during that second. These values may differ from the values on the Statistics page and in printable reports. In addition, the chart and the table are updated at slightly different moments, so their values can also differ slightly from each other.
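
As a toy illustration of why the chart and the table can disagree slightly, the sketch below snapshots the same stream of transaction completions at two nearby instants; each snapshot counts a different number of transactions. The timestamps and names are invented for the example and say nothing about how ReadyAPI is actually implemented:

```java
// A toy illustration of snapshot skew: two readers sampling the same
// live data at slightly different instants see slightly different values.
public class SnapshotSkewSketch {
    public static void main(String[] args) {
        // Transaction completion timestamps, in milliseconds from test start.
        long[] completions = {980, 990, 1005, 1020, 1040};

        long chartSnapshotAt = 1000;  // chart refreshes at t = 1.000 s
        long tableSnapshotAt = 1030;  // table refreshes ~30 ms later

        System.out.println("Chart sees " + countUpTo(completions, chartSnapshotAt) + " transactions");
        System.out.println("Table sees " + countUpTo(completions, tableSnapshotAt) + " transactions");
    }

    static int countUpTo(long[] timestamps, long instant) {
        int n = 0;
        for (long t : timestamps) if (t <= instant) n++;
        return n;
    }
}
```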

ReadyAPI displays the following metrics in the table:

Min: The shortest time it took to execute the scenario.

Max: The longest time it took to execute the scenario.

Median: The median time it took to execute the scenario.

Last: The time the most recent scenario run took.

Count: The number of times the scenario was simulated.

TPS: The number of requests sent each second.

Err: The number of failed scenario runs.

Err %: The percentage of failed scenario runs.
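
For reference, here is a minimal sketch of how values such as Median, Last, and Err % can be derived from raw scenario runs. The data and names are hypothetical and not part of ReadyAPI:

```java
// Derives the table's values from a list of raw scenario runs.
// Illustrative only; not ReadyAPI code.
import java.util.Arrays;

public class TestStepMetricsSketch {
    public static void main(String[] args) {
        double[] durationsMs = {110, 95, 400, 120, 105};   // one entry per scenario run
        boolean[] failed     = {false, false, true, false, false};

        double[] sorted = durationsMs.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        // Median: middle value of the sorted durations (average of the two
        // middle values when the count is even).
        double median = (n % 2 == 1) ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;

        long errors = 0;
        for (boolean f : failed) if (f) errors++;

        System.out.printf("Min=%.0f Max=%.0f Median=%.0f Last=%.0f Count=%d Err=%d Err%%=%.0f%%%n",
                sorted[0], sorted[n - 1], median, durationsMs[n - 1],
                n, errors, 100.0 * errors / n);
    }
}
```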
