Command-Line Arguments

Applies to ReadyAPI 3.51, last modified on March 04, 2024

You use PerformanceTestRunner to execute ReadyAPI load tests from the command line.

The runner is located in the <ReadyAPI>/bin directory. The file name is loadtestrunner.bat (Windows) or loadtestrunner.sh (Linux and macOS).

You can configure the command line visually by running the utility from the ReadyAPI user interface.

General Syntax

The runner command line has the following format:
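
loadtestrunner.bat <optional arguments> <test-project> -n<load-test-name>

(On Linux and macOS, use loadtestrunner.sh. The pattern above follows the examples at the end of this topic.)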

Required arguments

test-project

The fully qualified path to the project that contains the load tests to be run. If the file name or path includes spaces, enclose the entire argument in quotes.

Examples:

C:\Work\readyapi-project.xml
C:\Work\composite-project

-n<load-test-name>

Specifies the load test to run. Usage: -n<test-name>. If the test name includes spaces, enclose the entire argument in quotes.

Example:

-nLoadTest1

Optional arguments

-a “<args>, or --agents <args>

Specifies the remote agents to be used for the test run. To specify an agent, use the following syntax:

-a "<ip>:<port>[=<scenario1>[,<scenario2>...]]"

  • ip – The IP address or computer name of the agent.

  • port – The port number to be used.

  • scenario – A comma-separated list of scenarios to be simulated on the agent. If you omit it, ReadyAPI runs all the scenarios on the specified agent.

You can find the scenario names in the load test editor. If the scenario name includes spaces, enclose the entire argument in quotes.

To specify multiple agents, use the -a argument multiple times.

If you use this command-line argument, the runner will ignore the agents specified in your test (you can see them on the Distribution page of the load test editor).

Note that to use distributed testing, you need a ReadyAPI Performance license. If you do not have it, sign up for a free trial to see how it works for you.

Example:

-a "127.46.44.12:80=Scenario1"
Tip: To learn if the agents are available, use the agentavailability command-line tool that comes with ReadyAPI.

-A<args>, or --abort <args>

Specifies whether the runner terminates requests that are still running when the test stops. The argument can be t or f:

  • If the argument is t, the ongoing requests are canceled, and their results are not included in the overall test results.

  • If the argument is f or is not specified, the test finishes only after all the ongoing requests are completed; the request results are included in the test results.
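
Example (this value, built on the -A<args> pattern above, cancels ongoing requests):

-At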

-D<args>

Specifies the value of a system property for the test run. The specified value overrides the property value during the run.

Usage: -D<property>=<value>. If the value includes spaces, enclose the entire argument in quotes. To override several property values, specify the -D argument several times.

Example:

-Dtest.history.disabled=true

-e<args>, or --export <args>

Commands the runner to export the data of statistics groups to .csv files.

Usage: -e<FileName>=<StatGroupName>. FileName is the fully qualified name of the target file (if you specify an existing file, it will be overwritten). StatGroupName is the name of the statistics group to be exported. You can find the group names on the Statistics page of the load test editor.

If either the file name or the group name includes spaces, enclose the entire argument in quotes.

To export several statistics groups, use the -e argument several times.

Example:

"-eC:\Work\statistics.csv=New Statistics Group"

-F<args>, or --format <args>

Specifies the format of the exported reports. Usage: -F<FormatName>. Supported formats: PDF, XLS, HTML, RTF, CSV, TXT, and XML.

Specify only one format for this argument.

Example:

-FXML

-G<args>

Specifies the value of a global property for the test run. The specified value overrides the property value during the run.

Usage: -G<property>=<value>. If the value includes spaces, enclose the entire argument in quotes. To override several property values, specify the -G argument several times.

Example:

-Gglobal.property=true

-h, or --help

Outputs the command description.

-j

Commands the runner to generate a JUnit-style report.

-J

Commands the runner to group JUnit-style results by assertion type. If this argument is not specified, the results are grouped at the test, scenario, or target level.
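
For example, combined with the -r argument described below, a run could generate JUnit-style results grouped by assertion types like this (C:\Work\Reports is a placeholder directory):

-rC:\Work\Reports -j -J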

-l, or --local

If this argument is specified, the runner simulates the load from your local computer. Otherwise, it distributes the load simulation among several agents. See Distributed Testing for complete information.

This argument overrides the "Run Scenarios..." setting specified in the load test editor.

Note that to use distributed testing, you need a ReadyAPI Performance Pro license. If you do not have it, sign up for a free trial to see how it works for you.

-L<args>, or --limits <args>

Specifies limits for the test run.

Usage: -L<SECONDS>:<TARGETS>:<FAILURES>

  • <SECONDS> – The maximum allowed execution time in seconds.

  • <TARGETS> – The maximum allowed number of runs for the test cases (targets) used in your load test. Each test case execution increases the target run counter. If a test case runs in a loop, each iteration increases the counter.

  • <FAILURES> – The maximum allowed number of errors.

When any of these limits is reached, the runner stops the test execution. A value of 0 means that the limit is not set.

Example:

-L60:100:20

This argument overrides the corresponding limits specified for the test run in the load test editor.

See also -t.

-P<args>

Specifies the value of a project property for the test run. The specified value overrides the property value during the run.

Usage: -P<property>=<value>. If the value includes spaces, enclose the entire argument in quotes. To override several property values, specify the -P argument several times.

Example:

-Pproject.property=true

-r<args>, or --reports <args>

Commands the runner to generate reports and to save them to the specified directory. Usage: -r<directoryName>. To specify the report format, use the -F command-line argument. To include specific statistics data in the report, use the -S argument.

Example:

-rC:\Work\Reports

-S<args>, or --statistics <args>

Specifies the statistics groups to be included in the report.

Usage: -S<statistic group>. You can find the group names on the Statistics page of the load test editor (see above).

If a group name includes spaces, enclose the entire argument in quotes. To specify multiple groups, use the -S argument multiple times. If you skip this argument, the report will include all the statistics groups available on the Statistics page.

Example:

"-SNew Statistics Group"

-t<args>, or --timeout <args>

Specifies the time period (in seconds) during which an agent executing a test tries to reconnect to the controller if the connection is lost.

Usage: -t<timeout>.

If the connection cannot be established during this time, the agent stops executing the test. The argument has no effect if no agents are used.

If the argument is not specified, the timeout is 10 minutes.

Example:

"-t30"
Note: If you need to specify time limits for a test run, use the -L argument.

-x<password>

Specifies the project password, if you have encrypted the entire project or some of its custom properties. See Protecting Sensitive Data.
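
Example (MyPassword is a placeholder):

-xMyPassword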

Examples

  • The following command runs the MassLoad1 test from the specified test project for 10 minutes and saves the test report in the PDF format to the c:\test reports directory:

    loadtestrunner.bat -L600:0:0 "-rc:\test reports" -FPDF "c:\my projects\my-project.xml" -nMassLoad1
  • The following command sets a value for the file.separator system property for the test run, runs MyLoadTest from the specified project on the specified agents, and exports the accumulated statistics of two groups to .csv files:

    loadtestrunner.bat -Dfile.separator=; -a "192.168.0.10:8080=Test Scenario 1" -a "192.168.0.20:8800=Test Scenario 2" "-eC:\Work\stats1.csv=My Stat Group 1" "-eC:\Work\stats2.csv=My Stat Group 2" "c:\my projects\my-project.xml" -nMyLoadTest

See Also

About PerformanceTestRunner
PerformanceTestRunner Exit Codes
