GET /hosts/{HostId}/currentruns

Applies to QAComplete 14.3, last modified on February 19, 2024

Returns pending tests for a test host.

An agent running on a test host must poll the API periodically (for example, once a minute) to check whether any tests are awaiting run on that host. If there are, the agent downloads the automation scripts for those tests, runs them on the host, and (optionally) uploads the test logs back to QAComplete.
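The polling workflow can be sketched in Python with the standard library. This is a minimal sketch, not the QAComplete agent's actual implementation; the server name, host ID, and security token below are hypothetical placeholders.

```python
import base64
import json
import time
import urllib.request

SERVER = "yourserver.com"      # hypothetical server name
HOST_ID = 143                  # hypothetical host ID
SECURITY_TOKEN = "p@ssword"    # hypothetical security token

def currentruns_url(server, host_id, offset=0, limit=25):
    """Build the URL of the pending-runs endpoint for a host."""
    return (f"http://{server}/rest-api/service/automation/v2"
            f"/hosts/{host_id}/currentruns?offset={offset}&limit={limit}")

def fetch_pending_runs(offset=0, limit=25):
    """Poll the endpoint once and return the parsed JSON response."""
    credentials = f"{HOST_ID}:{SECURITY_TOKEN}".encode()
    request = urllib.request.Request(
        currentruns_url(SERVER, HOST_ID, offset, limit),
        headers={
            "Accept": "application/json",
            "Authorization": "Basic " + base64.b64encode(credentials).decode(),
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# A typical agent polls once a minute, for example:
# while True:
#     data = fetch_pending_runs()
#     for run in data["results"]:
#         ...  # download automation scripts, run them, upload logs
#     time.sleep(60)
```

The polling loop itself is left as a comment so the sketch stays side-effect free; an agent would wrap `fetch_pending_runs` in such a loop.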

In QAComplete, tests that are awaiting run appear in the following places:

  • On the Run History tab of a host in the Test Hosts listing screen.

  • In the Run History listing screen, as tests that have the Awaiting Run status and the host name specified in the Run By Host field.

Authentication

Basic authentication using the host ID and security token. See Authentication for details.

Request Format

To get tests that are awaiting run on a host, send an empty GET request to the following URL:

http://{server}/rest-api/service/automation/v2/hosts/{HostId}/currentruns?offset={offset}&limit={limit}
Request parameters

HostId  :  integer, required

The host ID.

offset  :  integer, default: 0

The number of test runs to skip before the first returned item. The default value, 0, is the offset of the first item. For details, see Paging Through Results Using Offset and Limit below.

limit  :  integer, default: 25

The maximum number of test runs to return in the response.

A sample request:

GET http://yourserver.com/rest-api/service/automation/v2/hosts/143/currentruns HTTP/1.1
Host: yourserver.com
Connection: keep-alive
Accept: application/json
Authorization: Basic am9obkBleGFtcGxlLmNvbTpwQHNzd29yZA==

A sample request made using cURL:

curl -u {HostId}:{SecurityToken} http://yourserver.com/rest-api/service/automation/v2/hosts/143/currentruns

Response Format

On success, the operation responds with HTTP status code 200 and returns a JSON object with a list of pending test runs for the specified host, sorted by the test run ID.

If the operation fails, it returns the appropriate status code and (optionally) the error description in the response body.

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 2138


{
    "metadata": {
       "result_set": {
          "count": 2,
          "offset": 0,
          "limit": 25,
          "total": 2
       },
        "__permissions": {
            "acl": 7
       }
    },
    "results": [
       {
          "id": 532,
          "is_sequential": true,
          "status": "Awaiting Run",
          "agents": "",
          "test_run_items": [
             {
                "sequence_number": 1,
                "stop_on_fail": true,
                "status": "Awaiting Run",
                "date_started": "0001-01-01T07:00:00.0000000Z",
                "date_finished": "0001-01-01T07:00:00.0000000Z",
                "run_time": 0,
                "automations": [
                   {
                      "agent": "TestComplete/TestExecute",
                      "timeout": 600,
                      "run_mode": 0,
                      "params": 
                        {  
                          "entry_point": "OrdersTest\\Script\\Unit1\\Login"
                        }
                   }
                ],
                "test_run_results": [
                   {
                      "sequence_number": 1,
                      "stop_on_fail": false,
                      "status": "Awaiting Run",
                      "step": ""
                   }
                ]
             },
             {
                "sequence_number": 2,
                "stop_on_fail": true,
                "status": "Awaiting Run",
                "date_started": "0001-01-01T07:00:00.0000000Z",
                "date_finished": "0001-01-01T07:00:00.0000000Z",
                "run_time": 0,
                "automations": [
                   {
                      "agent": "JUnit (Selenium)",
                      "timeout": 600,
                      "run_mode": 1,
                      "params": 
                        {
                           "start_class": "com.smartbear.selenium.SeleniumTest.TestCaseClass1",
                           "use_maven": false
                        }
                   }
                ],
                "test_run_results": []
             },
             {
                "sequence_number": 3,
                "stop_on_fail": true,
                "status": "Awaiting Run",
                "date_started": "0001-01-01T07:00:00.0000000Z",
                "date_finished": "0001-01-01T07:00:00.0000000Z",
                "run_time": 0,
                "automations": [
                   {
                      "agent": "NUnit (Selenium)",
                      "timeout": 600,
                      "run_mode": 1,
                      "params": 
                        {
                           "test_fixture": "Selenium.Test"
                        }
                   }
                ],
                "test_run_results": []
             },
              {
                "sequence_number": 4,
                "stop_on_fail": true,
                "status": "Awaiting Run",
                "date_started": "0001-01-01T07:00:00.0000000Z",
                "date_finished": "0001-01-01T07:00:00.0000000Z",
                "run_time": 0,
                "automations": [
                   {
                      "agent": "TestNG (Selenium)",
                      "timeout": 600,
                      "run_mode": 1,
                      "params": 
                        {
                           "start_class": "com.smartbear.testng.SeleniumTest.TestCaseClass1",
                           "use_maven": false
                        }
                   }
                ],
                "test_run_results": []
             }
          ]
       },
       {
          "id": 533,
          "is_sequential": false,
          "status": "Awaiting Run",
          "test_run_items": [
             {
                "sequence_number": 1,
                "stop_on_fail": false,
                "status": "Awaiting Run",
                "date_started": "0001-01-01T07:00:00.0000000Z",
                "date_finished": "0001-01-01T07:00:00.0000000Z",
                "run_time": 0,
                "automations": [
                   {
                      "agent": "ReadyAPI / SoapUI OS",
                      "timeout": 30,
                      "run_mode": 0,
                      "params": 
                        {
                          "report_type": "PDF"
                        }
                   }
                ],
                "test_run_results": []
             }
          ]
       }
    ]
}
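An agent can walk the results array of a response like the one above to find the automations it has to execute for each pending run. A minimal sketch using only fields that appear in the sample response; the helper name agents_for_run is illustrative, not part of the API:

```python
import json

def agents_for_run(run):
    """Return the automation agent names for one pending run, in sequence order."""
    items = sorted(run["test_run_items"], key=lambda i: i["sequence_number"])
    return [a["agent"] for item in items for a in item["automations"]]

# A trimmed-down version of the first run in the sample response,
# with the items deliberately out of order to show the sorting:
run = json.loads("""
{
  "id": 532,
  "is_sequential": true,
  "status": "Awaiting Run",
  "test_run_items": [
    {"sequence_number": 2, "status": "Awaiting Run",
     "automations": [{"agent": "JUnit (Selenium)", "timeout": 600, "run_mode": 1,
                      "params": {"start_class": "com.smartbear.selenium.SeleniumTest.TestCaseClass1",
                                 "use_maven": false}}]},
    {"sequence_number": 1, "status": "Awaiting Run",
     "automations": [{"agent": "TestComplete/TestExecute", "timeout": 600, "run_mode": 0,
                      "params": {"entry_point": "OrdersTest\\\\Script\\\\Unit1\\\\Login"}}]}
  ]
}
""")

print(agents_for_run(run))  # ['TestComplete/TestExecute', 'JUnit (Selenium)']
```

Because `is_sequential` is true for this run, the agent would execute the items one at a time in sequence order, honoring `stop_on_fail` where it is set.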

Paging Through Results Using Offset and Limit

By default, the API returns the first 25 pending tests for the host. To get a different set of tests, use the offset and limit parameters in the GET request’s query string. For example:

URL                                  Description
…/currentruns                        Returns the first 25 test runs (the default limit is 25).
…/currentruns?limit=10               Returns the first 10 test runs.
…/currentruns?offset=5&limit=5       Returns test runs 6..10.
…/currentruns?offset=10              Returns test runs 11..35 (the default limit of 25 applies).

To page through all the available items, first use the metadata section of the JSON response to get the total number of items.

{
   "metadata": {
      "result_set": {
         "count": 25,
         "offset": 0,
         "limit": 25,
         "total": 77
      },
      ...

Tip: You can request …/currentruns?limit=0 to get just the metadata without the test run results.

Then send subsequent requests with increasing offsets and a fixed limit until you get all the data.

…/currentruns?offset=25&limit=25
…/currentruns?offset=50&limit=25
…/currentruns?offset=75&limit=25
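The offset arithmetic behind those requests can be sketched as a small helper; fetching each page would use a request like the earlier cURL sample with the computed offset appended.

```python
def page_offsets(total, limit=25):
    """Offsets of every page needed to retrieve `total` items, `limit` at a time."""
    return list(range(0, total, limit))

# With the metadata above (total 77, limit 25), the agent requests
# offsets 0, 25, 50 and 75; the last page returns only 2 items.
print(page_offsets(77))  # [0, 25, 50, 75]
```

Using `range` keeps the loop robust at the edges: when `total` is an exact multiple of `limit` no extra empty page is requested, and a total of 0 yields no requests at all.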

See Also

QAComplete Test Automation REST API Reference
