Creating a Test Run

The instructions below describe how to create a test run with minimal requirements. Optionally, you can also configure advanced settings.

  1. Click Projects in the left navigation menu, then expand your project in the Projects section.

    • If the project has no test runs, click Create your first test run.

      Adding a test run

      Alternatively, you can navigate to the Dashboard in the left navigation menu, and in the Last Test Runs section, click Create your first test run.

      Creating your first test run from the Dashboard
    • If the project has test runs, click Add new test run.

    OR

    On the Dashboard, in the Automation Testing section, click Create Automated Test.

    Creating an automated test

    OR

    In the left navigation menu, select Automation > Create Automated Test.

    Creating an automated test from Automation
  2. Select a target operating system in the Select a target OS type step:

    • iOS – Select for testing iOS applications.

    • Android – Select for testing Android applications.

    • Desktop – Select for browser testing.

  3. Depending on the previously selected operating system, select one of the available frameworks in the Select a framework step. The currently supported frameworks are listed below (for the Client Side frameworks, a minimal connection sketch follows this procedure):

    Note

    Depending on your cloud setup, the names and available frameworks may differ from the ones listed below.

    Table 5. Supported Testing Frameworks across iOS, Android, and Desktop

    iOS

      • Appium iOS Client Side

      • Server Side (Appium and other frameworks)

      • XCTest

      • XCUITest

      • Flutter iOS

    Android

      • Android Instrumentation and Espresso

      • Appium Android Client Side

      • Server Side (Appium and other frameworks)

      • Flutter Android

    Desktop

      • Selenium Client Side

      • Desktop Cypress Server Side



  4. Upload an application file in the Choose files step. You can upload up to three files.

    Tip

    If you do not have a test app file yet, click Use our samples to upload sample apps.

    Choose an action to perform on the uploaded files (BitBar selects a default action based on the file type; you can change the selection if needed):

    • Install on the device – Apply this action to application package files (.apk on Android, .ipa on iOS). BitBar will upload the selected package to the device and install it there.

    • Copy to the device storage – Works for Android devices and .zip files only (iOS is not supported). BitBar will copy the specified .zip archive to the device and unpack it there. You can then use the unpacked files in your tests.

    • Use to run the test – BitBar will upload the file to the device. If the file is a .zip archive, BitBar will unpack it on the device. If you use the Android Instrumentation framework for test automation and upload an .apk file, BitBar will use that file to run the test.

  5. Select a device or a device group in the Choose devices step:

    • Use existing device group – Select this option if you want to use trial devices or have an existing device group.

    • Use chosen devices – Select this option if you want to choose devices from the list of available devices. You can run your tests in parallel on multiple devices. To do that, click Click to choose devices and select the devices you want.

      Choosing devices
    • Run on currently idle devices – Select this option if you want to run your test on random devices that are not currently in use.

  6. Click Create and run automated test.
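
For the Client Side frameworks listed in step 3 (for example, Appium iOS/Android Client Side or Selenium Client Side), the test script runs on your own machine and only opens a remote driver session against devices in the cloud; Server Side frameworks instead run the uploaded test package on BitBar's side, which is what the procedure above configures. The sketch below illustrates the client-side idea in Python with the Appium client. The hub URL, the bitbar_* capability names, and the device, project, and test run names are assumptions for illustration and may differ in your cloud setup.

```python
# Minimal client-side Appium sketch. The hub URL and the bitbar_* capability
# names are assumptions; check the values your own cloud expects.
import os

from appium import webdriver
from appium.options.android import UiAutomator2Options

caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    # Hypothetical cloud-side settings (placeholder names, not from this article):
    "bitbar_apiKey": os.environ["BITBAR_API_KEY"],  # keep the API key out of source code
    "bitbar_device": "Google Pixel 7",              # placeholder device name
    "bitbar_project": "My sample project",
    "bitbar_testrun": "Client-side smoke test",
    "bitbar_app": "latest",                         # a previously uploaded app file
}

options = UiAutomator2Options().load_capabilities(caps)

# Assumed hub address; use the Appium endpoint shown in your own cloud.
driver = webdriver.Remote("https://appium.bitbar.com/wd/hub", options=options)
try:
    # The cloud session now appears as a test run in the BitBar UI.
    print("Session started:", driver.session_id)
finally:
    driver.quit()
```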

Advanced settings

In addition to the basic settings described above, a number of additional configuration options are available for the test run.

To open advanced settings, click the area under the Additional settings (optional) step.

Additional settings
Table 6. Automation Testing Settings Overview

Setting

Description

Project name

The name of the project to store the test run. If the specified project name does not exist, a new project will be created. Omitting this option will create a new project with a default name, for example, Project 1.

Test run name

The name of the test run that describes what you are testing with this test run, for example, a build number, fixed bug ID, date and time, and so on.

Language

The language to be set on the selected devices before starting the test run.

Test time-out period

The time-out period for the test run start. The default value is 10 minutes. Possible values are none, 5, 10, 15, 20, 30, and 60 minutes.

Scheduling

Select how and when to run tests on the selected devices: simultaneously, on one device at a time, or on the first available device only.

  • Simultaneously: The test is started on all available devices at the same time. If the devices are not currently available, the test will start when they become available.

  • One device at a time: The test is started sequentially on all of the selected devices.

  • First available device only: The test is run on the first available device of the selected device group.

Use test cases from

Applies to Android Instrumentation test runs. If you do not want to run the whole test suite, select the test class or package to execute.

Test case options

The test case options to be included or excluded.

Test finished hook

When the test run finishes, BitBar can make a POST call to a URL that you specify (a minimal receiver sketch follows this table). Note that in addition to this hook URL, you can also use email or Slack integrations to get notified of finished test runs.

Screenshots configuration

By default, screenshots are stored on the device's SD card at /sdcard/test-screenshots/. Here, you can change the storage location.

Custom test runner

The test runner to use. The default value is android.test.InstrumentationTestRunner.

Test user credentials

The user name and password combination that should be used during the AppCrawler test run.

Custom test run parameters

Public Cloud supports a number of shell environment variables that are made available to each test run. These can be used for test case sharding or for selecting the execution logic of a run (local equivalents of the variables below are sketched after this table).

For Espresso test sharding, variables numShards and shardIndex are supported out of the box. Use these the same way as you would locally.

Xcode-based test suites can be controlled with XCODE_SKIP_TESTING and XCODE_ONLY_TESTING keys.

  • XCODE_SKIP_TESTING should take the value of the -skip-testing command-line flag. This allows you to skip some named test cases or classes.

  • XCODE_ONLY_TESTING takes the value of the command-line flag -only-testing. This flag allows you to define whether to run a single test method or all tests from a test class.

In On-Premise and Private Cloud setups, users can create their own key-value pairs. Customers with advanced plans can also create keys in Public Cloud for specific tasks to be performed before, during, or after a test run.

Disable Applications resigning

Prevents BitBar from re-signing applications during testing, so that they keep their original signature.
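
The Test finished hook described above only needs an HTTP endpoint that accepts a POST request. The sketch below is a minimal, self-hosted receiver, assuming you expose it at a public URL and paste that URL into the field; the payload format is not specified in this article, so the handler simply logs whatever BitBar posts to it.

```python
# Minimal "test finished hook" receiver (sketch). It logs any POST body it receives.
from http.server import BaseHTTPRequestHandler, HTTPServer

class TestFinishedHook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("Test run finished, payload:", body.decode("utf-8", "replace"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Listen on port 8000; expose this endpoint publicly (for example behind a
    # reverse proxy) and enter its URL in the Test finished hook field.
    HTTPServer(("0.0.0.0", 8000), TestFinishedHook).serve_forever()
```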


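For the Custom test run parameters above, the sketch below shows the local command-line equivalents of the Espresso sharding variables and the Xcode keys. The package, runner, project, scheme, and test identifiers are placeholders, not values from this article; in BitBar you would enter the corresponding key-value pairs as custom test run parameters instead of running these commands yourself.

```python
# Local equivalents of the custom test run parameters (sketch only).
# All package, runner, scheme, and test names below are placeholders.
import subprocess

def run_espresso_shard(num_shards: int, shard_index: int) -> None:
    """Espresso sharding: numShards/shardIndex are ordinary instrumentation arguments."""
    subprocess.run(
        [
            "adb", "shell", "am", "instrument", "-w",
            "-e", "numShards", str(num_shards),
            "-e", "shardIndex", str(shard_index),
            "com.example.app.test/androidx.test.runner.AndroidJUnitRunner",
        ],
        check=True,
    )

def run_xcode_subset() -> None:
    """XCODE_ONLY_TESTING / XCODE_SKIP_TESTING take the values of the
    -only-testing / -skip-testing xcodebuild flags, e.g. Target/Class/testMethod."""
    subprocess.run(
        [
            "xcodebuild", "test",
            "-project", "MyApp.xcodeproj",
            "-scheme", "MyAppUITests",
            "-destination", "platform=iOS Simulator,name=iPhone 15",
            "-only-testing:MyAppUITests/LoginTests/testValidLogin",
            "-skip-testing:MyAppUITests/SlowTests",
        ],
        check=True,
    )

if __name__ == "__main__":
    # Run shard 0 of 4 locally; in BitBar, set numShards=4 and shardIndex=0 instead.
    run_espresso_shard(num_shards=4, shard_index=0)
```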
