
Parallel Testing

Applies to CrossBrowserTesting SaaS, last modified on October 22, 2021

What is parallel testing?

Parallel testing is your ticket to faster testing and a quicker turnaround in deployments. When testing websites or applications, it is important to remember that time is a factor: you always have a finite amount of time to test before deployment or to increase your coverage. Testing 100% of an application is a noble goal, but no developer wants to spend more time testing than developing their product. Parallel testing lets you get more testing done in a tighter window.

With Continuous Integration, testers and developers are constantly writing new test scripts for different features and test cases. These scripts take time to run, and a growing number of test cases running against an increasing number of environments can spell doom for a deployment schedule. Think testing is only going to take 2 days? Wrong: your tests have been firing off for a week, and they still have a day left. So how do we speed up testing and get more QA done with less time between deployments? The answer is parallel testing.


Sequential test execution

Parallel test execution

Instead of running tests sequentially, one after the other, parallel testing lets us execute multiple tests at the same time across different environments or parts of the code base. You can do this by setting up multiple VMs and other device infrastructure yourself, or by using a cloud testing service like CrossBrowserTesting.
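The difference is easy to see with a small, self-contained sketch. The environments and the sleep-based "test" below are stand-ins, not real browser sessions: running the same four fake tests in a thread pool finishes in roughly the time of the slowest one, instead of the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real browser test: each "test" just sleeps.
def run_test(environment):
    time.sleep(0.2)
    return "%s: passed" % environment

environments = ["Chrome", "Firefox", "Safari", "Edge"]

# Sequential: total time is roughly the sum of all test durations.
start = time.time()
sequential_results = [run_test(env) for env in environments]
sequential_elapsed = time.time() - start

# Parallel: total time is roughly the duration of the slowest test.
start = time.time()
with ThreadPoolExecutor(max_workers=len(environments)) as pool:
    parallel_results = list(pool.map(run_test, environments))
parallel_elapsed = time.time() - start

print("sequential: %.2fs, parallel: %.2fs" % (sequential_elapsed, parallel_elapsed))
```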

Parallel environments

The growing number of devices and browsers your customers are using can be a challenge when trying to test quickly and efficiently. Let us imagine a real-world example: in release 3 of your product, you have 8 hours of sequential regression testing to perform before the team feels confident to deploy.

By release 5, this may have doubled to 16 hours of tests, and as a bonus, your product is getting popular and is being used by more users on an increasing number of different devices. Before, you were testing only Chrome and Firefox, but now you need Android and iOS devices, Safari, and multiple versions of Internet Explorer. So you have 16 hours of tests and 10 different devices or browsers to cover. Run sequentially, that is 160 hours for complete test coverage before deployment. With parallel testing environments, we can run our 16 hours of tests on all 10 devices at the same time, saving 144 hours of testing time.
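The arithmetic in that example, spelled out:

```python
# Regression suite from the example above.
test_hours = 16     # hours of tests per environment
environments = 10   # browsers and devices to cover

sequential_total = test_hours * environments  # run each environment in turn
parallel_total = test_hours                   # run all environments at once
hours_saved = sequential_total - parallel_total

print(sequential_total, parallel_total, hours_saved)  # 160 16 144
```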

Parallel execution also has the distinct advantage of isolating test cases and runs to one specific OS or browser, allowing testers and developers to dedicate meaningful resources to serious cross-platform compatibility problems.

Parallel testing in CrossBrowserTesting

The maximum number of automated Selenium and JavaScript unit tests (API initiated tests) that can be run concurrently is limited according to your subscription plan.

We understand the importance of running as many tests concurrently as possible to increase the velocity of your continuous integration flow. To support this, we do not place arbitrarily low constraints on the number of parallel tests.

A major advantage of our service is that we provide a combination of real operating systems and an extensive array of physical phones and tablets. While we do maintain a large number of mobile devices, they are not unlimited. Ensuring the load is spread across the configurations is important in providing availability for all our customers.

If you try to run more parallel tests than your billing plan supports, the additional run requests are queued. The maximum queue length equals the maximum number of parallel tests your plan allows. For example, if your plan supports 5 concurrent tests, then 5 more tests can wait in the queue. Any further test requests are denied.
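That rule can be sketched as follows. The function and counters here are purely illustrative, not the service's actual scheduler:

```python
def classify_request(plan_limit, running, queued):
    """Decide the fate of one more test request: up to plan_limit tests
    run at once, up to plan_limit more wait in the queue, and anything
    beyond that is denied."""
    if running < plan_limit:
        return "run"
    if queued < plan_limit:
        return "queue"
    return "deny"

# A plan with 5 parallel tests: requests 1-5 run, 6-10 queue, 11 is denied.
plan_limit = 5
running = queued = 0
states = []
for _ in range(11):
    outcome = classify_request(plan_limit, running, queued)
    states.append(outcome)
    if outcome == "run":
        running += 1
    elif outcome == "queue":
        queued += 1

print(states)
```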






Maximum parallel instances

Sometimes when larger accounts are running multiple tests across different teams, availability can be hard to track.

If you use the API, the response will tell you the number of active automated, manual, and headless tests for both the team as a whole and an individual member:


  "team": {
    "automated": 0,
    "manual": 0,
    "headless": 0
  "member": {
    "automated": 0,
    "manual": 0,
    "headless": 0

You can also request the maximum number of parallel tests your plan allows:


  "automated": 5,
  "manual": 5

The active test counts return the number of tests currently running, while the max limits return your team's upper limits for manual and automated tests.


Here is an example of using it with our pre-existing parallel Python example. Note that the number of threads created equals the number of parallel slots your team currently has available.

from queue import Queue
from threading import Thread
from selenium import webdriver
import time, requests

USERNAME = "YOUR_USERNAME"
API_KEY = "YOUR_API_KEY"

# The endpoint URLs were elided in this example; fill in the active test
# count and maximum test count API endpoints, and the Selenium hub URL
# (which takes your user name and authkey), for your own account.
ACTIVE_TESTS_URL = ""
MAX_TESTS_URL = ""
HUB_URL = ""

q = Queue(maxsize=0)

browsers = [
    {"os_api_name": "Win7x64-C2", "browser_api_name": "IE10", "name": "Python Parallel"},
    {"os_api_name": "Win8.1", "browser_api_name": "Chrome43x64", "name": "Python Parallel"},
    {"os_api_name": "Mac10.14", "browser_api_name": "Chrome73x64", "name": "Python Parallel"}
]

# put all of the browsers into the queue before pooling workers
for browser in browsers:
    q.put(browser)

api_session = requests.Session()
api_session.auth = (USERNAME, API_KEY)
active_tests = api_session.get(ACTIVE_TESTS_URL).json()['team']['automated']
max_tests = api_session.get(MAX_TESTS_URL).json()['automated']
print("Active selenium tests happening on overall account: " + str(active_tests) +
      "\nMaximum selenium tests allowed on account: " + str(max_tests))

num_threads = max_tests - active_tests

def test_runner(q):
    while not q.empty():
        browser = q.get()
        try:
            print("%s: Starting" % browser["browser_api_name"])
            driver = webdriver.Remote(desired_capabilities=browser,
                                      command_executor=HUB_URL % (USERNAME, API_KEY))
            print("%s: Getting page" % browser["browser_api_name"])
            # load and check the page under test here
            print("%s: Quitting browser and ending test" % browser["browser_api_name"])
            driver.quit()
        except Exception:
            print("%s: Error" % browser["browser_api_name"])
        finally:
            q.task_done()

for i in range(num_threads):
    worker = Thread(target=test_runner, args=(q,))
    worker.daemon = True
    worker.start()

q.join()

Another option is to re-send the availability request every 20-30 seconds, at slightly randomized intervals, so that multiple schedulers do not all check for (and claim) the same free slot at the same moment.
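One way to sketch such a polling loop. The `get_counts` callable and all timing parameters here are placeholders for your own API wrapper, not part of the CrossBrowserTesting client:

```python
import random
import time

def wait_for_slot(get_counts, max_tests, timeout=300, base_delay=20, jitter=10):
    """Poll an availability function until a parallel test slot opens up,
    or give up after `timeout` seconds.

    get_counts is any callable returning the current number of active
    automated tests (e.g. a thin wrapper around the account API). Random
    jitter is added to each delay so several schedulers polling at once
    do not all re-check, and claim the same slot, at the same instant.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_counts() < max_tests:
            return True
        time.sleep(base_delay + random.uniform(0, jitter))
    return False

# Illustrative fake API: a slot frees up on the third poll.
calls = {"n": 0}
def fake_counts():
    calls["n"] += 1
    return 5 if calls["n"] < 3 else 4

# Zero delays so the demo finishes immediately.
got_slot = wait_for_slot(fake_counts, max_tests=5, base_delay=0, jitter=0)
print(got_slot)
```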

Remember that we also have automatic queueing, which will hold requests for up to 6 minutes!

See Also

How to run headless tests on CrossBrowserTesting
