Using API Hub for Contract Testing AI
Generating Pact Tests
To generate Pact tests, use the `pactflow-ai generate` command.
Example
You have a simple Product API for which you want to create Pact tests, as described by the following OpenAPI description:
```yaml
openapi: 3.0.1
info:
  title: Product API
  description: Pactflow Product API demo
  version: 1.0.0
paths:
  /product/{id}:
    get:
      summary: Find product by ID
      description: Returns a single product
      operationId: getProductByID
      parameters:
        - name: id
          in: path
          description: Product ID
          schema:
            type: string
          required: true
      responses:
        "200":
          description: successful operation
          content:
            "application/json; charset=utf-8":
              schema:
                $ref: '#/components/schemas/Product'
        "404":
          description: Product not found
          content: {}
components:
  schemas:
    Product:
      type: object
      required:
        - id
        - name
        - price
      properties:
        id:
          type: string
        type:
          type: string
        name:
          type: string
        version:
          type: string
        price:
          type: number
```
Generating Tests from OpenAPI Description
OpenAPI Descriptions (OADs) are machine- and human-readable descriptions of an API. As they include details such as resources, verbs, status codes, and request and response bodies, they are a great source for Pact tests.
For more information about OpenAPI, see OpenAPI Specification and Swagger.
To generate a Pact test for the HTTP 200 use case (the default), specify the `--openapi` and `--endpoint` flags:

```shell
pactflow-ai generate \
  --openapi /tmp/products.yml \
  --endpoint "/product/{id}" \
  --output /tmp/api.pact.spec.ts \
  --language typescript
```
Note
As this generation method does not have access to the client source code, the generated tests use a placeholder API client in place of the real one when calling the mock service in the test.
Be sure to update your tests to use your real API client; otherwise, the tests are not valid.
Generating Tests from Code
If you already have an existing code base, the `--code` flag can help you quickly bootstrap coverage of your contract tests.
Example
If you have already integrated a client with the Product API and have the following (simplified) API client code:
api.js
```javascript
export class API {
  // ...
  async getProduct(id) {
    return axios
      .get(this.withPath("/product/" + id), {
        headers: {
          Authorization: this.generateAuthToken(),
        },
      })
      .then((r) => new Product(r.data));
  }
}
```
If you have the following `Product` definition:
product.js
```javascript
export class Product {
  constructor({ id, name, price }) {
    this.id = id
    this.name = name
    this.price = price
  }
}
```
Generate Pact tests using the `pactflow-ai generate` command:
```shell
pactflow-ai generate \
  --output ./src/api.pact.spec.ts \
  --language typescript \
  --code ./src/api.js \
  --code ./src/product.js
```
Note
You can pass multiple files to the command. Providing the correct information to API Hub for Contract Testing AI ensures it has the necessary context to generate useful output.
Generating Tests from Request-Response
If you do not have access to an OAD or code, you may instead have a large test suite that includes request and response logs, or detailed behavioral descriptions describing how the system should behave, such as those found in a BDD suite.
In such scenarios, creating a suite of Pact tests from these recordings is recommended.
Use the following request-response pair, captured from the output of a `curl` command and formatted as an HTTP message:
get.request.http
```
GET /product/10 HTTP/1.1
Host: api.example.com
User-Agent: curl/8.1.2
Authorization: Bearer notarealtoken
Accept: application/json; charset=UTF-8
```
get.response.http
```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 96

{
  "id": "10",
  "name": "Aussie",
  "type": "Pizza",
  "version": "v1",
  "price": 9.99
}
```
Run the following command to generate a Pact test:
```shell
# --request: path to the request description
# --response: path to the response description
pactflow-ai generate \
  --request ./capture/get.request.http \
  --response ./capture/get.response.http \
  --language typescript \
  --output ./src/api.pact.spec.ts
```
Generating Tests from Multiple Sources
If multiple sources are available, such as an OAD and code, pass them to the `generate` command as follows:

```shell
pactflow-ai generate \
  --openapi ./products.yml \
  --endpoint "/product/{id}" \
  --output ./src/api.pact.spec.ts \
  --language typescript \
  --code ./src/product.js \
  --code ./src/api.js
```
When generating the output, the sources take precedence from least to most specific impact on the output, as follows:
Request-response
Code
OpenAPI Document
For example, if combining a request-response pair with an OpenAPI document, API Hub for Contract Testing AI automatically determines which part of the OAD is relevant and trims the rest (effectively building its own `--endpoint` matcher).
Using Test Templates
Test Templates allow teams to generate contract tests that align with their existing style, frameworks, and SDK versions. By defining templates as code or providing additional contextual prompts, users can ensure that generated tests match their project conventions from the start, reducing manual refactoring and improving efficiency.
Test Templates are supported for all forms of test generation.
Providing Code Templates
When generating code, you can ask API Hub for Contract Testing to use a template test to customize the output. API Hub for Contract Testing uses the template to map the style, conventions, and patterns it sees in your code.
To provide a template, use the `--template` parameter, passing the location of a file containing the template.
Example
Node JS
```typescript
import { SpecificationVersion, PactV4, MatchersV3 } from "@pact-foundation/pact";
import { ProductAPI } from './product'
import axios from "axios";

// Extract matchers here to improve readability when used in the test
const { like } = MatchersV3;

// Create a 3 level test hierarchy
//
// 1. Top level describe block containing the name of the API being tested
// 2. Describe block for the specific API endpoint
// 3. Test block for the specific test case
// 4. Execute the test case
// 5. Call the API under test
// 6. Assert the response
// 7. Use Pact matchers to constrain and test the Provider response
// 8. Use Jest matchers to assert the API client behaviour

// Top level - name of the API
describe("Product API", () => {
  // Use the PactV4 class, and serialise the Pact as V4 Pact Specification
  const pact = new PactV4({
    consumer: "ProductConsumer",
    provider: "ProviderProvider",
    spec: SpecificationVersion.SPECIFICATION_VERSION_V4,
  });

  // Level 2 - Describe block for the specific API endpoint
  describe("GET /products/:id", () => {
    // Level 3 - Test block for the specific test case
    test("given a valid product, returns 200", async () => {
      await pact
        .addInteraction()
        .given("a product with id 1 exists")
        .uponReceiving("a request for a valid product")
        // Avoid matchers on the request unless necessary
        .withRequest("GET", "/products/1", (builder) => {
          builder.headers({ Accept: "application/json" });
        })
        .willRespondWith(200, (builder) => {
          // Use loose matchers where possible, to avoid unnecessary constraints on the provider
          builder.jsonBody(
            like({
              id: 1,
              name: "Product 1",
              price: 100,
            })
          );
        })
        .executeTest(async (mockserver) => {
          // Instantiate the ProductAPI client
          const productAPI = new ProductAPI(mockserver.url);

          // Call the API under test
          const product = await productAPI.getProductById(1);

          // Use Jest matchers to assert the response
          expect(product).toEqual({
            id: 1,
            name: "Product 1",
            price: 100,
          });
        });
    });
  });
});
```
Passing Additional Prompts
When generating code, you can specify additional instructions for API Hub for Contract Testing AI to customize the output. For example, provide extra guidelines or configurations useful for handling special cases, overriding default behaviors, or adding constraints to the generation logic for a specific test. To customize the output with prompts, use the `--instructions` parameter.
Note
You can provide instructions as a direct string or read them from a local file.
Example
To provide specific updates or constraints for the test generation, use the following instructions:
```shell
--instructions "Include the 'X-HMAC-SIGNATURE' header in all GET requests (format: 'SHA256-HMAC-SIGNATURE: {sig}')"
```
Alternatively, you can load instructions from a file: `--instructions @/path/to/instructions.txt`. This instructs the test generation process to read the file and use its content as the instructions.
Example
prompts.txt:

```
* Make sure to cover happy and non-happy paths
* Specifically, ensure to include test cases for the positive (HTTP 200) scenario and negative scenarios, specifically the case of 400, 401 and 404
* Only include endpoints/properties used by the API client - do not include additional fields in the OAS that are not in the client code
* You can check the properties used in the Product class to help make this determination
* Use the Jest testing framework
* Use the native Jest expect (https://jestjs.io/docs/expect) matchers such as `toEqual` and `toBeTruthy`
* Prefer the use of the async/await pattern when using Promises
* Use the PactV4 interface
```
Reviewing Pact Tests
Use the `pactflow-ai review` command to analyze an existing Pact test and ensure it follows best practices.
Tip
This feature is currently in beta. Supported languages include JavaScript and Java.
```shell
pactflow-ai review \
  --test ./src/api.pact.spec.ts \
  --code ./src/api.js \
  --code ./src/product.js
```
How to Apply Changes
By default, the review command provides feedback and suggested changes without modifying files.
To apply the suggestions automatically, use the `--apply` flag:

```shell
pactflow-ai review \
  --test ./test.js \
  --code ./src.js \
  --apply
```
The command requires a clean Git state. If the files:
are outside a Git repository,
contain uncommitted changes, or
include staged but uncommitted changes,
the command will fail by default to avoid introducing unreviewable modifications.
To override this behavior, use one or more of the following flags:
`--allow-no-vcs` - Allow editing files not tracked in a Git repository.
`--allow-dirty` - Allow changes to files with uncommitted modifications.
`--allow-staged` - Allow changes to files with staged but uncommitted changes.
Tip
Run `pactflow-ai review --help` to see all available options and usage examples.
Best Practices
1. Check API Hub for Contract Testing's work
While API Hub for Contract Testing AI is a powerful tool, it is still capable of making mistakes, and you should always validate the code it suggests. Use the following tips to ensure you are accepting accurate, secure suggestions:
Understand the suggested code before you use it.
Ensure the tests are appropriately scoped and target your system under test (SUT), usually an API client package.
Note
Make sure the tests do not use a generic HTTP client (see the example below). We do our best to prevent this, but it can occasionally be generated. This is particularly relevant for `--openapi` and `--request`/`--response` generated code. Without knowing your codebase, a placeholder is used, which needs to be replaced with the relevant calls to your codebase.

```typescript
// AI generated a dummy client that actually works
class ProductClient {
  constructor(private baseUrl: string) {}

  async getProduct(productId: string) {
    return fetch(`${this.baseUrl}/product/${productId}`, {
      method: 'GET',
      headers: {
        'Authorization': 'Bearer xyz',
        'Accept': 'application/json; charset=UTF-8',
      },
    });
  }
}

// ...

// Test then uses this dummy client, instead of the real one.
// We no longer have a trustworthy test!
return provider.executeTest(async (mockserver) => {
  const client = new ProductClient(mockserver.url);
  const response = await client.getProduct(10);

  expect(response.data).to.deep.equal({
    id: 27,
    storeId: '009111fc-992a-4cae-96d4-6507b657b0e4',
    price: 99.09,
    categories: ['hardware'],
  });
});
```
Be sure the tests improve and challenge the quality of your code base.
Tests that simply restate your code can be beneficial for contract testing, but generally, they do not enhance your code quality.
Make sure the tests follow the conventions of your project.
Use automated tests and tooling to check API Hub for Contract Testing AI's prompts. With the help of tools like linting, code scanning, and IP scanning, you can automate an additional layer of security and accuracy checks.
2. Provide Context
In AI, context is key. The more relevant information you provide, the more accurate and reliable the results.
Examples:
When generating code from OpenAPI Descriptions (OAD):
Ensure the OAD is valid. Use API Hub for Design, or tools like https://editor.swagger.io or Spectral to quickly validate and enforce standards.
Make sure to utilize the `description` property across different elements, as it enhances context and clarifies intent.
Incorporate examples to demonstrate practical applications and contextual relevance.
When providing code:
Include all relevant code. For instance, if you have a class that represents your API and another class for the resource, provide both of those files.
3. Use Test Templates
Test Templates minimize the need for post-generation refactoring. If a recurring generation pattern emerges that you want to avoid, modify the Test Template to enable API Hub for Contract Testing AI to produce more refined outputs.
4. Review Pact Tests
Refer to the Reviewing Pact Tests section and make sure you review your Pact tests.