🧪 SeeR Tests: Running & Viewing Automated Tests

Welcome to SeeR's Automated Tests section – your launchpad for rigorously validating your APIs! This is where SeeR takes your configuration, generates intelligent test cases, and executes them to ensure your API performs flawlessly.


Running API Automation

After clicking the “Run Automated APIs” button, SeeR begins analyzing your API spec.

Processing State

Test generation in progress. Please wait...

Note: This process may take 5–10 minutes, depending on the size and complexity of your Swagger file.


💡 What’s Happening Behind the Scenes?

  • SeeR scans your API spec for endpoints that define a valid 200 OK response

  • It automatically generates smart test cases for those endpoints
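The selection step above can be sketched in a few lines. This is an illustrative approximation, not SeeR's actual implementation: it walks a minimal, hypothetical OpenAPI fragment and keeps only the operations whose spec declares a 200 response.

```python
# Sketch of the endpoint-selection step: keep only operations whose
# spec defines a 200 OK response. The spec dict is a hypothetical,
# minimal OpenAPI fragment used purely for illustration.

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def operations_with_200(spec: dict) -> list[tuple[str, str]]:
    """Return (METHOD, path) pairs whose responses include a 200."""
    selected = []
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method in HTTP_METHODS and "200" in op.get("responses", {}):
                selected.append((method.upper(), path))
    return selected

spec = {
    "paths": {
        "/pets": {
            "get": {"responses": {"200": {"description": "OK"}}},
            "post": {"responses": {"201": {"description": "Created"}}},
        }
    }
}

print(operations_with_200(spec))  # [('GET', '/pets')]
```

Here only `GET /pets` is picked up: the `POST` operation declares a 201 but no 200, so no test case would be generated for it.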

🛠️ Avoid refreshing the page during this step. You’ll be notified once test generation completes.


✅ When It’s Done...

Once SeeR finishes generating your test cases, you'll be taken to the Automated Test Case Results screen.

Test Case Results Overview

List of all generated and executed test cases.


🔍 Understanding the Test Results Screen

🎛️ Filter Controls

At the top, you'll find three dropdowns to help you filter your test results:

  • Method: Filter by HTTP method (GET, POST, PUT, etc.)

  • API Path: Narrow down tests based on specific endpoints

  • Execution Result: View only passed, failed, or all test results

This helps you quickly find what you're looking for — especially in large specs.
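Conceptually, the three dropdowns just narrow the same result list, with an unset filter matching everything. The sketch below is a hypothetical model of that behavior; the record shape and field names are assumptions for the example, not SeeR's data model.

```python
# Illustrative model of the three filter dropdowns: each narrows the
# list, and a filter left unset (None) matches everything.

def filter_results(tests, method=None, path=None, result=None):
    return [
        t for t in tests
        if (method is None or t["method"] == method)
        and (path is None or t["path"] == path)
        and (result is None or t["result"] == result)
    ]

tests = [
    {"method": "GET",  "path": "/users",  "result": "passed"},
    {"method": "POST", "path": "/users",  "result": "failed"},
    {"method": "GET",  "path": "/orders", "result": "passed"},
]

# Method = GET and Execution Result = passed, API Path left unset:
print(filter_results(tests, method="GET", result="passed"))
```

Combining filters this way is why the controls stay useful on large specs: each dropdown only ever shrinks the list, so you can drill down step by step.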


🟢🔴 Test Execution Status Indicators

  • 🟢 Green dot = Test Passed

  • 🔴 Red dot = Test Failed

These visual markers appear next to every test case.

Each test is run automatically after it's generated, so you immediately see what passed or failed.


▶️ Viewing Test Details: Dive Deeper! 🔍

Curious about how a specific test ran? Simply click on any test card to expand its detailed view and inspect its execution step-by-step.

Request & Response Details at Your Fingertips ✨

You'll find comprehensive information about each test run, organized into intuitive tabs:

  • Headers: See all the specific request headers that were used in the test.

    Test Case Expanded Headers

    Snapshot of the Headers tab, showing values used.

  • Parameters: View the exact path, query, and other parameters sent with the request.

    Test Case Expanded Parameters

    Snapshot of the Parameters tab, detailing parameter values.

  • Request Body: Inspect the precise payload that was sent in the request's body.

    Test Case Expanded Request Body

    Snapshot of the Request Body tab, showing the request payload.

  • Assertions: Review all the automated checks that SeeR performed against the API's response.

    Test Case Expanded Assertions

    Snapshot of the Assertions tab, listing all checks.

Each expanded test card provides:

  • A clear Description of the test's objective.

  • The Request Headers that accompanied the call.

  • The Parameters and Body payload sent.

  • The Assertions generated and evaluated against the response.

📝 These sections offer a detailed snapshot of the test's execution and are read-only. They reflect the exact input and assertions that were used when the test ran.


📑 Assertion Types

SeeR generates different types of assertions automatically:

  • JSON_PATH assertions – Checks specific values in the response

  • HEADER assertions – Validates headers like Content-Type, Authorization, etc.

  • BODY assertions – Compares static body values

  • JSON_SCHEMA assertions – Validates the structure of the response against the OpenAPI schema
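The four assertion types can be sketched as small checks against a response. Everything below is an assumption for illustration: the response shape, function names, and the toy schema check (which only verifies required keys and types, standing in for full OpenAPI schema validation).

```python
# Illustrative sketches of the four assertion types, evaluated against
# a sample response. The response shape and helpers are hypothetical.

response = {
    "status": 200,
    "headers": {"Content-Type": "application/json"},
    "body": {"id": 7, "name": "Ada"},
}

def assert_json_path(resp, path, expected):
    # JSON_PATH: check one specific value in the response body
    # (a simple dotted path here, e.g. "user.name").
    value = resp["body"]
    for key in path.split("."):
        value = value[key]
    return value == expected

def assert_header(resp, name, expected):
    # HEADER: validate a response header such as Content-Type.
    return resp["headers"].get(name) == expected

def assert_body(resp, expected):
    # BODY: compare the whole body against a static expected value.
    return resp["body"] == expected

def assert_json_schema(resp, schema):
    # JSON_SCHEMA: toy structural check (required keys + Python types),
    # standing in for validation against the OpenAPI schema.
    props = schema.get("properties", {})
    return all(
        key in resp["body"] and isinstance(resp["body"][key], typ)
        for key, typ in props.items()
    )

print(assert_json_path(response, "name", "Ada"))
print(assert_header(response, "Content-Type", "application/json"))
print(assert_body(response, {"id": 7, "name": "Ada"}))
print(assert_json_schema(response, {"properties": {"id": int, "name": str}}))
```

Each check returns True for this sample response; a failing check is what flips a test card's indicator from 🟢 to 🔴.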


🔄 Sync Status: GitHub Integration Indicator

In the top-right, you'll see a label:

  • Sync: false → GitHub not connected

  • Sync: true → GitHub is connected and test results are tied to commits

This doesn't affect Swagger-based test generation, but it becomes useful if you later integrate GitHub to automatically trigger the tests impacted by your commits.


🤝 Managing Tests with GitHub Integration

When your GitHub repository is connected (Sync: true), SeeR offers an advanced workflow to manage tests associated with your code commits. You'll see tests created or updated based on your recent changes.

Discovering New and Updated Tests

In the Automated Test Case Results screen, look for these indicators:

Automated Test Case Results with New/Updated Indicators

Overview showing 'UPDATED' and 'NEW TEST' badges.

  • 🟡 UPDATED: An existing test case has been modified due to changes in your API specification.

  • 🔵 NEW TEST: A completely new test case has been generated for a newly discovered or significantly changed API endpoint.

Reviewing and Acting on Changes

Click on any test card marked UPDATED or NEW TEST to review its details and decide whether to accept or reject the changes.

Reviewing UPDATED Tests

When you expand an UPDATED test, SeeR provides a side-by-side comparison of the old and new test configurations, highlighting specific changes in headers, parameters, and the request body.

Updated Test Details with Diff View

Expanded 'UPDATED' test showing a diff view for review.

  • Review Changes: Examine the highlighted differences in the Headers, Parameters, and Request Body tabs.

  • Accept Changes: Click "Accept Changes" to integrate the updated test case into your active test suite.

  • Reject Changes: Click "Reject Changes" to discard the proposed updates. The test will revert to its previous configuration.

Reviewing NEW TEST Entries

For NEW TEST entries, you'll see the full details of the newly generated test case.

New Test Details

Expanded 'NEW TEST' showing its full configuration for review.

  • Review New Test: Examine the proposed new test case configuration.

  • Accept Test: Click "Accept Test" to add this new test case to your active test suite.

  • Reject Test: Click "Reject Test" to discard this new test case. It will not be added to your suite.

💡 Tip: Regularly review UPDATED and NEW TEST entries to keep your test suite aligned with your latest API developments!


➡️ Next up: Interacting with Reports
