
Ignoring Differences

Ignoring differences programmatically


Last updated 7 months ago

In some scenarios, you might want to ignore certain differences between expected mocks/responses and actual ones.

These differences may be due to env variables, timestamps, or other non-critical factors that don't represent true regressions.

HyperTest lets you provide custom filtering logic to ignore such differences through two filter functions:

filterFunctionToIgnoreMockDiffs:

  • This function lets you specify conditions under which a particular mock difference should be ignored.

  • Input parameters:

    • mockDiff: An object describing the difference between the recorded mock and the replayed one.

    • currentMock: The original mock object recorded during testing.

    • requestObj: The request for which this mock difference was captured.

    Check out the detailed Type Reference for Input Parameters here.

  • Return value:

    The function should return a boolean: false to ignore the difference, true to keep it.

Consider a case where you're making an outbound call to a third-party service and some metadata is sent along with it; flagging a difference for random metadata is undesirable.

You can ignore any differences originating from the metadata field, as shown in the following example:

function filterFunctionToIgnoreMockDiffs({ mockDiff, currentMock, requestObj }) { 
  // Ignore differences in the metadata field
  if (mockDiff?.evaluatedPath?.at(-2) === "metadata") return false;

  // Return true to consider this mock difference as critical
  return true;
}
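To see how the filter behaves, the sketch below calls it with hypothetical diff objects. The evaluatedPath shapes are assumptions for illustration only; real mockDiff objects are supplied by HyperTest at replay time.

```javascript
// Same filter as above: ignore diffs whose parent key is "metadata".
function filterFunctionToIgnoreMockDiffs({ mockDiff, currentMock, requestObj }) {
  if (mockDiff?.evaluatedPath?.at(-2) === "metadata") return false;
  return true;
}

// Hypothetical diff shapes, assumed for this sketch.
const metadataDiff = { evaluatedPath: ["body", "metadata", "traceId"] };
const priceDiff = { evaluatedPath: ["body", "order", "price"] };

console.log(filterFunctionToIgnoreMockDiffs({ mockDiff: metadataDiff })); // false → ignored
console.log(filterFunctionToIgnoreMockDiffs({ mockDiff: priceDiff }));    // true → kept
```

Note that the filter inspects `evaluatedPath.at(-2)`, the parent key of the differing field, so any field nested directly under a metadata object is ignored.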

filterFunctionToIgnoreResponseDiffs:

  • This function lets you specify conditions under which a particular response difference should be ignored.

  • Input parameters:

    • responseDiff: An object describing the difference between the expected response and the actual one.

    • requestObj: The request for which this response difference was captured.

    Check out the detailed Type Reference for Input Parameters here.

  • Return value:

    The function should return a boolean: false to ignore the difference, true to keep it.

Consider a case where you're sending metrics about order processing to the client. The processing time will naturally vary between a real environment and a mocked one; since this is not a real regression, it should not be flagged.

function filterFunctionToIgnoreResponseDiffs({ responseDiff, requestObj }) { 
  // Ignore differences in the processingTime field
  if (responseDiff?.evaluatedPath?.at(-1) === "processingTime") return false;

  // Return true to consider this response difference as critical
  return true;
}
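As with the mock filter, the sketch below exercises the response filter with hypothetical diff objects. The field name processingTime and the evaluatedPath shapes are assumptions chosen to match the scenario above, not values prescribed by HyperTest.

```javascript
// Ignore response diffs whose differing field is "processingTime".
function filterFunctionToIgnoreResponseDiffs({ responseDiff, requestObj }) {
  if (responseDiff?.evaluatedPath?.at(-1) === "processingTime") return false;
  return true;
}

// Hypothetical diff shapes, assumed for this sketch.
const timingDiff = { evaluatedPath: ["body", "metrics", "processingTime"] };
const statusDiff = { evaluatedPath: ["body", "order", "status"] };

console.log(filterFunctionToIgnoreResponseDiffs({ responseDiff: timingDiff })); // false → ignored
console.log(filterFunctionToIgnoreResponseDiffs({ responseDiff: statusDiff })); // true → kept
```

Here `evaluatedPath.at(-1)` is the differing field itself, so only the processingTime value is ignored while sibling fields such as an order status are still compared.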

By customizing these functions, you can tailor the test results to focus on actual regressions while ignoring known, non-critical changes.
