@telefonica/acceptance-testing

Setup

1. Install the jest-environment-puppeteer and @telefonica/acceptance-testing packages

yarn add --dev jest-environment-puppeteer @telefonica/acceptance-testing

2. Set up jest-environment-puppeteer in your jest-acceptance.config.js

module.exports = {
  //...
  globalSetup: 'jest-environment-puppeteer/setup',
  globalTeardown: 'jest-environment-puppeteer/teardown',
  testEnvironment: 'jest-environment-puppeteer',
  //...
};

3. Create a jest-puppeteer.config.js file in your repo

const config = require('@telefonica/acceptance-testing/jest-puppeteer.config.js');
module.exports = config;

This makes your tests run inside a dockerized chromium when they run headless or in CI, and in a local chromium (the one provided by puppeteer) when you run them with UI, for example while debugging.

If you want to autostart a server before running the acceptance tests, you can configure it in your project package.json as follows:

{
  "acceptanceTests": {
    "devServer": {
      // This is the command that starts your dev server and the port where it runs
      "command": "yarn dev",
      "port": 3000
    },
    "ciServer": {
      // The same for CI server (typically a production build)
      "command": "yarn start",
      "port": 3000
    }
  }
}

Additionally, you can include host, protocol and path parameters. The path will be used to check if the server is ready:

{
  "acceptanceTests": {
    "ciServer": {
      "command": "yarn dev",
      "host": "127.0.0.1",
      "port": 3000,
      "path": "api/health",
      "protocol": "https"
    }
  }
}

The command can be overridden by setting the ACCEPTANCE_TESTING_SERVER_COMMAND environment variable. For example:

ACCEPTANCE_TESTING_SERVER_COMMAND="yarn start" yarn test-acceptance

protocol

Type: string (https, http, tcp, socket). Defaults to tcp, or to http if path is set.

To wait for an HTTP or TCP endpoint before considering the server running, include http or tcp as a protocol. Must be used in conjunction with port.
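
For example, to wait on a raw TCP port before considering the server up (a sketch using only the options documented here):

{
  "acceptanceTests": {
    "devServer": {
      "command": "yarn dev",
      "port": 3000,
      "protocol": "tcp"
    }
  }
}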

path

Type: string, defaults to null.

Path to resource to wait for activity on before considering the server running. Must be used in conjunction with host and port.

4. Set up your CI to run using the web-builder docker image

GitHub Actions example:

jobs:
  build:
    runs-on: self-hosted-novum
    container: docker.tuenti.io/service-inf/web-builder:pptr10.4-1.0.0

Important: you must use the same docker image version, and remember to update it in your CI config if you update the @telefonica/acceptance-testing package. This is the best way to make sure CI uses the same dockerized chromium version that developers use on their laptops. Otherwise the screenshot test snapshots may not match.

Writing acceptance/screenshot tests

import {openPage, screen, serverHostName} from '@telefonica/acceptance-testing';

test('example screenshot test', async () => {
  const page = await openPage({path: '/foo'});

  await screen.findByText('Some text in the page');

  expect(await page.screenshot()).toMatchImageSnapshot();
});

Running acceptance/screenshot tests

Just run:

yarn test-acceptance

or with UI:

yarn test-acceptance --ui

Important: the test-acceptance script needs a valid jest-acceptance.config.js file in your repo to work. That file should be configured with jest-environment-puppeteer as described previously. If for some reason you need a different jest config file name, you can manually set up the scripts in your package.json:

"test-acceptance": "HEADLESS=1 jest --config your-jest-config.js",
"test-acceptance-ui": "jest --config your-jest-config.js",

Just take into account that jest-environment-puppeteer must always be configured in your jest config file. Also note that tests run in UI mode by default unless you set the HEADLESS=1 env var.

Intercepting requests

If you need to intercept and mock requests in your acceptance tests, you can use the interceptRequest function:

import {openPage, screen, interceptRequest} from '@telefonica/acceptance-testing';

test('example screenshot test', async () => {
  const imageSpy = interceptRequest((req) => req.url().endsWith('.jpg'));

  imageSpy.mockReturnValue({
    status: 200,
    contentType: 'image/jpeg',
    body: myMockedJpeg,
  });

  const page = await openPage({path: '/foo'});

  expect(imageSpy).toHaveBeenCalled();
});

To mock JSON API endpoints you can use interceptRequest too, but we also provide a more convenient API wrapper over interceptRequest: createApiEndpointMock:

import {openPage, screen, createApiEndpointMock} from '@telefonica/acceptance-testing';

test('example screenshot test', async () => {
  const api = createApiEndpointMock({origin: 'https://my-api-endpoint.com'});

  const getSpy = api.spyOn('/some-path').mockReturnValue({a: 1, b: 2});
  const postSpy = api.spyOn('/other-path', 'POST').mockReturnValue({c: 3});

  const page = await openPage({path: '/foo'});

  expect(getSpy).toHaveBeenCalled();

  await page.click(await screen.findByRole('button', {name: 'Send'}));

  expect(postSpy).toHaveBeenCalled();
});

By default every mocked response will have a 200 status code. If you want to mock any other status code:

import {openPage, screen, createApiEndpointMock} from '@telefonica/acceptance-testing';

test('example screenshot test', async () => {
  const api = createApiEndpointMock({origin: 'https://my-api-endpoint.com'});

  const postSpy = api
    .spyOn('/other-path', 'POST')
    .mockReturnValue({status: 500, body: {message: 'Internal error'}});

  const page = await openPage({path: '/foo'});

  await page.click(await screen.findByRole('button', {name: 'Send'}));

  expect(postSpy).toHaveBeenCalled();
});
  • createApiEndpointMock automatically mocks CORS response headers and preflight (OPTIONS) requests for you.
  • Both interceptRequest and createApiEndpointMock return a jest mock function.
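
Since both helpers return standard jest mock functions, the usual jest mock API applies. As a sketch (the /flaky-path endpoint and the Retry button are hypothetical), you can queue a failing first response with mockReturnValueOnce and assert on the call count:

import {openPage, screen, createApiEndpointMock} from '@telefonica/acceptance-testing';

test('retry example', async () => {
  const api = createApiEndpointMock({origin: '*'});

  // First call fails with a 500, subsequent calls succeed
  const getSpy = api
    .spyOn('/flaky-path')
    .mockReturnValueOnce({status: 500, body: {message: 'Internal error'}})
    .mockReturnValue({a: 1});

  const page = await openPage({path: '/foo'});

  // Hypothetical retry button that triggers a second request to the endpoint
  await page.click(await screen.findByRole('button', {name: 'Retry'}));

  expect(getSpy).toHaveBeenCalledTimes(2);
});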

Using globs

You can also use globs for API paths and origins if you need to.

Some examples:

// any origin (default)
createApiEndpointMock({origin: '*'});

// any port
createApiEndpointMock({origin: 'https://example.com:*'});

// any domain
createApiEndpointMock({origin: 'https://*:3000'});

// any subdomain
createApiEndpointMock({origin: 'https://*.example.com:3000'});

// any second level path
api.spyOn('/some/*/path');

// accept any params
api.spyOn('/some/path?*');

// accept any value in specific param
api.spyOn('/some/path?param=*');

:information_source: We use the glob-to-regexp lib internally.

:warning: Headless acceptance tests run in a dockerized chromium, so you can't use localhost as the origin. The origin will depend on the docker configuration and host OS. For simplicity, we recommend using * as the origin for tests that mock local APIs (e.g. Next.js apps).

Uploading files

Due to a puppeteer bug or limitation, when chromium is dockerized the file to upload must exist in both the host and the container at the same path.

A helper function prepareFile is provided to facilitate this:

await elementHandle.uploadFile(prepareFile('/path/to/file'));
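
For context, a minimal sketch of using it inside a test. The page path and the file-input selector are hypothetical, and prepareFile is assumed to be exported from the package entry point like the other helpers:

import {openPage, prepareFile} from '@telefonica/acceptance-testing';

test('upload example', async () => {
  const page = await openPage({path: '/upload'});

  // page.$ returns an ElementHandle for the first element matching the selector
  const fileInput = await page.$('input[type="file"]');
  await fileInput.uploadFile(prepareFile('/path/to/file'));
});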

Collecting coverage

Set the ACCEPTANCE_TESTING_COLLECT_COVERAGE environment variable to enable coverage collection, or run with the --coverage flag.

The code must be instrumented with nyc, babel-plugin-istanbul or any istanbul-compatible tool.
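
As an illustration, a minimal babel config sketch that enables babel-plugin-istanbul only when coverage collection is requested (the preset is an assumption; use your project's own):

// babel.config.js (sketch)
module.exports = {
  presets: ['next/babel'], // assumption: replace with your project's presets
  // Instrument only when collecting acceptance coverage
  plugins: process.env.ACCEPTANCE_TESTING_COLLECT_COVERAGE ? ['istanbul'] : [],
};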

Frontend coverage information

After each test the coverage information will be collected by reading the window.__coverage__ object from the opened page.
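
To verify that a page is actually instrumented, a quick sanity check (sketch) is to read that object through puppeteer's page.evaluate:

const page = await openPage({path: '/'});

// True only if the app was built with istanbul instrumentation
const isInstrumented = await page.evaluate(() => Boolean(window.__coverage__));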

Backend coverage information

To collect coverage from your backend, you must create an endpoint that serves the coverage information and specify it in the coverageUrls property in your config. The library will make a GET request to each URL and save the report from the response as a JSON file. The default value is [].

The backend coverage will be collected after all the tests in the suite have run.

The response must be a JSON with the following structure: {coverage: data}.

Example route in Next.js to serve coverage information:

import {NextResponse} from 'next/server';

export const GET = (): NextResponse => {
  const coverage = (globalThis as any).__coverage__;
  if (coverage) {
    return NextResponse.json({coverage});
  }
  return NextResponse.json({error: 'Not found'}, {status: 404});
};

export const dynamic = 'force-dynamic';

Configuration

The coverage information will be saved as JSON files. To change the destination folder, set the coveragePath property in your config. The default value is reports/coverage-acceptance. The JSON files will be stored inside <coveragePath>/.nyc_output.

Example config:

{
  "acceptanceTests": {
    "coveragePath": "coverage/acceptance",
    "coverageUrls": ["http://localhost:3000/api/coverage"]
  }
}

Generate coverage reports

After running the tests, you can use a tool like nyc to generate the coverage reports.
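
For example, with the default coveragePath (the flags are standard nyc options):

npx nyc report \
  --temp-dir reports/coverage-acceptance/.nyc_output \
  --report-dir reports/coverage-acceptance \
  --reporter=lcov --reporter=text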

Troubleshooting

Unhandled browser errors

If you see an acceptance test failing for no apparent reason, it could be caused by an unhandled error in the browser. You can inspect it by adding a listener to the pageerror event:

// Add this wherever you have access to the opened page (e.g. right after openPage)
page.on('pageerror', (err) => {
  console.log('Unhandled browser error:', err);
  // Re-emit so the error surfaces and fails the test instead of being swallowed
  process.emit('uncaughtException', err);
});

Page errors can be ignored by setting the ACCEPTANCE_TESTING_IGNORE_PAGE_ERRORS environment variable. Do not enable this by default as it could hide legitimate errors in your tests.
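
For example (assuming any non-empty value enables it):

ACCEPTANCE_TESTING_IGNORE_PAGE_ERRORS=1 yarn test-acceptance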

Executing with --ui fails (Linux)

If your desktop environment uses Wayland, you may see the following error when running the tests with the --ui flag:

Error: Jest: Got error running globalSetup - /home/pladaria/bra/mistica-web/node_modules/jest-environment-puppeteer/setup.js, reason: ErrorEvent {
  "error": [Error: socket hang up],
  "message": "socket hang up",
  ...

To work around this issue, you can install a newer Chrome in the repo where you are using the acceptance-testing library:

  • From the repo root: npx @puppeteer/browsers install chrome@stable
  • Remove the chrome installed by puppeteer: rm -rf node_modules/puppeteer/.local-chromium/linux-901912/chrome-linux
  • Move downloaded chrome to the expected location: mv chrome/linux-<version>/chrome-linux64 node_modules/puppeteer/.local-chromium/linux-901912/chrome-linux
  • Clean up by removing the chrome folder from the repo root: rm -rf chrome

Note that this browser will only be used when running the tests with the --ui flag. In headless mode, the dockerized chromium will be used.

Debug mode

If you need additional logs to debug the acceptance-testing library, you can set the ACCEPTANCE_TESTING_DEBUG environment variable or run the acceptance-testing command with the --debug flag.
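
For example (assuming any non-empty value enables the env var):

ACCEPTANCE_TESTING_DEBUG=1 yarn test-acceptance

or:

yarn test-acceptance --debug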