Snap Camera Kit Web SDK

The Camera Kit Web SDK allows web developers to build Snap's core AR Lens technology into their applications.

Minimum browser requirements

  • Chrome 73+
  • Safari 15+
    • macOS 12+, iOS 15+, iPadOS 15+
    • SIMD: Unsupported
    • Web Worker Mode*: Unsupported
  • Edge 79+
    • Edge is currently under evaluation. However, since the new Edge is Chromium-based, expectations are similar to those for Chrome.
  • Firefox - Under Evaluation
    • Web Worker Mode*: Firefox 105+

*Web Worker Mode requires OffscreenCanvas support.

Prerequisites

Snap Developer setup

You'll need a Snap Developer account, and you'll need to apply for access to Camera Kit Web SDK. You can find more info on that here.

You may also want to familiarize yourself with how to access and manage AR content (i.e. Lenses). You can read about that here.

Development environment

This guide assumes you've already set up an NPM package, you're using TypeScript, and have some way to build and host your project during development (e.g. using Webpack).

Using the SDK in a JavaScript project

The SDK is authored in TypeScript and distributes type definitions. All the examples here are presented in TypeScript. We encourage the use of TypeScript in projects that consume the SDK, but it's also fully compatible with JavaScript projects.

Content Security Policy

If your project already has a Content Security Policy in place, you'll likely need to make some changes in order for Camera Kit Web SDK to work.

When it bootstraps, Camera Kit Web SDK downloads an executable WebAssembly file containing the Lens rendering engine. This file is served from an optimized CDN managed by Snap. You'll need to make sure your Content Security Policy allows this file to be executed.

  • connect-src must include https://*.snapar.com, otherwise Camera Kit Web will fail to initialize.
  • script-src must include https://cf-st.sc-cdn.net/ blob: 'wasm-unsafe-eval'.

Note: Some older browser versions may not support the 'wasm-unsafe-eval' source value, and it may be necessary to use 'unsafe-eval' to allow Camera Kit's downloaded WebAssembly to run.
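
For illustration, the relevant directives might look like the sketch below (wrapped here for readability). The 'self' entries are placeholders for whatever sources your application already allows; merge the Camera Kit sources into your existing directives rather than replacing them.

Content-Security-Policy:
  connect-src 'self' https://*.snapar.com;
  script-src 'self' https://cf-st.sc-cdn.net/ blob: 'wasm-unsafe-eval';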

Installing the SDK

npm install @snap/camera-kit

You can find the Camera Kit Web SDK package on npmjs.com here.

Importing Camera Kit

Currently, the SDK distributes JavaScript modules. We may support other module formats in the future (e.g. CommonJS), but for now you'll need to use import syntax to use the Camera Kit Web SDK.
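
For example, a typical import looks like this (CommonJS require() is not currently supported):

// ESM import – the package does not currently ship a CommonJS build.
import { bootstrapCameraKit } from "@snap/camera-kit";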

Bootstrapping the SDK

With the SDK installed and imported, the first thing to do is bootstrap the SDK. When bootstrapping, the SDK will download the WebAssembly runtime which is used to render Lenses. This is also where you'll configure the SDK according to your needs.

To call {@link bootstrapCameraKit}, you'll need to provide an apiToken. Once you've completed the Getting set up in our portals section of the Getting Started guide, you'll be able to find this in the Snap Kit Portal.

import { bootstrapCameraKit } from "@snap/camera-kit";

(async function main() {
    const apiToken = "Your API Token value copied from the Camera Kit Portal";
    const cameraKit = await bootstrapCameraKit({ apiToken });
})();

Creating a CameraKitSession

In order to render Lenses, you must first create a {@link CameraKitSession}. Each session represents a rendering pipeline - it connects an input media source (e.g. a webcam) to Camera Kit's AR engine, applies a Lens, and renders the output to a <canvas> element.

There are two ways to create a session. If you already have a <canvas> element on your page that you'd like for the {@link CameraKitSession} to render into, you can pass it to Camera Kit when creating the session. Otherwise, the session will create a new <canvas> element which you can add to the DOM.

// Option 1: Use an existing canvas.
const canvas = document.getElementById("my-canvas") as HTMLCanvasElement;
const session = await cameraKit.createSession(canvas);

// Option 2: Let Camera Kit create a new canvas, then append it to the DOM.
const canvasContainer = document.getElementById("canvas-container");
const sessionWithOwnCanvas = await cameraKit.createSession();
canvasContainer.appendChild(sessionWithOwnCanvas.output.live);

There are actually two different output canvases:

  • live: This canvas renders content intended for the Lens user. Depending on the Lens being used, this canvas may include UI elements, prompts, or other content that is only meant to be seen by the user of the Lens.
  • capture: This canvas renders content intended for presenting to other users.

These two output canvases correspond to the two different RenderTargets a Lens may use to render its content. Not all Lenses render different content to live vs. capture, so it's important to understand how the Lenses you plan to use make use of these two outputs.

No rendering will happen yet, and the output canvas associated with this new CameraKitSession will be blank. Frames are not processed until you start playback by calling the CameraKitSession's play() method. This will be discussed below.
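
For example, once playback has started (see Playback below), the capture output can be used for a share or save flow. Here's a minimal sketch; it assumes the capture canvas is exposed as session.output.capture, mirroring the session.output.live property shown above (check the API docs for the exact property name):

// Assumption: `session.output.capture` is an HTMLCanvasElement, like `session.output.live`.
const captureCanvas = session.output.capture;
// Grab a still frame as a data URL, e.g. to show in an <img> or upload.
const snapshot = captureCanvas.toDataURL("image/png");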

Creating a CameraKitSource

In order for Camera Kit Web SDK to render anything, it must have a source of imagery to be processed by the AR engine. The Lens content will be rendered on top of this source media.

The most common source of input media is the user's webcam. Camera Kit Web SDK provides a helper method to create a CameraKitSource object from a MediaStream. You can use getUserMedia to obtain a MediaStream with video from the user's webcam.

Note that calling getUserMedia will prompt the user to grant the webpage access to their camera.

Once we have a CameraKitSource, we can tell the CameraKitSession to use this source for rendering.

import { createMediaStreamSource, Transform2D } from "@snap/camera-kit";

const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const source = createMediaStreamSource(stream, { transform: Transform2D.MirrorX, cameraType: 'user' });
await session.setSource(source);

In this example, we also mirror the source stream (which feels more natural in most cases), and we indicate this source comes from a front-facing camera. To read more about these options, see below.

Camera Kit Web SDK has helper methods to create a CameraKitSource from:

  • A MediaStream object, which could come from the user's camera, a WebRTC connection, a <canvas> via the captureStream() method (see the sketch below), or elsewhere.
  • A <video> element (i.e. HTMLVideoElement)
  • An <img> element (i.e. HTMLImageElement)
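
For instance, here's a hedged sketch of feeding a <canvas> into Camera Kit via captureStream(), reusing the createMediaStreamSource helper shown earlier. The "drawing-canvas" element id is hypothetical:

// Capture a MediaStream from an existing <canvas> (hypothetical id) at ~30 FPS.
const sourceCanvas = document.getElementById("drawing-canvas") as HTMLCanvasElement;
const canvasStream = sourceCanvas.captureStream(30);
// Wrap the stream in a CameraKitSource and hand it to the session.
const canvasSource = createMediaStreamSource(canvasStream);
await session.setSource(canvasSource);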

Loading, applying, and removing Lenses

In Camera Kit Web SDK -- just like in the Snapchat app itself -- each individual AR experience is called a Lens. Lenses are created using Lens Studio. You can find more information about acquiring Lenses here.

You can find Lenses and Lens Groups in the Camera Kit Portal. This is where you manage your Lenses and Lens Groups, and where you can find Lens IDs and Lens Group IDs. You can read more about managing Lenses in the Camera Kit Portal here.

Before applying a Lens to our session, we have to load metadata about that Lens. Lenses can be loaded either one at a time, or entire Lens Groups can be loaded at once. You'll need a Lens ID and its Lens Group ID to complete this step.

// Loading a single lens
const lens = await cameraKit.lensRepository.loadLens("<Lens ID>", "<Lens Group ID>");
await session.applyLens(lens);

// Loading one or more Lens Groups – Lenses from all groups are returned as a single array of lenses.
const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    "<Lens Group ID 1>",
    "<Lens Group ID 2>",
]);
await session.applyLens(lenses[0]);

Removing the Lens

You can also remove the currently-applied Lens:

await session.removeLens();

After removing the Lens, when the session renders it will simply render the input media directly to the output canvas.

Playback

The {@link CameraKitSession} will only process input video frames and render them to the output when you tell it to do so – this way, you can control when Camera Kit is using a client's resources (e.g. CPU and GPU compute cycles). You can tell the session to play or pause.

session.play();

// ...sometime later
session.pause();

Playing and pausing the different RenderTargets

By default, play() will only begin rendering the live output canvas. You can specify which canvas to render by passing an argument:

session.play('live');
session.play('capture');

Calling pause() with no arguments will pause both outputs. But just like with play(), you can pass an argument to select which canvas to pause.

session.pause('live');
session.pause('capture');

Error Handling

Camera Kit Web SDK methods may throw errors, or return rejected Promises -- these are documented in the API docs. It is good practice to handle such cases, to provide a good experience to your users.
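
For example, here's a minimal sketch of guarding a call that may reject, reusing the session and lens variables from the examples above:

try {
  // applyLens() returns a Promise that may reject if something goes wrong while applying the Lens.
  await session.applyLens(lens);
} catch (error) {
  // Fall back gracefully, e.g. keep showing the unmodified camera feed.
  console.error("Failed to apply Lens:", error);
}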

Errors may also occur during Lens rendering. For example, Lenses contain their own scripting, which could throw an error. A rendering error could also occur if a Lens attempts to use a feature that is not supported by Camera Kit Web SDK.

When such an error (a LensExecutionError) occurs, the Lens is automatically removed from the CameraKitSession. An error event is emitted so that your application can respond appropriately. You can listen for these error events like so:

session.events.addEventListener('error', (event) => {
  console.error(event.detail.error);

  if (event.detail.error.name === 'LensExecutionError') {
    // The currently-applied Lens encountered a problem that is most likely unrecoverable and the Lens has been removed.
    // Your application may want to prevent this Lens from being applied again.
  }
});

Putting it all together

Using the examples above, here's a complete example of the minimal Camera Kit Web SDK integration:

import { bootstrapCameraKit, createMediaStreamSource, Transform2D } from "@snap/camera-kit";

(async function main() {
    const apiToken = "Your API Token value copied from the SnapKit developer portal";
    const cameraKit = await bootstrapCameraKit({ apiToken });

    const canvas = document.getElementById("my-canvas") as HTMLCanvasElement;
    const session = await cameraKit.createSession({ liveRenderTarget: canvas });
    session.events.addEventListener('error', (event) => {
      if (event.detail.error.name === 'LensExecutionError') {
        console.log('The current Lens encountered an error and was removed.', event.detail.error);
      }
    });

    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    const source = createMediaStreamSource(stream, { transform: Transform2D.MirrorX, cameraType: 'user' });
    await session.setSource(source);

    const lens = await cameraKit.lensRepository.loadLens("<Lens ID>", "<Lens Group ID>");
    await session.applyLens(lens);

    session.play();
    console.log("Lens rendering has started!");
})();

Advanced use cases

Logging

By default, Camera Kit Web SDK does very minimal logging. Specifying 'console' as the logger will cause more log statements to be printed to the browser's console, which may be useful during development.

const cameraKit = await bootstrapCameraKit({
  apiToken: '<apiToken>',
  logger: 'console',
});

Keyboard support for Lenses

Some Lenses allow for keyboard input. The SDK provides a <textarea> element (HTMLTextAreaElement) that can be added to your page; it sends its text to the active Lens whenever the user presses the Enter key.

const textAreaContainer = document.getElementById('text-area-container');
const textArea = session.keyboard.getElement();
textAreaContainer.appendChild(textArea);

Alternatively, an event is emitted when a Lens is expecting text input. You can then implement your own UI and logic for obtaining text input from your users and send it back to the Lens. For example, something like:

const input = document.getElementById('my-text-area');
session.keyboard.addEventListener('active', () => {
  input.classList.remove('hidden');
});

input.addEventListener('keyup', () => {
  session.keyboard.sendInputToLens(input.value);
});

See an example of this here.

Customizing CameraKitSource

Setting the render size

By default, Camera Kit will render its output at the same resolution as the video input you provided. But you can also tell Camera Kit to render at a different resolution.

Keep in mind that this controls Camera Kit's render resolution, and not (necessarily) the size at which the output canvas is displayed to the user. The output canvas may be sized using HTML and CSS, and may apply its own scaling to the rendered output.

Most of the time you'll not need to set the render size – but it could be useful if your video source is, say, very high resolution. In that case, you may observe better performance by telling Camera Kit to render at a lower resolution.

await session.setSource(source);
// This must be done *after* calling `setSource()`
await source.setRenderSize(640, 480); // example render resolution, in pixels

When calling getUserMedia, you can get the best performance by requesting, via constraints, the resolution at which you want to render – then you won't need to use setRenderSize at all.
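
For example, here's a minimal sketch that requests a 720p stream up front (the exact dimensions are placeholders):

// Ask the browser for (approximately) the resolution we intend to render at.
const stream = await navigator.mediaDevices.getUserMedia({
  video: { width: { ideal: 1280 }, height: { ideal: 720 } },
});
const source = createMediaStreamSource(stream, { cameraType: 'user' });
await session.setSource(source);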

Camera type

When setting up a CameraKitSource, you can specify whether it is a front-facing or back-facing camera. By default, the media source is treated as a front-facing camera. If it is a back-facing camera, be sure to specify that when you create your source; this ensures that Lenses created for back-facing cameras (e.g. World AR) render properly.

const stream = await navigator.mediaDevices.getUserMedia(constraints);
const source = createMediaStreamSource(stream, { cameraType: 'environment' });
await session.setSource(source);

FPS limit

In some cases, it may be desirable to set a limit on the FPS at which Camera Kit renders. By default, Camera Kit will attempt to render frames at the same rate as the source media. But, for example, if your input media source has a very high framerate you may want to limit Camera Kit's rendering framerate to achieve better performance.

This can be done at the CameraKitSession level:

await session.setFPSLimit(60);

Or at the CameraKitSource level:

const stream = await navigator.mediaDevices.getUserMedia(constraints);
const source = createMediaStreamSource(stream, { fpsLimit: 60 });

These options can also be used with any of the source-creation helpers (e.g. createMediaStreamSource).

2D transforms

Any CameraKitSource can be transformed using a matrix, to rotate, scale, or mirror the source media. This is most often used to mirror a front-facing camera stream so that it appears more natural to the user.

import { Transform2D } from "@snap/camera-kit";

await session.setSource(source);
// This must be done *after* calling `setSource()`
source.setTransform(Transform2D.MirrorX);

Metrics reporting

Camera Kit Web SDK reports certain important events which may be of interest to the application.

This is an unstable portion of the Camera Kit Web SDK's public API -- the specific events and their properties may change between versions of the SDK, without warning. We currently offer these events as a convenient way to gather more information from the SDK, but if you need to report accurate metrics (e.g. number of lens views), that is currently something you will have to implement in your application.

Currently, the only event that may be of interest is the lensView event. It is emitted whenever a Lens is removed from the CameraKitSession, indicating a complete Lens view. It contains information about the Lens's performance (e.g. FPS and frame processing times) and how long the Lens was applied.

You may listen to these events like so:

cameraKit.metrics.addEventListener('lensView', (event) => {
  console.debug(event.detail);
});