
[TOC]

Agora-RTE-Extension

Introduction

Agora RTE Extension provides the ability for extension developers to interact with the Agora RTC SDK NG's VideoTrack and AudioTrack objects, making custom video and audio processing possible.

By receiving a MediaStreamTrack or AudioNode as input, running a custom processing procedure such as a WASM module or an AudioWorkletNode, and finally emitting a processed MediaStreamTrack or AudioNode, it constructs a media processing pipeline that lets developers plug in their own media processing.

How Extension and Processor Interact With Agora RTC SDK NG

A Processor connects to other Processors with the pipe method:

processorA.pipe(processorB);

The pipe method returns the Processor that was passed in as a parameter, enabling a function-chaining style:

// processor is actually processorB
const processor = processorA.pipe(processorB);

// function chaining
processorA.pipe(processorB).pipe(processorC);

Since AgoraRTC SDK NG v4.10.0, the ILocalVideoTrack and ILocalAudioTrack objects also have a pipe method:

const localVideoTrack = await AgoraRTC.createCameraVideoTrack();

localVideoTrack.pipe(videoProcessor);

To have the processed media rendered locally and transmitted through WebRTC, the processorDestination property on ILocalVideoTrack and ILocalAudioTrack has to be the final destination of the pipeline:

localVideoTrack.pipe(videoProcessor).pipe(localVideoTrack.processorDestination);

An Extension receives injected utilities such as the logger and reporter during the AgoraRTC.registerExtensions call:

AgoraRTC.registerExtensions([videoExtension, audioExtension]);

An Extension also provides a createProcessor method for constructing a Processor instance:

const videoProcessor = videoExtension.createProcessor();

Putting it all together:

const videoExtension = new VideoExtension();
AgoraRTC.registerExtensions([videoExtension]);

const localVideoTrack = await AgoraRTC.createCameraVideoTrack();
const videoProcessor = videoExtension.createProcessor();

localVideoTrack.pipe(videoProcessor).pipe(localVideoTrack.processorDestination);

Extension and Processor APIs for extension developers

Extension

Extension._createProcessor

The abstract class Extension has one abstract method, _createProcessor, that needs to be implemented:

abstract class Extension<T extends BaseProcessor> {
  abstract _createProcessor(): T;
}

When implemented, it should return a VideoProcessor or AudioProcessor instance.

When an AgoraRTC developer calls extension.createProcessor(), it returns the processor produced by _createProcessor.

Extension.setLogLevel

The abstract class Extension has one static method, setLogLevel:

abstract class Extension<T extends BaseProcessor> {
  public static setLogLevel(level: number): void;
}

An AgoraRTC developer calling Extension.setLogLevel(level) will set the output log level of the extension.

Extension.checkCompatibility

The abstract class Extension has one optional abstract public method, checkCompatibility, that can be implemented:

abstract class Extension<T extends BaseProcessor> {
  public abstract checkCompatibility?(): boolean;
}

When implemented, it should return a boolean indicating whether the extension can run in the current browser environment.
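
For instance, a minimal sketch of a hypothetical extension that requires WebAssembly and OffscreenCanvas support; the exact features you probe depend on your extension:

class YourExtension extends Extension<YourProcessor> {
  // _createProcessor omitted for brevity (see Extending Extension below)

  public checkCompatibility(): boolean {
    // probe only for the features this hypothetical extension actually needs
    const hasWasm = typeof WebAssembly !== 'undefined';
    const hasOffscreenCanvas = typeof OffscreenCanvas !== 'undefined';
    return hasWasm && hasOffscreenCanvas;
  }
}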

VideoProcessor

VideoProcessor.name

The abstract property name on VideoProcessor has to be implemented in order to name the processor:

abstract name: string;

VideoProcessor.onPiped

The optional abstract method onPiped can be implemented in order to be notified when the processor is connected to a pipeline with an ILocalVideoTrack as its source:

abstract onPiped?(context: IProcessorContext): void;

It is only called when an ILocalVideoTrack object from AgoraRTC is connected to the pipeline, or when the processor is connected to a pipeline that already has an ILocalVideoTrack as its source.

For a pipeline without an ILocalVideoTrack source, onPiped will not be called for any processor in that pipeline until an ILocalVideoTrack is connected to it.

videoTrack.pipe(processor); // onPiped will be called

processorA.pipe(processorB); // onPiped will NOT be called
videoTrack.pipe(processorA); // onPiped will be called for both processorA and processorB

VideoProcessor.onUnpiped

The optional abstract method onUnpiped can be implemented in order to be notified when the processor is disconnected from a pipeline:

abstract onUnpiped?(): void;
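
A common use is cleaning up resources acquired while piped. A minimal sketch, assuming a hypothetical rafHandle used by a processing loop:

class CustomVideoProcessor extends VideoProcessor {
  private rafHandle?: number; // hypothetical handle for a requestAnimationFrame loop

  onUnpiped(): void {
    // stop the processing loop when the processor leaves the pipeline
    if (this.rafHandle !== undefined) {
      cancelAnimationFrame(this.rafHandle);
      this.rafHandle = undefined;
    }
  }
}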

VideoProcessor.onTrack

The optional abstract method onTrack can be implemented in order to be notified when the previous processor or the ILocalVideoTrack feeds its output MediaStreamTrack to the current processor:

abstract onTrack?(track: MediaStreamTrack, context: IProcessorContext): void;

VideoProcessor.onEnableChange

The optional abstract method onEnableChange can be implemented in order to be notified when the processor's _enabled property changes:

abstract onEnableChange?(enabled: boolean): void | Promise<void>;

An AgoraRTC developer calling processor.enable() or processor.disable() may change the _enabled property and consequently trigger onEnableChange, but enabling an already-enabled processor or disabling an already-disabled processor will not.
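
For example, a sketch of one possible pattern (not the SDK's prescribed one) where a disabled processor forwards its unprocessed input downstream instead of the processed result:

class CustomVideoProcessor extends VideoProcessor {
  onEnableChange(enabled: boolean): void {
    if (!enabled && this.inputTrack && this.context) {
      // bypass processing: forward the original input track downstream
      this.output(this.inputTrack, this.context);
    }
    // when re-enabled, the next processed output takes over again
  }
}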

VideoProcessor._enabled

The protected property _enabled describes the enabled status of the current processor:

protected _enabled: boolean = true;

It defaults to true, but can be changed inside the processor constructor:

class CustomProcessor extends VideoProcessor {
  public constructor() {
    super();
    this._enabled = false;
  }
}

Other than that, it should not be modified directly.

VideoProcessor.enabled

The getter enabled describes the enabled status of the current processor:

public get enabled(): boolean;

VideoProcessor.inputTrack

The optional property inputTrack will be set when the previous processor or the ILocalVideoTrack feeds its output track to the current processor:

protected inputTrack?: MediaStreamTrack;

VideoProcessor.outputTrack

The optional property outputTrack will be set when the current processor calls output() to generate an output MediaStreamTrack:

protected outputTrack?: MediaStreamTrack;

VideoProcessor.ID

The read-only property ID is a random identifier for the current processor instance:

public readonly ID: string;

VideoProcessor.kind

The getter kind describes the current processor's kind, which is either audio or video:

public get kind(): 'video' | 'audio';

VideoProcessor.context

The optional property context is the current processor's IProcessorContext:

protected context?: IProcessorContext;

VideoProcessor.output

The output method should be called when the processor is about to emit a processed MediaStreamTrack:

output(track: MediaStreamTrack, context: IProcessorContext): void;

AudioProcessor

AudioProcessor shares almost all properties and methods with VideoProcessor, with one exception: an AudioProcessor's processor context is an IAudioProcessorContext. It also has several additions:

AudioProcessor.onNode

The optional abstract method onNode can be implemented in order to be notified when the previous processor or the ILocalAudioTrack feeds its output AudioNode to the current audio processor:

abstract onNode?(node: AudioNode, context: IAudioProcessorContext): void;

AudioProcessor.output

The output method should be called when the audio processor is about to emit a processed MediaStreamTrack or AudioNode:

output(track: MediaStreamTrack | AudioNode, context: IAudioProcessorContext): void;

AudioProcessor.inputNode

The optional property inputNode will be set when the previous processor or the ILocalAudioTrack feeds its output audio node to the current processor:

protected inputNode?: AudioNode;

AudioProcessor.outputNode

The optional property outputNode will be set when the current processor calls output() to generate an output AudioNode:

protected outputNode?: AudioNode;

ProcessorContext

ProcessorContext provides the ability to interact with the pipeline's source, which is an ILocalVideoTrack or ILocalAudioTrack, and to possibly affect media capture.

A ProcessorContext is assigned to the processor once the processor is connected to a pipeline that has an ILocalVideoTrack or ILocalAudioTrack as its source.

ProcessorContext.requestApplyConstraints

The requestApplyConstraints method provides the ability to change the MediaTrackConstraints used for getting the pipeline source's MediaStreamTrack:

public requestApplyConstraints(constraints: MediaTrackConstraints, processor: IVideoProcessor): Promise<void>;

Constraints supplied to requestApplyConstraints will be merged with the original constraints used for creating the ICameraVideoTrack. If several processors inside the same pipeline all request additional constraints, the pipe order is taken into account when computing the final constraints.

ProcessorContext.requestRevertConstraints

The requestRevertConstraints method provides the ability to revert a previous constraints request made with requestApplyConstraints:

public requestRevertConstraints(processor: IVideoProcessor): void;
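
As a sketch, a processor might request a specific capture resolution when it joins a pipeline and revert it when it leaves; the 1280x720 values and the pipedContext field below are hypothetical:

class CustomVideoProcessor extends VideoProcessor {
  private pipedContext?: IProcessorContext; // kept so we can revert later

  onPiped(context: IProcessorContext): void {
    this.pipedContext = context;
    // ask the pipeline source to capture at 1280x720 (hypothetical values)
    context.requestApplyConstraints({ width: 1280, height: 720 }, this);
  }

  onUnpiped(): void {
    // undo our earlier constraints request
    this.pipedContext?.requestRevertConstraints(this);
    this.pipedContext = undefined;
  }
}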

AudioProcessorContext

AudioProcessorContext inherits all the methods provided by ProcessorContext, with one addition: getAudioContext.

getAudioContext

The getAudioContext method returns the AudioContext object of the current pipeline:

public getAudioContext(): AudioContext;

Ticker

Ticker is a utility class that helps with periodic tasks.

Ticker provides a simple interface for choosing a periodic-task implementation, adding/removing a task, and starting/stopping it.

new Ticker

The Ticker constructor requires a ticker type and a tick interval as parameters:

class Ticker {
  public constructor(type: "Timer" | "RAF" | "Oscillator", interval: number);
}

Ticker has three implementations to choose from:

  • Timer: uses setTimeout as the internal timer
  • RAF: uses requestAnimationFrame as the internal timer. Most users should choose this type of Ticker, as it provides the best rendering performance
  • Oscillator: uses WebAudio's OscillatorNode as the internal timer. It keeps running even when the browser tab is not focused

interval sets the time between callbacks. It is best-effort timing, not exact timing.

Ticker.add

Ticker.add adds a task to the ticker:

public add(fn: Function): void;

Ticker.remove

Ticker.remove removes the task previously added to the ticker:

public remove(): void;

Ticker.start

Ticker.start starts the previously added task with the configured ticker type and interval:

public start(): void;

Ticker.stop

Ticker.stop stops the previously added task:

public stop(): void;
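
Putting these pieces together, a usage sketch (assuming Ticker is exported by this package) that runs a task roughly every 33 ms, best-effort, using the RAF implementation:

import { Ticker } from 'agora-rte-extension';

const ticker = new Ticker('RAF', 33);

// the task to run on every tick
ticker.add(() => {
  // periodic work, e.g. drawing and processing the current video frame
});

ticker.start();

// later, when processing should end
ticker.stop();
ticker.remove();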

Logger

Logger is a global utility singleton that helps with logging. It provides four log levels for logging to the console.

When the extension is registered with AgoraRTC.registerExtensions and the AgoraRTC developer chooses to upload logs, extension logs written with Logger will also be uploaded.

Logger.info, Logger.debug, Logger.warning, Logger.error

These methods log at different levels:

public info(...args: any[]): void;
public debug(...args: any[]): void;
public warning(...args: any[]): void;
public error(...args: any[]): void;
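
A usage sketch, assuming access to the Logger instance injected into the extension (see AgoraRTC.registerExtensions above):

// logger: the injected Logger instance (hypothetical variable name)
logger.debug('frame processed');                    // verbose diagnostics
logger.info('extension initialized');               // general information
logger.warning('WebGL unavailable, falling back');  // recoverable problems
logger.error('failed to load WASM module');         // errors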

Logger.setLogLevel

Logger.setLogLevel sets the output log level of the extension:

public setLogLevel(level: number): void;

Reporter

Reporter is a global utility singleton that helps with event reporting to the Agora analytics platform.

Reporter.reportApiInvoke

Reporter.reportApiInvoke can report a public API invocation event to the Agora analytics platform:

interface ReportApiInvokeParams {
  name: string;
  options: any;
  reportResult?: boolean;
  timeout?: number;
}
interface AgoraApiExecutor<T> {
  onSuccess: (result: T) => void;
  onError: (err: Error) => void;
}

public reportApiInvoke<T>(params: ReportApiInvokeParams): AgoraApiExecutor<T>;

It accepts a ReportApiInvokeParams object as its parameter:

  • ReportApiInvokeParams.name: the name of the public API
  • options: the arguments, or any other options, related to this API invocation
  • reportResult: whether to report the result of the API invocation
  • timeout: how long the Reporter waits before considering the API call timed out

It returns two callback methods, onSuccess and onError, which should be called when the API call succeeds or fails, respectively.
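
A usage sketch; the API name, options, and applySetEffect helper below are hypothetical, and we assume access to the injected Reporter instance described above:

// reporter: the injected Reporter instance (see AgoraRTC.registerExtensions above)
const executor = reporter.reportApiInvoke<boolean>({
  name: 'setEffect',          // hypothetical public API name
  options: { enabled: true }, // the arguments of this invocation
  reportResult: true,
  timeout: 5000,              // assumed to be in milliseconds
});

applySetEffect()              // hypothetical implementation of the API being reported
  .then((result) => executor.onSuccess(result))
  .catch((err: Error) => executor.onError(err));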

Extending Extension

Extending an Extension is fairly straightforward, as we only need to implement the _createProcessor abstract method:

import { Extension } from 'agora-rte-extension';

class YourExtension extends Extension<YourProcessor> {
  protected _createProcessor(): YourProcessor {
    return new YourProcessor(); 
  }
}

Extending Processor

There are several abstract methods that can be implemented; they are called at different points in the processing pipeline.

onTrack and onNode

The onTrack and onNode methods are called when the previous processor or LocalTrack generates output. They are the main entry points for us to process media:

class CustomVideoProcessor extends VideoProcessor {
  protected onTrack(track: MediaStreamTrack, context: IProcessorContext){}
}

class CustomAudioProcessor extends AudioProcessor {
  protected onNode(node: AudioNode, context: IAudioProcessorContext){} 
}

Video Processing

Typically, video processing requires extracting each video frame as ImageData or an ArrayBuffer.

Since Insertable Streams are not yet supported by all browser vendors, we use the canvas API here to extract video frame data:

class CustomVideoProcessor extends VideoProcessor {
  private canvas: HTMLCanvasElement;
  private ctx: CanvasRenderingContext2D;
  private videoElement: HTMLVideoElement;

  constructor() {
    super();

    // initialize the canvas element
    this.canvas = document.createElement('canvas');
    this.canvas.width = 640;  // the canvas's width and height determine the output video dimensions
    this.canvas.height = 480;
    this.ctx = this.canvas.getContext('2d')!;

    // initialize the video element
    this.videoElement = document.createElement('video');
    this.videoElement.muted = true;
  }

  onTrack(track: MediaStreamTrack, context: IProcessorContext) {
    // load the MediaStreamTrack into the HTMLVideoElement
    this.videoElement.srcObject = new MediaStream([track]);
    this.videoElement.play();

    // extract ImageData
    this.ctx.drawImage(this.videoElement, 0, 0);
    const imageData = this.ctx.getImageData(0, 0, this.canvas.width, this.canvas.height);
  }
}

As written, the video frame data is extracted only once inside the onTrack method, but we need to run the extraction in a loop to output a constant frame rate. Luckily, we can leverage requestAnimationFrame to do this for us:

class CustomVideoProcessor extends VideoProcessor {
  onTrack(track: MediaStreamTrack, context: IProcessorContext) {
    this.videoElement.srcObject = new MediaStream([track]);
    this.videoElement.play();

    this.loop();
  }

  loop() {
    this.ctx.drawImage(this.videoElement, 0, 0);
    const imageData = this.ctx.getImageData(0, 0, this.canvas.width, this.canvas.height);

    this.process(imageData);

    requestAnimationFrame(() => this.loop());
  }

  process(imageData: ImageData) {
    // your custom video processing logic
  }
}

Generating Video Processing Output

When we've finished video processing, the Processor's output method should be used to emit the video output. The output method requires a MediaStreamTrack and an IProcessorContext as its parameters, so we need to assemble the processed video buffer into a MediaStreamTrack.

Usually the canvas's captureStream method helps us with this:

class CustomVideoProcessor extends VideoProcessor {
  doneProcessing() {
    // create a MediaStream from the canvas and grab its MediaStreamTrack
    const msStream = this.canvas.captureStream(30);
    const outputTrack = msStream.getVideoTracks()[0];

    // output the processed track
    if (this.context) {
      this.output(outputTrack, this.context);
    }
  }
}

Audio Processing

Audio processing differs from video processing in that it typically requires WebAudio's capabilities for custom processing.

We can implement the onNode method to receive a notification when the previous audio processor or ILocalAudioTrack generates an output AudioNode:

class CustomAudioProcessor extends AudioProcessor {
  onNode(node: AudioNode, context: IAudioProcessorContext) {}
}

We can call IAudioProcessorContext.getAudioContext to get the AudioContext and create our own AudioNode:

class CustomAudioProcessor extends AudioProcessor {
  onNode(node: AudioNode, context: IAudioProcessorContext) {
    // acquire the AudioContext
    const audioContext = context.getAudioContext();

    // create a custom GainNode
    const gainNode = audioContext.createGain();
  }
}

Also, don't forget to connect the input audio node to our custom audio node:

class CustomAudioProcessor extends AudioProcessor {
  onNode(node: AudioNode, context: IAudioProcessorContext) {
    const audioContext = context.getAudioContext();

    const gainNode = audioContext.createGain();
    
    // connect the input node to our gain node
    node.connect(gainNode);
  }
}

Generating Audio Processing Output

When we've finished audio processing, the Processor's output method should be used to emit the audio output. The output method requires a MediaStreamTrack or AudioNode and an IAudioProcessorContext as its parameters:


class CustomAudioProcessor extends AudioProcessor {
  onNode(node: AudioNode, context: IAudioProcessorContext) {
    const audioContext = context.getAudioContext();

    const gainNode = audioContext.createGain();

    node.connect(gainNode);
    
    // output the processed node
    this.output(gainNode, context);
  }
}

Testing

WIP

Best Practices

Audio Graph Connecting

Handling Enable and Disable

Error Handling