react-native-membrane-webrtc

react-native-membrane-webrtc is a React Native wrapper for membrane-webrtc-android and membrane-webrtc-ios. It allows you to quickly and easily create a mobile client app in React Native for the Membrane server.

Documentation

API documentation is available here

Installation

First, install react-native-membrane-webrtc with yarn or npm:

yarn add @jellyfish-dev/react-native-membrane-webrtc

or

npm install --save @jellyfish-dev/react-native-membrane-webrtc

Expo plugin

If you're using development builds with eas build or the bare workflow, you can try using the Expo plugin to do the configuration below for you. Simply run:

expo install @jellyfish-dev/react-native-membrane-webrtc

Add the plugin to your app.json if it's not already there:

{
  "expo": {
    "name": "example",
    ...
    "plugins": [
      "@jellyfish-dev/react-native-membrane-webrtc"
    ]
  }
}

If you want to use the screensharing feature, enable it like this:

{
  "expo": {
    "name": "example",
    ...
    "plugins": [
      [
        "@jellyfish-dev/react-native-membrane-webrtc",
        {
          "setUpScreensharing": true,
        }
      ]
    ]
  }
}

On the bare workflow, run expo prebuild to configure the app, then run pod install. On development builds, eas build should take care of it.
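
For example, assuming the standard project layout where the native iOS project lives in ios/:

expo prebuild
cd ios && pod install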

Android

  1. Add camera and microphone permissions to your AndroidManifest.xml (see the snippet below).
  2. Rebuild the app. That's it!
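
For reference, camera and microphone map to these standard permission entries, placed inside the <manifest> element of AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />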

iOS

On iOS, installation is a bit more complicated, because you need to set up a screen broadcast app extension for screensharing.

  1. Add camera and microphone permissions to your main Info.plist.
    <key>NSCameraUsageDescription</key>
    <string>Allow $(PRODUCT_NAME) to use the camera</string>
    <key>NSMicrophoneUsageDescription</key>
    <string>Allow $(PRODUCT_NAME) to use the microphone</string>
  2. We recommend adding audio background mode in Info.plist so that the app doesn't disconnect when it's in background:
    <key>UIBackgroundModes</key>
    <array>
      <string>audio</string>
    </array>
  3. Open your <your-project>.xcworkspace in Xcode.

  4. Create a new Broadcast Upload Extension. Select File → New → Target... → Broadcast Upload Extension → Next. Choose a name for the new target, select the Swift language, and deselect "Include UI Extension".

    (Screenshot: new target configuration)

    Press Finish. In the next alert, Xcode will ask if you want to activate the new scheme; press Cancel.

  5. Configure the app group. Go to the "Signing & Capabilities" tab, click the "+ Capability" button in the upper left corner, and select "App Groups".

    (Screenshot: App Groups configuration)

    Then in the "App Groups" add a new group or select existing. Usually group name has format group.<your-bundle-identifier>. Verify that both app and extension targets have app group and dev team set correctly.

  6. A new folder with the app extension should appear on the left, with contents like this:

    (Screenshot: app extension files)

    Replace SampleHandler.swift with MembraneBroadcastSampleHandler.swift containing this code:

    import Foundation
    import MembraneRTC
    import os.log
    import ReplayKit
    import WebRTC
    
    
    /// App Group used by the extension to exchange buffers with the target application
    let appGroup = "{{GROUP_IDENTIFIER}}"
    
    let logger = OSLog(subsystem: "{{BUNDLE_IDENTIFIER}}.MembraneBroadcastSampleHandler", category: "Broadcaster")
    
    /// An example `SampleHandler` utilizing `BroadcastSampleSource` from `MembraneRTC` sending broadcast samples and necessary notification enabling device's screencast.
    class MembraneBroadcastSampleHandler: RPBroadcastSampleHandler {
        let broadcastSource = BroadcastSampleSource(appGroup: appGroup)
        var started: Bool = false
    
    
        override func broadcastStarted(withSetupInfo _: [String: NSObject]?) {
            started = broadcastSource.connect()
    
            guard started else {
                os_log("failed to connect with ipc server", log: logger, type: .debug)
    
                super.finishBroadcastWithError(NSError(domain: "", code: 0, userInfo: nil))
    
                return
            }
    
            broadcastSource.started()
        }
    
        override func broadcastPaused() {
            broadcastSource.paused()
        }
    
        override func broadcastResumed() {
            broadcastSource.resumed()
        }
    
        override func broadcastFinished() {
            broadcastSource.finished()
        }
    
        override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
            guard started else {
                return
            }
    
            broadcastSource.processFrame(sampleBuffer: sampleBuffer, ofType: sampleBufferType)
        }
    }

    Replace {{GROUP_IDENTIFIER}} and {{BUNDLE_IDENTIFIER}} with your group identifier and bundle identifier respectively.

  7. In your project's Podfile, add the following code:

    target 'MembraneScreenBroadcastExtension' do
      pod 'MembraneRTC/Broadcast'
    end

    This new dependency should be added outside of your application target. For example:

    target 'ReactNativeMembraneExample' do
     ...
    end
    
    target 'MembraneScreenBroadcastExtension' do
     pod 'MembraneRTC/Broadcast'
    end
  8. Run pod install in your ios/ directory.

  9. Add the following constants to your Info.plist:

    <key>AppGroupName</key>
    <string>{{GROUP_IDENTIFIER}}</string>
    <key>ScreencastExtensionBundleId</key>
    <string>{{BUNDLE_IDENTIFIER}}.MembraneBroadcastSampleHandler</string>

    Replace {{GROUP_IDENTIFIER}} and {{BUNDLE_IDENTIFIER}} with your group identifier and bundle identifier respectively.

  10. Rebuild the app and enjoy!

Example

We strongly recommend checking out our example app that implements a basic video room client. To run the app:

  1. Go to Membrane's server demo repo: https://github.com/membraneframework/membrane_videoroom. Follow the instructions there to set up and run the demo server.
  2. Clone the repo
  3. $ cd example
    $ yarn
  4. In App.ts, replace the server URL with your server's URL.
  5. Run yarn run android or yarn run ios, or run the project from Android Studio / Xcode just like any other RN project. Note that simulators won't work; you have to test on a real device for the camera and screensharing to run.

Usage

Important note: since version 7.4.0, you must call the initializeWebRTC() function once in your app before using any other functionality.
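
A minimal sketch (assuming initializeWebRTC is exported from the package root, like the hooks below):

import { initializeWebRTC } from '@jellyfish-dev/react-native-membrane-webrtc';

// Call once at app startup, before any other library calls (required since 7.4.0).
initializeWebRTC();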

Start by connecting to the Membrane WebRTC server. Use the useWebRTC() hook to manage the connection:

const { connect, disconnect, error } = useWebRTC();

Connect to the server and join the room using the connect function. Use user metadata to pass things like usernames etc. to the server. You can also pass connection params that will be sent to the socket when establishing the connection.

const startServerConnection = async () => {
  try {
    await connect('https://example.com', "Annie's room", {
      endpointMetadata: {
        displayName: 'Annie',
      },
      connectionParams: {
        token: 'TOKEN',
      },
    });
  } catch (e) {
    console.log('error!', e);
  }
};

Remember to gracefully disconnect from the server using the disconnect() function:

const stopServerConnection = async () => {
  await disconnect();
};

Also handle errors properly, for example when the internet connection fails or the server is down:

useEffect(() => {
  if (error) console.log('error: ', error);
}, [error]);

Start the device's camera and microphone using the useCamera() and useMicrophone() hooks. Use the videoTrackMetadata and audioTrackMetadata options to send metadata about the tracks (for example, whether it's a camera or screencast track).

const { startCamera } = useCamera();
const { startMicrophone } = useMicrophone();

await startCamera({
  quality: VideoQuality.HD_169,
  videoTrackMetadata: { active: true, type: 'camera' },
});
await startMicrophone({ audioTrackMetadata: { active: true, type: 'audio' } });

For more options and functions to control the camera and microphone, see the API documentation.

Once you have the connection set up, use the useEndpoints() hook to track the other endpoints in the room. One of the endpoints will be the local participant (the one using the device). When endpoints are added or removed because a user joins or leaves the room, the list updates automatically. Simply call the hook like this:

const endpoints = useEndpoints();

When you have the endpoints, all that's left is to render their video tracks. Use the <VideoRendererView /> component like this:

{endpoint.videoTracks.map((track) => (
  <VideoRendererView key={track.id} trackId={track.id} />
))}

You can style the views to lay them out however you'd like; basic animations should work too.

There are also some simple hooks for toggling the camera, microphone, and screensharing. Use them like this:

const { isCameraOn, toggleCamera } = useCameraState();
const { isMicrophoneOn, toggleMicrophone } = useMicrophoneState();
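
For example, a minimal sketch of wiring these hooks to buttons (the MediaControls component name is ours; Button and View come from react-native):

import React from 'react';
import { Button, View } from 'react-native';
import {
  useCameraState,
  useMicrophoneState,
} from '@jellyfish-dev/react-native-membrane-webrtc';

// Minimal controls that flip the camera and microphone on and off.
export const MediaControls = () => {
  const { isCameraOn, toggleCamera } = useCameraState();
  const { isMicrophoneOn, toggleMicrophone } = useMicrophoneState();

  return (
    <View>
      <Button title={isCameraOn ? 'Turn camera off' : 'Turn camera on'} onPress={toggleCamera} />
      <Button title={isMicrophoneOn ? 'Mute' : 'Unmute'} onPress={toggleMicrophone} />
    </View>
  );
};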

For screencasting, use the useScreencast() hook. The local endpoint will have a new video track, which you can render just like an ordinary video track:

const { isScreencastOn, toggleScreencast } = useScreencast();
...
toggleScreencast({screencastMetadata: { displayName: "Annie's desktop" }});

Use track metadata to differentiate between video and screencast tracks.
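
For instance, a sketch of splitting tracks by the metadata they were published with. This assumes each track exposes that metadata as track.metadata, which may not match the actual API shape; check the API documentation:

// Assumes tracks published with videoTrackMetadata { type: 'camera' } vs. screencast metadata.
const cameraTracks = endpoint.videoTracks.filter((track) => track.metadata?.type === 'camera');
const screencastTracks = endpoint.videoTracks.filter((track) => track.metadata?.type !== 'camera');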

Developing

Run ./scripts/init.sh in the main directory to install swift-format and set up git hooks.

To release a new version of the lib, run yarn release and follow the prompts to bump the version, create tags and commits, and upload to npm. To release a new version of the example app on Android: install fastlane, get the upload key password and Firebase auth JSON from the devs, and update ~/.gradle/gradle.properties like this:

MEMBRANE_UPLOAD_STORE_FILE=my-upload-key.keystore
MEMBRANE_UPLOAD_KEY_ALIAS=my-key-alias
MEMBRANE_UPLOAD_STORE_PASSWORD=********
MEMBRANE_UPLOAD_KEY_PASSWORD=********

Then run yarn releaseAppAndroid from the main directory.

To release a new version of the example app on iOS: install fastlane, get added to the swmansion App Store account, and run yarn releaseAppIos from the main directory.

Pro tip: when developing, set the backend URL in .env.development.

Credits

This project has been built and is maintained thanks to the support from dscout and Software Mansion.