OpenProctor
An open-source proctoring SDK with face detection and media stream uploading.
This repository contains a proctoring SDK that supports:
- Capturing camera, microphone, and screen share streams.
- Real-time face detection using TensorFlow.js BlazeFace.
- Streaming and uploading media streams to a server for analysis and storage.
Features
- Camera, Microphone, and Screen Sharing: Capture video, audio, and screen streams.
- Real-time Face Detection: Detect faces from the camera stream using BlazeFace.
- Proctoring Setup: Control when to start and stop media streams and face detection.
- Stream Uploading: Send media streams (video, audio, screen) to a server for storage and analysis.
Table of Contents
- Get Started
- Using the SDK
- Face Detection
- Uploading Streams to Server
- Contribution Guidelines
- License
Get Started
Prerequisites
- Node.js (version 14 or later)
- npm or yarn
Clone the Repository
git clone https://github.com/your-username/proctoring-sdk.git
cd proctoring-sdk
Install Dependencies
Install the dependencies using npm or yarn:
npm install
or
yarn install
Start the Development Server
npm run dev
This will start the Next.js development server. Visit http://localhost:3000 in your browser to view the application.
Using the SDK
Importing the Proctoring SDK
The SDK provides functions to set up the camera, microphone, screen sharing, and face detection. Use the alias @/lib to import the SDK components in your project:
import { Proctor } from '@/lib';
Starting Proctoring
To start capturing the camera, microphone, and screen-share streams and to run face detection:
Proctor.setup({
  onVideoFrame: (cameraStream: MediaStream) => {
    // Handle the camera stream (e.g., attach it to a video element)
  },
  videoElement: videoRef.current,   // The video element for face detection
  canvasElement: canvasRef.current, // The canvas element to draw face boxes
  onAudioStream: (audioStream: MediaStream) => {
    // Handle the audio stream
  },
  onScreenStream: (screenStream: MediaStream) => {
    // Handle the screen-sharing stream
  },
  onTabSwitch: () => {
    // Handle tab-switch detection
    alert('You switched tabs! Please return to the test.');
  }
});
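Because the setup options take DOM elements, the SDK is typically wired up inside a React component. Below is a minimal sketch assuming a React app; the component name, refs, and cleanup behavior are illustrative, not part of the SDK:

import { useEffect, useRef } from 'react';
import { Proctor } from '@/lib';

function ProctoredExam() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useEffect(() => {
    Proctor.setup({
      onVideoFrame: (cameraStream: MediaStream) => {
        // Show the candidate their own camera feed
        if (videoRef.current) videoRef.current.srcObject = cameraStream;
      },
      videoElement: videoRef.current,
      canvasElement: canvasRef.current,
      onTabSwitch: () => alert('You switched tabs! Please return to the test.'),
    });
    // Stop all streams when the component unmounts
    return () => {
      Proctor.stopCamera();
      Proctor.stopMicrophone();
      Proctor.stopScreenShare();
    };
  }, []);

  return (
    <div style={{ position: 'relative' }}>
      <video ref={videoRef} autoPlay muted playsInline />
      <canvas ref={canvasRef} style={{ position: 'absolute', top: 0, left: 0 }} />
    </div>
  );
}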
Stopping Proctoring
To stop all active streams and face detection:
Proctor.stopCamera();
Proctor.stopMicrophone();
Proctor.stopScreenShare();
Face Detection
This SDK uses TensorFlow.js BlazeFace for real-time face detection. The face detection model is loaded automatically when the camera stream starts. The results (face bounding boxes) are drawn on a canvas overlay.
How It Works
- The BlazeFace model is loaded and used to detect faces in real-time from the camera feed.
- Face bounding boxes are drawn on a canvas overlaid on the video stream.
Key Code for Face Detection
Face detection is set up when the camera stream is initialized:
import { FaceDetection } from '@/lib/faceDetection';

Proctor.setup({
  onVideoFrame: (cameraStream: MediaStream) => {
    // Attach the camera stream to the video element and start face detection
    FaceDetection.detectFaces(videoRef.current, canvasRef.current);
  },
  videoElement: videoRef.current,  // The video element for face detection
  canvasElement: canvasRef.current // The canvas element to draw face boxes
});
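For reference, the detection loop inside FaceDetection follows the standard BlazeFace pattern: load the model, estimate faces on each frame, and draw the bounding boxes on the canvas. The sketch below illustrates that general pattern; it is not the SDK's exact source:

import * as blazeface from '@tensorflow-models/blazeface';
import '@tensorflow/tfjs';

async function detectLoop(video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  const model = await blazeface.load();
  const ctx = canvas.getContext('2d')!;

  const tick = async () => {
    // returnTensors: false yields plain number arrays instead of tensors
    const predictions = await model.estimateFaces(video, false);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (const p of predictions) {
      const [x1, y1] = p.topLeft as [number, number];
      const [x2, y2] = p.bottomRight as [number, number];
      // Draw a bounding box around each detected face
      ctx.strokeStyle = 'red';
      ctx.lineWidth = 2;
      ctx.strokeRect(x1, y1, x2 - x1, y2 - y1);
    }
    requestAnimationFrame(tick);
  };
  tick();
}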
Uploading Streams to Server
The SDK includes a StreamUploader class that captures and uploads the media streams (camera, microphone, and screen) to a server for analysis and storage.
How to Use the StreamUploader
import { StreamUploader } from '@/lib/streamUploader';

// Initialize the uploader
const uploader = new StreamUploader({
  videoStream: cameraStream,
  audioStream: audioStream,
  screenStream: screenStream,
  serverUrl: 'https://your-server-url/upload' // Server endpoint to upload streams
});

// Start uploading the streams
uploader.start();
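Under the hood, an uploader like this is usually built on the MediaRecorder API: each stream is recorded in timed chunks, and each chunk is POSTed to the server as multipart form data. The sketch below illustrates that pattern under those assumptions; it is not the SDK's exact implementation:

function uploadStream(stream: MediaStream, serverUrl: string, label: string) {
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });

  // Fires once per time slice with the recorded chunk
  recorder.ondataavailable = async (event: BlobEvent) => {
    if (event.data.size === 0) return;
    const form = new FormData();
    // Field name 'file' matches the Express example below (upload.single('file'))
    form.append('file', event.data, `${label}-${Date.now()}.webm`);
    await fetch(serverUrl, { method: 'POST', body: form });
  };

  recorder.start(5000); // Emit a chunk every 5 seconds
  return recorder;      // Call recorder.stop() to finish
}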
Server-Side Example (Express.js)
The following is an example of an Express.js server to handle media uploads:
const express = require('express');
const multer = require('multer');
const path = require('path');
const fs = require('fs');

const app = express();
const port = 3000;

const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    const uploadDir = path.join(__dirname, 'uploads');
    if (!fs.existsSync(uploadDir)) {
      fs.mkdirSync(uploadDir);
    }
    cb(null, uploadDir);
  },
  filename: (req, file, cb) => {
    cb(null, file.originalname);
  }
});

const upload = multer({ storage });

app.post('/upload', upload.single('file'), (req, res) => {
  console.log(`File uploaded: ${req.file.originalname}`);
  res.status(200).send('File uploaded successfully.');
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
Contribution Guidelines
We welcome contributions to this project. Here’s how you can contribute:
1. Fork the Repository
Fork the project repository to your own GitHub account by clicking the Fork button at the top right of the repository page.
2. Clone Your Fork
Clone your fork to your local machine:
git clone https://github.com/your-username/proctoring-sdk.git
3. Create a Branch
Create a new branch for your feature or bug fix:
git checkout -b feature/new-feature
4. Make Your Changes
Make your changes in the codebase, and be sure to write unit tests for any new functionality.
5. Run Tests
Ensure that all tests pass before submitting your changes:
npm run test
6. Commit and Push
Commit your changes and push the branch to your fork:
git add .
git commit -m "Add new feature"
git push origin feature/new-feature
7. Submit a Pull Request
Submit a pull request to the main repository, and provide a detailed description of your changes.
License
This project is licensed under the GNU General Public License v3.0 (GPL-3.0).