# @locusx/pose-detection

v1.0.6

> Given an input video, the package can run body motion capture, face capture, gesture recognition, and emotion recognition. When motion capture is enabled, the other recognizers work from the motion-capture data.
## Install

Install the package:

```shell
npm install --save @locusx/pose-detection
```
Copy the static assets: the `inferenceModel` and `ort` directories ship inside the `@locusx/pose-detection` package and must be served from your project's static directory.

- Parcel project: copy `inferenceModel` and `ort` into the `dist` directory.
- Vite project: put the `worker` directory under `public`, then place `inferenceModel` and `ort` inside that `worker` directory.
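For a Vite project, the copy step above can be scripted so it survives a clean reinstall. The sketch below is a hypothetical helper, not part of the package's published API; the `node_modules` source path and the `public/worker` destination are assumptions based on the layout described above.

```javascript
// Hypothetical asset-copy helper (not part of @locusx/pose-detection):
// copies the inferenceModel and ort directories into a Vite project's
// public/worker directory so they are served as static assets.
import { cpSync, mkdirSync } from "node:fs"
import { join } from "node:path"

function copyAssets(pkgDir, outDir) {
  // ensure the destination directory exists, then copy each asset tree
  mkdirSync(outDir, { recursive: true })
  for (const dir of ["inferenceModel", "ort"]) {
    cpSync(join(pkgDir, dir), join(outDir, dir), { recursive: true })
  }
}

// Typical call for a Vite project (paths assumed):
// copyAssets("node_modules/@locusx/pose-detection", "public/worker")
```

Running this from an npm `postinstall` script would keep `public/worker` in sync after every install.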
## Usage

**Open the camera.** `openCamera(video)` starts the camera. Pass the `video` DOM element that will display the stream; it returns a promise that resolves to `[width, height]`, the dimensions of the video frame.

```javascript
import { openCamera } from "@locusx/pose-detection"

const video = document.getElementById("video")
openCamera(video).then(([width, height]) => {
  // [width, height] is the size of the camera frame
})
```
**Create the 3D scene.** `createThreeD(three)` creates the initial 3D scene. Pass a `div` DOM element; it returns the scene object.

```javascript
import { createThreeD } from "@locusx/pose-detection"

const three = document.getElementById("three")
let scene = createThreeD(three)
```
**Load a model.** `loadModel(url, scene)` imports a model into the 3D scene. Pass the model URL and the `scene` object; it returns a promise that resolves to the model object.

```javascript
import { loadModel, createThreeD } from "@locusx/pose-detection"

let scene = createThreeD(three)
let modelPromise = loadModel('./3DModel/trump_T.glb', scene)
modelPromise.then(model => {
  // model is the loaded 3D model object
})
```
**Recognition.** Create an `Identify` instance and call `identify.completeIdentify(video, size, options)` with the `video` element, the `[width, height]` array returned by `openCamera`, and an options object (see the full example below). Then enable the recognizers you need via the status flags:

```javascript
import { Identify } from "@locusx/pose-detection"

let identify = new Identify()
identify.completeIdentify(video, size, options)

identify.bodyStatus = true    // body motion capture
identify.faceStatus = true    // face capture
identify.handStatus = true    // gesture recognition
identify.emotionStatus = true // emotion recognition
```
Full example (Vue 3):

```vue
<template>
  <div class="PoseDetection_wrap">
    <video autoplay ref="video"></video>
    <canvas ref="canvas1" style="border:1px solid #fff;background-color: #000;"></canvas>
    <canvas ref="canvas2" style="border:1px solid #fff;background-color: #000;"></canvas>
  </div>
</template>

<script setup>
import { ref, onMounted } from "vue"
import { Identify, createThreeD, loadModel, openCamera } from "@locusx/pose-detection"

const video = ref(null)
const canvas1 = ref(null)
const canvas2 = ref(null)

onMounted(() => {
  openCamera(video.value).then(res => {
    canvas1.value.width = 256 / 2
    canvas1.value.height = 256 / 2
    let ctx1 = canvas1.value.getContext('2d')
    canvas2.value.width = 256 / 2
    canvas2.value.height = 256 / 2
    let ctx2 = canvas2.value.getContext('2d')

    let identify = new Identify()
    identify.completeIdentify(video.value, res, {
      bodyOptions: {
        cb: () => {},
        x: 0,
        y: 0,
        width: res[0],
        height: res[1],
        model: null,
        drawCtx: ctx1
      },
      faceOptions: {
        cb: () => {},
        x: 220,
        y: 100,
        width: 256,
        height: 256,
        model: null,
        drawCtx: ctx2
      },
      emotionOptions: {
        cb: (res, time, describe) => {
          // console.log(describe)
        },
        x: 220,
        y: 100,
        width: 256,
        height: 256
      },
      handOptions: {
        cb: (res, time, describe) => {
          // console.log(describe)
        },
        x: [0, 416],
        y: [0, 0],
        width: [224, 224],
        height: [224, 224]
      }
    })

    identify.bodyStatus = true
    identify.faceStatus = true
    identify.handStatus = true
    identify.emotionStatus = true
  })
})
</script>

<style scoped lang="scss">
.PoseDetection_wrap {
  position: absolute;
  bottom: 20px;
  right: 20px;
  border: 1px solid #ccc;
  background: red;
}
</style>
```