
@kne-components/speech-text

v1.1.0

Downloads: 8

speech-text

Installation

npm i --save @kne-components/speech-text

Example (fullscreen)

Example styles

.ant-card {
  border-color: black;
  text-align: center;
  width: 200px;
}

Example code

  • Recording file upload recognition
  • _SpeechText(@kne/current-lib_speech-text)[import * as _SpeechText from "@kne-components/speech-text"],antd(antd)
const {default: speech} = _SpeechText;
const {Button, Alert, Flex} = antd;
const {useState, useEffect, useRef} = React;

const BaseExample = () => {
    const [message, setMessage] = useState({type: 'info', message: 'Not started'});
    const [recording, setRecording] = useState(false);
    const recordRef = useRef(null);
    useEffect(() => {
        // Initialize the recorder; url is the upload/recognition endpoint.
        recordRef.current = speech({url: 'https://ct.deeperagi.com/action/papi/ai/vCMA01/uploadWavFile'});
    }, []);
    return <Flex vertical gap={10}>
        <Alert type={message.type} message={message.message}/>
        <div>
            <Button onClick={() => {
                recordRef.current.then(async ({start, stop}) => {
                    if (recording) {
                        // Stop recording, upload the audio and wait for the result.
                        setMessage({type: 'warning', message: 'Recognizing, please wait'});
                        const {data} = await stop();
                        if (data.code === 200) {
                            setMessage({type: 'success', message: data.message || 'No speech content recognized'});
                        } else {
                            setMessage({type: 'error', message: 'Conversion error'});
                        }
                    } else {
                        setMessage({type: 'warning', message: 'Speech recognition started'});
                        start();
                    }
                    setRecording(!recording);
                });
            }}>{recording ? 'Recording' : 'Click to start'}</Button>
        </div>
    </Flex>;
};

render(<BaseExample/>);
  • Real-time speech recognition
  • _SpeechText(@kne/current-lib_speech-text)[import * as _SpeechText from "@kne-components/speech-text"],antd(antd),_axios(axios)
const {speechTextRealTime} = _SpeechText;
const {Button, Alert, Flex} = antd;
const {default: axios} = _axios;
const {useState, useEffect, useRef} = React;

const BaseExample = () => {
    const [message, setMessage] = useState({type: 'info', message: 'Not started'});
    const [recording, setRecording] = useState(false);
    const recordRef = useRef(null);
    useEffect(() => {
        recordRef.current = speechTextRealTime({
            getToken: async () => {
                try {
                    const {data} = await axios({
                        url: 'https://ct.deeperagi.com/action/papi/ai/vCMA02/createToken',
                        method: 'POST',
                        data: JSON.stringify({
                            "avgtype": "11111"
                        }),
                        headers: {
                            'content-type': 'application/json'
                        }
                    });
                    return {
                        token: data.token, appKey: data.appKey
                    };
                } catch (e) {
                    return {
                        "appKey": "TYcsiL5CZb9hd9DR", "token": "e80b7d7f6f054f91a79a14a67cb7f34c"
                    };
                }
            }, onChange: ({message}) => {
                setMessage({type: 'success', message});
            }
        });
    }, []);

    return <Flex vertical gap={10}>
        <Alert type={message.type} message={message.message}/>
        <div>
            <Button onClick={() => {
                recordRef.current.then(async ({start, stop}) => {
                    if (recording) {
                        setMessage({type: 'warning', message: 'Recognizing, please wait'});
                        await stop();
                        setMessage({type: 'info', message: 'Recognition finished'});
                    } else {
                        setMessage({type: 'warning', message: 'Speech recognition started'});
                        start();
                    }
                    setRecording(!recording);
                });
            }}>{recording ? 'Recording' : 'Click to start'}</Button>
        </div>
    </Flex>;
};

render(<BaseExample/>);

API

Default export speech(options): Promise

Speech recognition via uploaded audio file

example:

const {start, stop} = await speech(options);

options:Object

| Property | Description | Type | Default |
|----------|-------------|------|---------|
| url | Target endpoint for upload-based speech recognition | string | - |

Start recording start(): Promise

example:

await start();

Stop recording stop(): Promise

example:

const response = await stop();
const {code, message} = response.data;

| Property | Description | Type | Default |
|----------|-------------|------|---------|
| code | Status code returned by the backend; 200 indicates success | number | - |
| message | Speech-to-text result | string | - |
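The response handling shown in the first demo can be factored into a small helper that maps a `stop()` response to an alert state. This is a sketch based on the `code`/`message` table above; the helper name and the message texts are illustrative, not part of the library:

```javascript
// Map a stop() response to an alert descriptor, per the table above:
// code 200 means success and message carries the transcription result.
// toAlert is a hypothetical helper, not part of the library API.
function toAlert(response) {
  const { code, message } = response.data;
  if (code === 200) {
    return { type: 'success', message: message || 'No speech content recognized' };
  }
  return { type: 'error', message: 'Conversion error' };
}

// Example:
// toAlert({data: {code: 200, message: 'hello'}})
// → {type: 'success', message: 'hello'}
```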

speechTextRealTime(options):Promise

Real-time speech recognition

example:

const {start, stop} = await speechTextRealTime(options);

options:Object

| Property | Description | Type | Default |
|----------|-------------|------|---------|
| getToken | Method to obtain a token: getToken(): {token, appKey} | function | - |
| onChange | Callback fired when the recognized text changes | function | ({message}) => {console.log(message);} |
| getGatewayUrl | Method to obtain the WebSocket url: getGatewayUrl({token}): url; the token parameter is available here | function | - |
| onComplete | Callback fired when recording ends | function | - |
| url | Url used to save the recording file | string | - |
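Putting the table together, a complete options object might look like the sketch below. The `/api/speech/token` endpoint and its response shape are assumptions for illustration; only the option names and their signatures come from the table above:

```javascript
// Sketch of a speechTextRealTime options object (fields per the table above).
// The '/api/speech/token' endpoint and its response fields are hypothetical.
const options = {
  getToken: async () => {
    // Exchange a request to your own backend for a temporary token.
    const res = await fetch('/api/speech/token', { method: 'POST' });
    const data = await res.json();
    return { token: data.token, appKey: data.appKey };
  },
  onChange: ({ message }) => {
    // Fired whenever the recognized text changes.
    console.log('partial transcript:', message);
  },
  onComplete: ({ file, taskId, messageId, message, chunks }) => {
    // Fired when recording ends; message holds the final transcript.
    console.log('final transcript:', message);
  },
};
```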

Start recording start(): Promise

example:

await start({
    getToken: () => {
    },
    onChange: ({message}) => {
    },
    onComplete: ({file, taskId, messageId, message, chunks}) => {
    }
});

Stop recording stop(): Promise

example:

await stop();