# DeepSpeech-examples

**Repository Path**: mirrors_mozilla/DeepSpeech-examples

## Basic Information

- **Project Name**: DeepSpeech-examples
- **Description**: Examples of how to use or integrate DeepSpeech
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: r0.9
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-08-28
- **Last Updated**: 2026-03-07

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

DeepSpeech 0.9.x Examples
=========================

These are various examples of how to use or integrate DeepSpeech using our packages. They are a good way to try out DeepSpeech before learning how it works in detail, as well as a source of inspiration for ways to integrate it into your application or to solve common tasks like voice activity detection (VAD) or microphone streaming.

Contributions are welcome!

**Note:** These examples target DeepSpeech **0.9.x** only. If you are using a different release, switch to the branch corresponding to that release:

* `v0.9.x `_
* `v0.8.x `_
* `v0.7.x `_
* `v0.6.x `_
* `master branch `_

**List of examples**

Python:
-------

* `Microphone VAD streaming `_
* `VAD transcriber `_
* `AutoSub `_

JavaScript:
-----------

* `FFMPEG VAD streaming `_
* `Node.JS microphone VAD streaming `_
* `Node.JS wav `_
* `Web Microphone Websocket streaming `_
* `Electron wav transcriber `_

Windows/C#:
-----------

* `.NET framework `_
* `Universal Windows Platform (UWP) `_

Java/Android:
-------------

* `mozilla/androidspeech library `_

Nim:
----

* `nim_mic_vad_streaming `_
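The transcription examples above all feed DeepSpeech audio as 16 kHz, mono, 16-bit PCM. As a minimal sketch of that input-preparation step (using only the Python standard library; the helper name `load_wav_samples` is hypothetical and not part of these examples, which convert the frames to a numpy `int16` buffer before calling the model's `stt()`):

```python
import struct
import wave

def load_wav_samples(path):
    """Read a WAV file and return (sample_rate, samples), where samples
    is a list of signed 16-bit integers. DeepSpeech models expect
    16 kHz mono 16-bit PCM, so reject anything that is not mono 16-bit."""
    with wave.open(path, "rb") as wf:
        if wf.getnchannels() != 1 or wf.getsampwidth() != 2:
            raise ValueError("expected 16-bit mono PCM audio")
        rate = wf.getframerate()
        frames = wf.readframes(wf.getnframes())
        # Each sample is a little-endian signed short ("<h").
        samples = list(struct.unpack("<%dh" % (len(frames) // 2), frames))
    return rate, samples
```

In the real examples this buffer would then be passed to the model, along the lines of `model.stt(numpy.array(samples, dtype=numpy.int16))`.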
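Several of the examples above combine DeepSpeech with voice activity detection, which operates on short fixed-length audio frames (a VAD such as webrtcvad accepts 10, 20, or 30 ms of audio per call). A sketch of that chunking step, assuming `samples` is a sequence of 16-bit samples (the `frame_generator` helper here is illustrative, not taken from the repository):

```python
def frame_generator(samples, sample_rate, frame_ms=20):
    """Yield successive fixed-length frames of audio samples.

    Each frame holds frame_ms milliseconds of audio at sample_rate Hz,
    which is the granularity a frame-based VAD classifies as
    speech or non-speech; a trailing partial frame is dropped."""
    frame_len = int(sample_rate * frame_ms / 1000)
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        yield samples[start:start + frame_len]
```

The streaming examples use this kind of segmentation to detect utterance boundaries and feed only voiced spans to the recognizer.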