
LocalVocal - AI assistant OBS Plugin

Introduction

LocalVocal is a live-streaming AI assistant plugin that lets you transcribe audio speech into text, locally on your machine, and perform various language processing functions on the text using AI / LLMs (Large Language Models). No GPU required, no cloud costs, no network and no downtime! Privacy first - all data stays on your machine.

Current Features:

  • Transcribe audio to text in real time in 100 languages
  • Display captions on screen using text sources
  • Send captions to a file (which can be read by external sources)
  • Send captions on an RTMP stream to e.g. YouTube, Twitch

Roadmap:

  • Remove unwanted words from the transcription
  • Translate captions in real time to 50 languages
  • Summarize the text and show "highlights" on screen
  • Detect key moments in the stream and allow triggering events (like replay)
  • Detect emotions/sentiment and allow triggering events (like changing the scene or colors etc.)

Internally, the plugin runs a neural network (OpenAI's Whisper) locally to transcribe speech in real time and produce captions.

It uses ggerganov's Whisper.cpp project to run the Whisper network very efficiently on CPUs and GPUs.

Check out our other plugins:

  • Background Removal removes background from webcam without a green screen.
  • 🚧 Experimental 🚧 CleanStream for real-time filler-word (uh, um) and profanity removal from a live audio stream
  • URL/API Source that allows fetching live data from an API and displaying it in OBS.

If you like this work, which is given to you completely free of charge, please consider supporting it on GitHub: https://github.com/sponsors/royshil

Download

Check out the latest releases for downloads and install instructions.

Building

The plugin was built and tested on macOS (Intel & Apple Silicon), Windows and Linux.

Start by cloning this repo to a directory of your choice.

macOS

Using the CI pipeline scripts, you can build locally by calling the zsh build script. By default this builds a universal binary for both Intel and Apple Silicon. To build for a specific architecture, see .github/scripts/.build.zsh for the -arch options.

$ ./.github/scripts/build-macos -c Release

Install

If the script succeeds, the plugin files (e.g. obs-localvocal.plugin) will reside in the ./release/Release folder under the repository root. Copy the .plugin file to the OBS plugin directory, e.g. ~/Library/Application Support/obs-studio/plugins.
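As a sketch, the copy could look like this (the bundle name and OBS plugin path are assumptions based on the defaults above; adjust to your build):

```shell
# Paths assumed from the defaults above; adjust PLUGIN_SRC to the
# bundle your build actually produced.
PLUGIN_SRC=./release/Release/obs-localvocal.plugin
OBS_PLUGINS="$HOME/Library/Application Support/obs-studio/plugins"
mkdir -p "$OBS_PLUGINS"
# -R preserves the .plugin bundle's directory layout; the guard skips
# the copy if you have not built yet.
if [ -d "$PLUGIN_SRC" ]; then
  cp -R "$PLUGIN_SRC" "$OBS_PLUGINS/"
fi
```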

To get a .pkg installer file, run for example:

$ ./.github/scripts/package-macos -c Release

(Note that the outputs may end up in the Release folder rather than the install folder that package-macos expects; in that case, rename the folder from build_x86_64/Release to build_x86_64/install.)
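If that happens, a small helper can do the rename (a sketch; the build_x86_64 directory name is an assumption — on Apple Silicon the directory name may differ):

```shell
# Rename the Release output to the install folder that package-macos
# expects. The build directory is passed in since it varies by arch.
fix_layout() {
  build_dir="${1:-build_x86_64}"
  if [ -d "$build_dir/Release" ] && [ ! -d "$build_dir/install" ]; then
    mv "$build_dir/Release" "$build_dir/install"
  fi
}
fix_layout build_x86_64
```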

Linux (Ubuntu)

Use the CI scripts again:

$ ./.github/scripts/build-linux.sh

Windows

Use the CI scripts again, for example:

> .github/scripts/Build-Windows.ps1 -Target x64 -CMakeGenerator "Visual Studio 17 2022"

The build output will be in the ./release folder under the repository root. You can manually install the files into the OBS directory.

Building with CUDA support on Windows

To build with CUDA support on Windows, you need to install the CUDA toolkit from NVIDIA. The toolkit is available for download from NVIDIA's developer website.

After installing the CUDA toolkit, you need to set variables that point CMake to the CUDA toolkit installation directory. For example, if you installed the CUDA toolkit in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4, set CUDA_TOOLKIT_ROOT_DIR to that path and LOCALVOCAL_WITH_CUDA to ON when running .github/scripts/Build-Windows.ps1.

For example

> .github/scripts/Build-Windows.ps1 -Target x64 -ExtraCmakeFlags "-D LOCALVOCAL_WITH_CUDA=ON -D CUDA_TOOLKIT_ROOT_DIR='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4'"

You will need to copy a few CUDA .dll files next to the plugin .dll for it to run. The required .dll files (located in the bin folder of the CUDA toolkit installation directory) are:

  • cudart64_NN.dll
  • cublas64_NN.dll
  • cublasLt64_NN.dll

where NN is the CUDA version number. For example, if you have installed CUDA 11.4 then NN is likely 11.
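The copy step could be sketched like this from a Git Bash / MSYS shell (all paths and the _11 suffix are assumptions; verify the exact file names in your CUDA bin folder):

```shell
# Copies the CUDA runtime DLLs next to the plugin. DLL names follow the
# list above for CUDA 11.x; check your bin folder for the exact suffix.
copy_cuda_dlls() {
  # $1 = CUDA bin directory, $2 = plugin directory
  for dll in cudart64_11.dll cublas64_11.dll cublasLt64_11.dll; do
    [ -f "$1/$dll" ] && cp "$1/$dll" "$2/"
  done
  return 0
}
# Example invocation with assumed default install locations:
copy_cuda_dlls "/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.4/bin" \
               "/c/Program Files/obs-studio/obs-plugins/64bit"
```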