[Open Source] gpt4all.unity - free GPT running on your machine



Demo of Gpt4All using Whisper for speech recognition and AC-Dialogue from Mix and Jam.

These are Unity3D bindings for gpt4all. They provide high-performance inference of large language models (LLMs) running on your local machine.

Main features:

  • Chat-based LLM that can be used for NPCs and virtual assistants
  • Models of different sizes for commercial and non-commercial use
  • Fast CPU based inference
  • Runs on the local user's device without an Internet connection
  • Free and open source

Feel free to use it in your projects:



I get the following errors:
Plugins: Failed to load 'C:/Users/Brandon/Downloads/gpt4all.unity-master/gpt4all.unity-master/Packages/com.gpt4all.unity/Plugins/Windows/libllama.dll' because one or more of its dependencies could not be loaded.

Plugins: Failed to load 'C:/Users/Brandon/Downloads/gpt4all.unity-master/gpt4all.unity-master/Packages/com.gpt4all.unity/Plugins/Windows/libllmodel.dll' because one or more of its dependencies could not be loaded.
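One way to surface which dependency is actually missing (a hedged sketch, not part of the package) is to try loading the DLL directly with Python's `ctypes`; the resulting `OSError` from the OS loader is often more specific than Unity's one-line message, and tools like `dumpbin /DEPENDENTS` can list the imports explicitly:

```python
import ctypes

def try_load(path):
    """Attempt to load a native library and report the loader's error.

    On Windows this surfaces the same 'dependencies could not be loaded'
    condition that Unity reports, but with the loader's own message.
    """
    try:
        ctypes.CDLL(path)
        return "loaded OK"
    except OSError as e:
        return f"failed: {e}"

# Hypothetical path for illustration:
print(try_load("Packages/com.gpt4all.unity/Plugins/Windows/libllama.dll"))
```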


Could you try downloading the latest master and see if a recent update fixed that?

That seems to have fixed it! I'm excited to play around with this. Thanks!

Hello, and thanks for your efforts! I'm trying to run the sample scene and get this error in Unity 2022.2.1:

DllNotFoundException: llmodel assembly:<unknown assembly> type:<unknown type> member:(null)
Gpt4All.LlmWrapper.InitFromPath (Gpt4All.LlmModelType type, System.String modelPath) (at /Users/yyyy/Documents/BGS/UnityProject/gpt4all.unity-master/Library/PackageCache/com.gpt4all.unity@9045baeed4/Scripts/LlmWrapper.cs:175)
  • I've tried several of the models you linked to. I also tried installing the package both from the Package Manager and as the downloaded project, and I set the model path both in the code and in the inspector for the LLM manager.

It seems that Unity struggles to import the built library. Are you running the Unity editor on Windows? What is your CPU model?

I was testing on a Mac; is it Windows-only?

No, it should support Mac. What is your CPU model and macOS version?

Attached screenshots; using Unity 2022.2.1.

9312893--1304117--Screenshot 2023-09-14 at 11.42.08.png

9312893--1304123--Screenshot 2023-09-14 at 11.44.51.png

9312893--1304126--Screenshot 2023-09-14 at 11.46.33.png

I think I realized the main issue: I'm getting these warnings, which really should be errors as well:

Plugins: Couldn't open /Users/yyy/Documents/BGS/AIShow/gpt4all.unity-master2022/Library/PackageCache/com.gpt4all.unity@9045baeed4/Plugins/MacOS/libllama.dylib, error: dlopen(/Users/yyy/Documents/BGS/AIShow/gpt4all.unity-master2022/Library/PackageCache/com.gpt4all.unity@9045baeed4/Plugins/MacOS/libllama.dylib, 2): Symbol not found: __ZNKSt3__115basic_stringbufIcNS_11char_traitsIcEENS_9allocatorIcEEE3strEv
  Referenced from: /Users/yyy/Documents/BGS/AIShow/gpt4all.unity-master2022/Library/PackageCache/com.gpt4all.unity@9045baeed4/Plugins/MacOS/libllama.dylib (which was built for Mac OS X 12.3)
  Expected in: /usr/lib/libc++.1.dylib

Plugins: Couldn't open /Users/yyy/Documents/BGS/AIShow/gpt4all.unity-master2022/Library/PackageCache/com.gpt4all.unity@9045baeed4/Plugins/MacOS/llmodel.dylib, error: dlopen(/Users/yyy/Documents/BGS/AIShow/gpt4all.unity-master2022/Library/PackageCache/com.gpt4all.unity@9045baeed4/Plugins/MacOS/llmodel.dylib, 2): Library not loaded: @rpath/libllama.dylib

  Referenced from: /Users/yyy/Documents/BGS/AIShow/gpt4all.unity-master2022/Library/PackageCache/com.gpt4all.unity@9045baeed4/Plugins/MacOS/llmodel.dylib
  Reason: image not found

So the main issue seems to be that libllama.dylib was built for Mac OS X 12.3.
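Since the dylib was built against Mac OS X 12.3, a quick hedged check (the minimum version here is taken from the warning above; function names are illustrative) is to compare the running macOS version against that build target before loading:

```python
import platform

MIN_MACOS = (12, 3)  # libllama.dylib in this thread was built for Mac OS X 12.3

def version_tuple(v: str) -> tuple:
    """Turn '12.6.1' into (12, 6, 1) for ordered comparison."""
    return tuple(int(p) for p in v.split(".") if p)

def macos_supported(current: str, minimum=MIN_MACOS) -> bool:
    return version_tuple(current) >= minimum

# platform.mac_ver()[0] is an empty string off macOS, so guard the sketch:
ver = platform.mac_ver()[0]
if ver:
    if macos_supported(ver):
        print("macOS version is new enough for the prebuilt dylib")
    else:
        print(f"macOS {ver} is older than 12.3; update the OS or rebuild the dylib")
```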

ChatGPT needs a database behind it; I'm curious how this is an offline product. Can you elaborate?

Yes, you have to download the models (there are several linked in the instructions); each is around 3-4 GB.
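The models are ordinary multi-gigabyte files on disk, which is why no network or database is needed at runtime. A hedged sketch (the path and threshold are illustrative, not from the package) for sanity-checking a download before pointing the manager at it, since a truncated file is a common cause of load failures:

```python
import os

EXPECTED_MIN_BYTES = 3 * 1024**3  # the models in this thread are roughly 3-4 GB

def model_looks_complete(path: str, min_bytes: int = EXPECTED_MIN_BYTES) -> bool:
    """Check that the model file exists and is at least the expected size,
    so an interrupted download is caught before the native loader sees it."""
    return os.path.isfile(path) and os.path.getsize(path) >= min_bytes

# Hypothetical model path for illustration:
print(model_looks_complete(os.path.expanduser("~/models/ggml-gpt4all-j.bin")))
```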


Yeah, your macOS version is probably too old. Sadly, I can't confirm it, but an update might fix this issue.

After updating to the latest macOS, it now works!
- However, when I type something, the response takes 20-40 seconds. I tried lowering the max tokens to predict and the context window to 256, but that didn't help much. Is there something else I should try, or is it just a matter of needing a beefier PC?
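For a rough sense of why shrinking the prediction limit helps less than expected: latency is usually prompt ingestion plus per-token generation, both bounded by the CPU's tokens-per-second. A hedged back-of-the-envelope sketch (all rates here are illustrative, not measured from this package):

```python
def estimate_response_seconds(prompt_tokens, predict_tokens, eval_tps, gen_tps):
    """Total latency ~= time to ingest the prompt + time to generate the reply."""
    return prompt_tokens / eval_tps + predict_tokens / gen_tps

# Illustrative numbers for an older CPU: lowering n_predict from 256 to 64
# only trims the generation term; the prompt-ingestion term is unchanged.
slow = estimate_response_seconds(prompt_tokens=200, predict_tokens=256, eval_tps=20, gen_tps=10)
fast = estimate_response_seconds(prompt_tokens=200, predict_tokens=64, eval_tps=20, gen_tps=10)
print(round(slow, 1), round(fast, 1))
```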

I tried to add two LlmManagers so that the AIs could talk to each other; however, I get the error:

SystemException: Only one instance of Llm is supported!

Is there some hacky way to simulate two or more distinct people at once?

EDIT: Never mind the second part; I just told the AI to write a movie script between two people, and that works for that purpose.
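Since the package allows only one Llm instance, the movie-script trick generalizes: keep the single model and interleave both characters' turns in one shared prompt, asking for one completion at a time. A hedged sketch of the prompt formatting (the names and transcript format are made up, not the package's API):

```python
def build_dialogue_prompt(characters, history, next_speaker):
    """Format a shared transcript so a single LLM instance can play
    multiple distinct characters, completing one turn at a time."""
    lines = [f"A conversation between {' and '.join(characters)}."]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"{next_speaker}:")  # the model completes this line
    return "\n".join(lines)

prompt = build_dialogue_prompt(
    characters=["Ada", "Brick"],
    history=[("Ada", "Did you hear that noise?"), ("Brick", "Just the wind.")],
    next_speaker="Ada",
)
print(prompt)
```

Feeding the model's completion back into `history` and alternating `next_speaker` keeps the two personas distinct without a second Llm instance.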

Hi! I'm now trying to run it on Windows 10, but when I click Play the editor crashes.

========== OUTPUTTING STACK TRACE ==================

0x00007FF88FC3B587 (llama) ggml_init
0x00007FF8CAF46C90 (llmodel) gptj_model_load
0x00007FF8CAF48262 (llmodel) GPTJ::loadModel
0x00007FF8CAF501AD (llmodel) llmodel_loadModel
0x000001FCB1FC5AFD (Mono JIT Code) (wrapper managed-to-native) Gpt4All.Native.LlmNative:llmodel_loadModel (intptr,string)
0x000001FCB1FC0B13 (Mono JIT Code) [.\Packages\com.gpt4all.unity\Scripts\LlmWrapper.cs:200] Gpt4All.LlmWrapper:InitFromPath (Gpt4All.LlmModelType,string)
0x000001FCB1FBFE6B (Mono JIT Code) [.\Packages\com.gpt4all.unity\Scripts\LlmWrapper.cs:212] Gpt4All.LlmWrapper/<>c__DisplayClass28_0:<InitFromPathAsync>b__0 ()
0x000001FCB1FBFD75 (Mono JIT Code) System.Threading.Tasks.Task`1<TResult_REF>:InnerInvoke ()
0x000001FB223F4C14 (Mono JIT Code) System.Threading.Tasks.Task:Execute ()
0x000001FB223F4BC3 (Mono JIT Code) System.Threading.Tasks.Task:ExecutionContextCallback (object)
0x000001FB223DDD0E (Mono JIT Code) System.Threading.ExecutionContext:RunInternal (System.Threading.ExecutionContext,System.Threading.ContextCallback,object,bool)
0x000001FB223DD49B (Mono JIT Code) System.Threading.ExecutionContext:Run (System.Threading.ExecutionContext,System.Threading.ContextCallback,object,bool)
0x000001FB223F488B (Mono JIT Code) System.Threading.Tasks.Task:ExecuteWithThreadLocal (System.Threading.Tasks.Task&)
0x000001FB223F459B (Mono JIT Code) System.Threading.Tasks.Task:ExecuteEntry (bool)
0x000001FB223F441B (Mono JIT Code) System.Threading.Tasks.Task:System.Threading.IThreadPoolWorkItem.ExecuteWorkItem ()
0x000001FB223DA9FA (Mono JIT Code) System.Threading.ThreadPoolWorkQueue:Dispatch ()
0x000001FB223D9F7B (Mono JIT Code) System.Threading._ThreadPoolWaitCallback:PerformWaitCallback ()
0x000001FB223DA055 (Mono JIT Code) (wrapper runtime-invoke) <Module>:runtime_invoke_bool (object,intptr,intptr,intptr)
0x00007FF896FAE274 (mono-2.0-bdwgc) [C:\build\output\Unity-Technologies\mono\mono\mini\mini-runtime.c:3445] mono_jit_runtime_invoke
0x00007FF896EEEB74 (mono-2.0-bdwgc) [C:\build\output\Unity-Technologies\mono\mono\metadata\object.c:3066] do_runtime_invoke
0x00007FF896F2BB27 (mono-2.0-bdwgc) [C:\build\output\Unity-Technologies\mono\mono\metadata\threadpool.c:386] worker_callback
0x00007FF896F2EBC0 (mono-2.0-bdwgc) [C:\build\output\Unity-Technologies\mono\mono\metadata\threadpool-worker-default.c:502] worker_thread
0x00007FF896F1D37B (mono-2.0-bdwgc) [C:\build\output\Unity-Technologies\mono\mono\metadata\threads.c:1272] start_wrapper_internal
0x00007FF896F1D556 (mono-2.0-bdwgc) [C:\build\output\Unity-Technologies\mono\mono\metadata\threads.c:1348] start_wrapper
0x00007FF957857034 (KERNEL32) BaseThreadInitThunk
0x00007FF957BA2651 (ntdll) RtlUserThreadStart

========== END OF STACKTRACE ===========

A crash has been intercepted by the crash handler. For call stack and other details, see the latest crash report generated in:
* C:/Users/user/AppData/Local/Temp/Unity/Editor/Crashes
  • Uploading the crash.dmp (not really sure what to do with it)
  • tried with Unity 2022.3.9f1 and Unity 2023.1.9f1
  • The crash only occurs with the model file present; without it, there's just a NullReferenceException

9338846--1306595--crash.rar (142 KB)
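A native crash inside `ggml_init` during `gptj_model_load` often means the native code is parsing a model file it can't handle. One hedged precaution before handing a path to the plugin (the `0x67676d6c` "ggml" magic here is an assumption based on ggml's legacy file format and may not match every model build) is to check the file header from managed or script code, where a bad file can be rejected gracefully instead of crashing the editor:

```python
import struct

GGML_MAGIC = 0x67676d6c  # assumed legacy ggml magic number, ASCII "ggml"

def has_ggml_magic(path: str) -> bool:
    """Read the first 4 bytes and compare against the assumed ggml magic,
    so an incompatible or truncated file is rejected before the native
    loader can take down the process."""
    try:
        with open(path, "rb") as f:
            (magic,) = struct.unpack("<I", f.read(4))
        return magic == GGML_MAGIC
    except (OSError, struct.error):
        return False
```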