Hi all. I’m excited to share that our Muse Chat team is hosting virtual office hours on Dec 11th to answer your questions and to help anyone interested in joining our beta try out the new features.
In case you missed it - we’re running a FREE beta for new features coming to Muse Chat: the ability to run commands for you (aka agentic behavior) and improved code generation. You don’t need a Muse subscription to participate, and if you haven’t been invited to the beta yet, you can sign up for access here.
How to participate in office hours:
We’ll open up the office hours Discussions board on December 10th, and you can submit your questions by 3pm CET on December 11th. We’ll be able to respond to you by the end of the day.
We look forward to answering your questions - whether they’re about how to set up the beta package, what kinds of repetitive tasks you can automate with Muse Chat’s new beta features, or whatever else is top of mind.
Chiming in to let you all know that the team and I are all so excited to answer your questions, so please share them in the Office Hour board per Liz’s instructions so we can address as many as possible!
I’m trying to understand the intended use case for this kind of thing.
Tools for this kind of scene construction range from automation “helpers” (like Microverse, which is very easy to use) to full-blown procedural (like Houdini, which is a steeper learning curve but arguably even more powerful).
Correct me if I’m wrong, but what’s being pictured here looks like a super simplistic stab at those same problems, but without a lot of robustness or features. However, this works via Muse. Is the fact one can use Muse to do this the whole point? It seems to me like I would use mature, robust tools for this and that I don’t really care whether I’m clicking a button in “Muse” or somewhere else to get it done.
But I’m probably missing the point. Can you elaborate on what’s uniquely valuable about this case of placing rocks with Muse?
More of a marketing question. Any plans to break off Unity Muse Chat into a separate product? $30 USD a month is very expensive and I simply don’t have a need for the other services in the Muse bundle.
Hi @stenbone. Great question. The team is currently re-evaluating Muse Chat distribution and pricing to help make it more accessible - more details are coming in 2025. For now, we’re offering the Muse Chat beta for free, which gets you access to both the Run and Code shortcuts (automating tasks in the Editor and also generating more elaborate code in Muse), along with the Ask shortcut - the standard Q&A. You can sign up at Unity to get unlimited generations while the beta is running.
Hi @StenBone! Being part of the IDE is something we are exploring, but could you elaborate on how you see that working and what the benefits would be to you?
Hey @Claytonious, what @LizC14 shared is just one example of what you can do with the Muse Chat beta (running commands in the Editor). You can use Chat to automate a variety of other tasks, like disabling all AudioSource components in the scene, generating a report of missing scripts, or removing all colliders from the scene.
Our overarching goal with the agentic feature is to automate repetitive tasks and free up time so creators can focus on the more creative side of game development. Our hope is to boost productivity and cut down on mindless tasks. If you want to try the beta so you can automate tasks, you can add your email to the list over at Unity and we can add you!
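For a sense of what this kind of Editor automation looks like in practice, here’s a minimal hand-written Unity Editor script that disables every AudioSource in the open scene - a sketch of the sort of repetitive task the Run shortcut could handle for you. To be clear, this is illustrative code I wrote myself, not actual Muse Chat output, and it assumes Unity 2020.1+ for the `includeInactive` overload of `FindObjectsOfType`:

```csharp
using UnityEngine;
using UnityEditor;

public static class SceneAudioTools
{
    // Editor menu command that disables every AudioSource component
    // in the currently open scene - an example of the kind of task
    // the agentic Run shortcut is meant to automate for you.
    [MenuItem("Tools/Disable All AudioSources In Scene")]
    public static void DisableAllAudioSources()
    {
        // Passing true also includes AudioSources on inactive objects.
        var sources = Object.FindObjectsOfType<AudioSource>(true);
        foreach (var source in sources)
        {
            // Register with the Undo system so the change is reversible.
            Undo.RecordObject(source, "Disable AudioSource");
            source.enabled = false;
        }
        Debug.Log($"Disabled {sources.Length} AudioSource component(s).");
    }
}
```

The appeal of the agentic approach is that you’d describe this task in plain language instead of writing and maintaining a one-off editor script like the above.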
Thanks for making it free during the beta so we can play with it and see how it stacks up against ChatGPT. Having said that, how good is its integration with, or knowledge of, Unity ML? For example, if I want it to pick a model and run it on a variable-size grid for tic-tac-toe, how would it respond?
Is Muse Chat specifically trained on knowledge about the Oculus SDK, compared to, say, ChatGPT? Or is it only for questions about the Unity engine? I need help writing code for this SDK specifically.
Hi @jGate99 - Glad to hear, please keep us posted on how it goes!
Regarding how well it knows Unity ML: we have ingested the docs on Sentis, and I’ll need to check about the ML-Agents package. So it should be able to support you in working with Sentis, and its general LLM knowledge should help with the rest!
Hey, our focus is to support you in Unity development in the engine! The goal is to provide you with a product that is Unity-centric, and deeply embedded in the project to give you contextual and relevant answers to questions and generate reliable code.
As an example, you can attach objects (e.g., a prefab) as context to your chat to help understand why something might not behave as you expected.
Regarding the Oculus SDK, I’d highly recommend trying it! The benefit of an LLM pipeline is that it will probably be able to generate the Oculus SDK <> Unity code you need, with the Unity-specific parts retrieved from our documentation. Join the beta and let us know how it goes!