I was lucky enough to be added to the Muse Sound Generator Beta program, and I’ll add my feedback here as I use the program.
I’ll start with the pros: The AI has a pretty good understanding of what I want from the text prompts. I’ve tried getting human screams, fictional monster noises, different clocks ticking, and ambient and stinger FX. Each time it seems to understand what I’m asking for. The audio doesn’t clip or have any artifacts that make it unbearable.
Cons: Audio quality is not great. There seems to be a lack of depth in the frequency range being used, and when it generates low-end content there’s a lingering artifact in the higher frequencies as well, making it sound saturated or distorted.

Pretty much any time I ask for an animal or monster sound effect, it sounds very much like a human making those noises. As a sound designer, I can easily take these FX and run them through a DAW with my favorite plugins to make them more “monster” sounding, but for animal noises that isn’t really possible.

It also sounds like it’s being pitch corrected (badly): it doesn’t glide between pitches, but sounds like discrete pitch steps glued together.

I’ve also tried clock sounds just to see what it would do, and it does get the ticks right. I added a prompt to make the ticks one second apart, which works, but the ticks are mostly inconsistent in timbre. Again, importing into a DAW and editing manually could make it work, but that seems to defeat the purpose of the AI model. Why use this when I’ll still have to edit it myself, especially for something as simple as a clock?

I’ll answer questions and update here as I tinker with it more.
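For anyone who wants to verify the tick-spacing claim objectively rather than by ear, here’s a rough sketch of how I’d measure the gaps between tick onsets with plain NumPy (simple amplitude thresholding on the exported audio). The thresholds and the synthetic test signal are my own assumptions, not anything from Muse:

```python
import numpy as np

def tick_intervals(signal, sr, threshold=0.5, min_gap=0.1):
    """Return the time gaps (seconds) between detected tick onsets.

    An onset is the first sample of each run where |signal| crosses
    `threshold`; onsets closer than `min_gap` seconds are merged,
    so one tick's decay isn't counted twice.
    """
    above = np.abs(signal) > threshold
    # Samples where the threshold is crossed going upward
    onsets = np.flatnonzero(above & ~np.roll(above, 1))
    if onsets.size == 0:
        return np.array([])
    keep = [onsets[0]]
    for o in onsets[1:]:
        if (o - keep[-1]) / sr >= min_gap:
            keep.append(o)
    return np.diff(keep) / sr

# Synthetic sanity check: four clicks exactly one second apart
sr = 22050
sig = np.zeros(sr * 4)
for t in range(4):
    sig[t * sr : t * sr + 100] = 1.0  # 100-sample click
print(tick_intervals(sig, sr))  # → [1. 1. 1.]
```

Running this on the generated clock audio (loaded with any WAV reader) would show how far the intervals drift from exactly one second; it won’t catch the timbre inconsistency, just the timing.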