Would adding Wwise or FMOD to the project be feasible? Audio integration using middleware is one of my passions, and it seems I'm not the only one who would like to add dynamic audio.
I won’t speak for the project leads, but my thinking is that this project is meant to be done entirely with Unity, since it’s a chance to showcase what Unity can do. I’m always a fan of middleware, but I’m thinking native here.
Hmm, what about a handmade system?
I, too, think native would be better. Both middlewares would require a license, and FMOD would additionally require FMOD Studio.
Both middlewares have a free tier for production budgets under $XYZk, but I don’t know how that applies to our project here…
I guessed it would be fine, since the project is open source and released for free. Am I wrong?
Ultimately it would be up to the project leads. Mauri is right that the open source nature of the project opens some legal concerns that Unity as a company would have to consider.
My guess would be not to use any of it, since we don’t want a legal dispute over the use of the middleware. As much as I would love to use FMOD, I think the focus here is to encourage people to use Unity’s built-in components and/or to create our own.
From the Contributor Guide:
Hmm, what about a simple audio manager that keeps track of the BPM and key? Or, slightly more complicated, one that uses the melody/side melody or the chords to create a random arpeggio?
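Roughly what I mean, as an engine-agnostic sketch (Python here for brevity; in the actual project this would be C# driving Unity AudioSources, and the scale/chord choices are just illustrative assumptions):

```python
import random

# Semitone offsets of a major scale relative to the key's root note.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]

def random_arpeggio(chord_degrees, length=4, octaves=2):
    """Pick `length` notes from the chord's scale degrees, spread over `octaves`.
    Returns semitone offsets from the key's root."""
    notes = []
    for _ in range(length):
        degree = random.choice(chord_degrees)   # e.g. root, third, fifth of the chord
        octave = random.randrange(octaves)
        notes.append(MAJOR_SCALE[degree % 7] + 12 * octave)
    return notes

# A I-chord arpeggio (root, third, fifth) in whatever key the manager tracks:
print(random_arpeggio([0, 2, 4]))
```

Because every note is drawn from chord tones of the current key, the result is random but never out of tune.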
I wrote a small utility tool that uses Unity’s native Audio component to randomize and automate some audio elements. I wrote it in an extreme rush, with no idea how editor UI worked, so expect some badly optimized and badly written code. If needed, I can update it to be cleaner, more optimized, and easier to understand.
Edit: I released it under the MIT license, so no one has to worry about the legal side.
Nice. I don’t know how your tool works but I thought about using something like this GitHub - Volian0/midi2json: MIDI to JSON converter for Piano Tiles 2 to coordinate the music with the sounds.
That’s a totally different kind of tool. From what I understand, it’s a MIDI converter. Not that I understand MIDI or know anything about it; I literally know nothing about it. It’s just a term I heard over and over from the audio engineers I’ve worked with lol
My tool simply randomizes the audio clips you provide, using the parameters you give it: pitch, spatial blend, volume, hearing distances, etc.
@superpig btw I realized that I have not provided the link to the repo. What an idiot I am… Here is the link https://github.com/EmreB99/Dynamic-Audio-for-Unity
I thought about audio middleware, but I am not 100% sure how nice they play with open-source projects.
They often come with libraries/DLLs/so on, and I don’t know if we can redistribute those as part of the project’s files.
To anticipate suggestions: I don’t think it’s fine to strip those libraries and distribute the project as incomplete. We want this project to be easy to just download and try, without missing pieces.
If anyone could clarify that from a legal standpoint, that would be great. Otherwise, we’ll have to rely on Unity’s audio tools.
I think we shouldn’t worry about middleware for this. Since the contributor guidelines specify anything we add needs to be open source, FMOD’s license precludes that. Wwise even more so.
Also, I wouldn’t want to worry about using middleware with source control. Even for someone who’s dealt with it on a lot of projects, it can be a bit stressful to manage bank building, source assets, etc., especially for people who aren’t used to dealing with it.
Lastly, I think it’d be a great experience for everyone to work with Unity’s native tools. Let’s make something that kicks butt with what we already have!
Alright, what I mean is we could convert the tracks to JSON so that instead of playing random notes, we would play specifically chosen notes depending on the current chord or melody.
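Something like this, as a rough sketch (Python for brevity, and the JSON shape is purely a made-up assumption of what a converter like midi2json could be massaged into):

```python
import json
import random

# Hypothetical chord chart: each entry is a start beat plus the chord
# tones (as MIDI note numbers) active from that beat onward.
chart_json = '[{"beat": 0, "notes": [60, 64, 67]}, {"beat": 4, "notes": [57, 60, 64]}]'

def chord_at(chart, beat):
    """Return the chord tones active at a given beat.
    Assumes the chart entries are sorted by beat."""
    current = chart[0]["notes"]
    for entry in chart:
        if entry["beat"] <= beat:
            current = entry["notes"]
    return current

chart = json.loads(chart_json)
note = random.choice(chord_at(chart, 5))  # a chosen-from-chord note, not a fully random one
```

The audio manager would then only ever pick notes from whatever chord the track is currently on.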
I would strongly recommend not using audio middleware here. For folks interested in dynamic or generative music playback it’s an interesting approach, but I think it might be overkill; a multilayered set of stems, with some elements that blend in and out based on game state, might be enough IMO.
There are a few pieces of audio / music code around that could be used to build a simple dynamic loop based system that would be appropriate for the game. This guide has some of the fundamentals: How to Queue Audio Clips in Unity (the Ultimate Guide to PlayScheduled) - John Leonard French
I think first and foremost it would be good to get a list of requirements for what the audio system actually needs/wants to do from a game design / mechanics perspective.
Thanks, I thought about a system that would queue audio, but PlayScheduled seems to be the way to go.
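The core trick from that guide is scheduling clips on the audio clock instead of calling Play(). The beat-grid math itself is tiny; a sketch in Python for brevity (in Unity you’d read AudioSettings.dspTime and pass the result to AudioSource.PlayScheduled):

```python
import math

def next_beat_time(dsp_time, start_time, bpm):
    """Time (in seconds, on the audio clock) of the next beat boundary
    at or after `dsp_time`, given the track started at `start_time`.
    This is the value you'd hand to AudioSource.PlayScheduled in Unity."""
    beat_len = 60.0 / bpm
    beats_elapsed = (dsp_time - start_time) / beat_len
    return start_time + math.ceil(beats_elapsed) * beat_len
```

Queueing the next clip a beat or two ahead of time like this keeps everything sample-accurate regardless of frame rate.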
Also, we almost certainly won’t have a key-changing music track, so a system that generates a random pitch from a specific set of notes would be sufficient.
We have the bard, so a fading-in/out spatial melody could be a thing. I could also see something like tribal drums for enemy encounters.
Walking could be on beat and hits could be in tune.
There’s a simple implementation of playing random in-key notes that I did for the Physics Playground Prototype for Unity, if folks are interested it’s publicly available here.
I don’t think it’s really necessary to use a middleware, is it?
It’s not that hard to implement our own system to handle audio, especially with multi-channel pooling for SFX.
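A minimal sketch of what such a pool could look like (Python for brevity; in Unity each channel would be a pooled AudioSource, and the round-robin policy is just one possible choice):

```python
class SfxPool:
    """Round-robin pool of audio channels, so overlapping SFX don't
    cut each other off. In Unity each slot would hold an AudioSource."""

    def __init__(self, size=8):
        self.channels = [None] * size  # each slot remembers the clip it last played
        self.next = 0

    def play(self, clip):
        """Assign the clip to the next channel and return that channel's index.
        In Unity this is where you'd call source.PlayOneShot(clip)."""
        slot = self.next
        self.channels[slot] = clip
        self.next = (self.next + 1) % len(self.channels)
        return slot
```

Once all channels are busy, the oldest one is simply reused, which is usually fine for short SFX.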