Hello Unity!
I have a game that uses a proprietary system for storing DLC (image & audio data)
This DLC is downloaded from the cloud (on user demand) and then loaded from a Byte[ ] as required (during level load)
It mostly works well, except I'm finding it near impossible to come up with an acceptable solution for the audio side of things.
The audio content is OGG, and I've currently implemented some temporary code that uses UnityWebRequestMultimedia to create the AudioClip.
But it requires the following…
1. Open custom DLC package
2. Read audio content from DLC package into Byte[ ]
3. Write Byte[ ] to storage (as file)
4. Use UnityWebRequestMultimedia to load the file from storage
This works in the Editor, on Windows & Android (haven’t tested iOS yet)
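For reference, the temporary code is roughly this shape (a simplified sketch, not the exact implementation; the temp file name and callback are just illustrative):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

public class DlcAudioLoader : MonoBehaviour
{
    // Sketch of the current workaround: DLC bytes -> temp file -> UnityWebRequestMultimedia.
    IEnumerator LoadClipViaTempFile(byte[] oggBytes, System.Action<AudioClip> onLoaded)
    {
        // Step 3: write the raw OGG bytes out to storage (the part I'd like to avoid).
        string tempPath = Path.Combine(Application.persistentDataPath, "dlc_temp.ogg");
        File.WriteAllBytes(tempPath, oggBytes);

        // Step 4: immediately read the same data back via UnityWebRequestMultimedia.
        using (var request = UnityWebRequestMultimedia.GetAudioClip("file://" + tempPath, AudioType.OGGVORBIS))
        {
            yield return request.SendWebRequest();

            if (request.result == UnityWebRequest.Result.Success)
                onLoaded(DownloadHandlerAudioClip.GetContent(request));

            File.Delete(tempPath); // don't leave raw audio sitting in storage
        }
    }
}
```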
But…
The extra steps (3 & 4) of writing the file and then reading it straight back have performance implications. On top of that, I don't want to write raw audio content to storage (even if only temporarily).
It looks like Unity includes the required decoders/classes/methods to create an AudioClip from a Byte[ ] (in various formats) but chooses not to make them public, which is very frustrating.
There are dozens of threads/requests for this feature (dating back 10+ years), but it's still not possible. Textures have this support (via Texture2D.LoadImage), but audio does not.
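To illustrate the asymmetry, this is what already exists for images versus the (purely hypothetical) call I'd like for audio:

```csharp
using UnityEngine;

static class DlcDecodeExample
{
    // Images: decoding from an in-memory byte[] already exists.
    static Texture2D ImageFromBytes(byte[] dlcImageBytes)
    {
        var texture = new Texture2D(2, 2);
        texture.LoadImage(dlcImageBytes); // PNG/JPG decoded straight from the byte[], no temp file
        return texture;
    }

    // Audio: the equivalent does NOT exist today - something like this (hypothetical):
    // static AudioClip AudioFromBytes(byte[] dlcAudioBytes) =>
    //     AudioClip.LoadAudioData(dlcAudioBytes, AudioType.OGGVORBIS);
}
```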
Whilst I understand Unity wants developers to go down the AssetBundle / Addressables route, those who choose custom DLC solutions shouldn't be penalised by having basic functionality that's already present kept hidden.
So is there a way to create an AudioClip from a Byte[ ] without an extra write to storage? If not, please could Unity consider surfacing what’s needed?
PS: I even attempted to use NVorbis (as that allows an AudioClip to be built from a Stream, and therefore from a Byte[ ]). But it's managed code: slow-ish when decoding on the main thread and very slow when threaded, so I abandoned that approach.
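For completeness, the abandoned NVorbis attempt was roughly this shape (a sketch only; exact NVorbis constructor/method signatures vary a little by version, and AudioClip.Create/SetData have to run on the main thread):

```csharp
using System.IO;
using NVorbis;
using UnityEngine;

public static class OggDecodeSketch
{
    // Decode an in-memory OGG byte[] to float PCM with NVorbis, then build an AudioClip.
    // No storage write needed, but the managed decode proved too slow for level-load use.
    public static AudioClip ToAudioClip(byte[] oggBytes, string clipName)
    {
        using (var stream = new MemoryStream(oggBytes))
        using (var vorbis = new VorbisReader(stream, false))
        {
            int channels = vorbis.Channels;
            int sampleRate = vorbis.SampleRate;
            int samplesPerChannel = (int)vorbis.TotalSamples;

            // Interleaved float samples for the whole clip.
            var pcm = new float[samplesPerChannel * channels];
            int totalRead = 0;
            while (totalRead < pcm.Length)
            {
                int read = vorbis.ReadSamples(pcm, totalRead, pcm.Length - totalRead);
                if (read == 0) break; // end of stream
                totalRead += read;
            }

            var clip = AudioClip.Create(clipName, samplesPerChannel, channels, sampleRate, false);
            clip.SetData(pcm, 0); // main thread only
            return clip;
        }
    }
}
```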