Nice, I've done some tests with FMOD but it seems too complicated for what I wanted to do.
I found a C# audio wrapper called NAudio: http://naudio.codeplex.com/
And I did it with the wrapper, using the ASIO driver, so the latency is very low.
When the new version of your plugin is out I'll try it and check the latency!
yeah, NAudio is a widespread .NET audio solution which I have used occasionally too; from what I understand its main bottleneck is that it does en/decoding in managed code and it lacks native audio interface wrappers for various platforms. ( the latter maybe being progressively patched )
FMOD has more consistent multi platform coverage including consoles, has better platform specific support, and has no noticeable footprint when running ( its footprint in the profiler is lower than other Unity components when running e.g. an empty scene, and it has 0 GC Allocs ).
As for latency - I did some testing measuring the AudioSettings.dspTime difference between the 1st audio frame of an AudioSource and the 1st callback requested by FMOD for routing the audio to the selected output.
The difference is something like 60 - 100 ms, give or take some logging involved.
So - not utterly terrible, and it might be possible to lower it by offloading FMOD initialization to Start and calling the setup for routing from the OnAudioFilterRead thread. ( which I did initially, but it seemed to have problems on Mac, so I left it on the main thread for now )
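( for anyone curious, a rough sketch of this kind of probing - the component name is made up and OnAudioFilterRead here only stands in for FMOD's routing callback, which you'd need the plugin's internals for: )

```csharp
using UnityEngine;

// Rough, hypothetical latency probe - NOT the plugin's actual measurement.
// Logs the AudioSettings.dspTime elapsed between calling Play() and the first
// non-silent buffer passing through OnAudioFilterRead; the real test compared
// against FMOD's routing callback instead.
[RequireComponent(typeof(AudioSource))]
public class DspLatencyProbe : MonoBehaviour
{
    double playDspTime;
    bool reported;

    void Start()
    {
        playDspTime = AudioSettings.dspTime;
        GetComponent<AudioSource>().Play();
    }

    // runs on the audio thread; OnAudioFilterRead is called continuously,
    // so wait for the first buffer that actually contains signal
    void OnAudioFilterRead(float[] data, int channels)
    {
        if (reported)
            return;

        for (int i = 0; i < data.Length; ++i)
        {
            if (data[i] != 0f)
            {
                reported = true;
                double deltaMs = (AudioSettings.dspTime - playDspTime) * 1000.0;
                Debug.Log("first audible buffer ~" + deltaMs.ToString("F1") + " ms after Play()");
                break;
            }
        }
    }
}
```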
The redirection atm works for any AudioSource, but I've yet to come up with an easy to use and understand solution for AudioStream too - maybe leaving it as a separate component so it can be used independently.
I'm not sure what you mean, can you be more specific about what you are trying to do?
You can request some non-standard speaker setup directly from FMOD, if that's what you mean:
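( roughly along these lines against FMOD's C# wrapper - a minimal sketch only, error checking omitted; the plugin creates and owns its own FMOD system, so this just shows the underlying call, not plugin API: )

```csharp
using System;
using UnityEngine;

// Sketch only: ask FMOD for a non standard speaker setup (here 5.1) before init.
// Error checking omitted; AudioStream sets up and owns its own FMOD system,
// so this only illustrates the underlying FMOD call.
public class CustomSpeakerModeExample : MonoBehaviour
{
    FMOD.System fmodSystem;

    void Start()
    {
        FMOD.Factory.System_Create(out fmodSystem);

        // request 5.1 output at 48 kHz regardless of the device / Unity defaults
        fmodSystem.setSoftwareFormat(48000, FMOD.SPEAKERMODE._5POINT1, 0);

        fmodSystem.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);
    }

    void OnDestroy()
    {
        fmodSystem.release();
    }
}
```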
This would atm work only when not using an AudioSource with this plugin.
(and it might actually be worth implementing for AudioSource with the redirect component too)
… other than that, probably not automatically without some explicit sw channel filtering ( or having the original signal already contain only the required channel/s )
We are currently looking for a solution that lets us select an output device and specify a channel on that device per audio file.
So e.g. if a soundcard has 8 channels, we would be able to play 8 audio files at the same time.
unless the driver creates and exposes e.g. virtual device/s mapped to the sound card's respective physical channel/s, I'm afraid there's no general way to utilize them
( sound is played on a 'driver' - which in FMOD is the representation of a single device in the system - and then it depends on how that concrete device is configured, i.e. what outputs are connected, how the signal is split up if necessary, and so on )
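( to make the 'driver' notion a bit more concrete, a small sketch listing the output devices FMOD sees on a machine - plain FMOD calls, not plugin API; error checking omitted, and the exact getDriverInfo signature differs slightly between FMOD versions - this follows the 1.0x C# wrapper: )

```csharp
using System;
using System.Text;
using UnityEngine;

// Sketch: list the output devices ('drivers') FMOD sees on this machine.
// Follows the 1.0x C# wrapper; newer wrappers return the name as a string
// instead of filling a StringBuilder. Error checking omitted.
public class ListFmodOutputDrivers : MonoBehaviour
{
    void Start()
    {
        FMOD.System fmodSystem;
        FMOD.Factory.System_Create(out fmodSystem);
        fmodSystem.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);

        int numDrivers;
        fmodSystem.getNumDrivers(out numDrivers);

        for (int i = 0; i < numDrivers; ++i)
        {
            StringBuilder name = new StringBuilder(256);
            Guid guid;
            int systemRate, speakerModeChannels;
            FMOD.SPEAKERMODE speakerMode;

            fmodSystem.getDriverInfo(i, name, name.Capacity, out guid,
                out systemRate, out speakerMode, out speakerModeChannels);

            Debug.Log(string.Format("driver {0}: {1}, {2} Hz, {3} channels",
                i, name, systemRate, speakerModeChannels));
        }

        fmodSystem.release();
    }
}
```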
new AudioSourceOutputDevice component - enables redirection of an AudioSource's output buffer to any audio output present in the system
update for FMOD Version 1.08.11 ← at least this version is needed for AudioSourceOutputDevice to work, since it contains a bug fix that formerly prevented it
fixed tags reporting on track change
( refactored common functionality into a new source file )
So it's possible to direct the output of any AudioSource to any system output driver present via the new AudioSourceOutputDevice component (more info and details in the readme).
/cc @rekerukuru89 if you still want to try this out.
Your plugin looks great but before I purchase I wanted to check if it can do what I’m after. I’d like to direct a real-time live audio input coming into my sound card to play from a Studio Event Emitter with 3-D spatial positioning applied. The only processing that is needed is the spatialisation which I have working through FMOD at the moment but I’m unsure if it is possible to do the same with a live input?
First of all - AudioStream does not, unfortunately, stream audio in at the moment.
I was thinking about adding audio / microphone input for some time now, but it is not ready yet.
Secondly - technically it does its streaming via FMOD, but not using any FMOD Studio functionality - I suppose the Studio Event Emitter is part of that.
I can only recommend looking for a live input in FMOD Studio itself, but not being very familiar with it myself, I am not sure if it's feasible at all.
That being said - I’ll probably try to figure something out - stay tuned.
Thanks for your response. Great to hear you're investigating the audio/microphone input. That's interesting to hear, I was unsure if it actually was possible. The spatialisation does work through an FMOD Studio plugin, so it would need to pass through that at some point in the processing. However, the same plugin that I'm currently using (the Oculus Native Spatializer) also works straight in Unity without the need for FMOD, so any standard Unity AudioSource can also be spatialized. Would this make any difference to the possibility of using your future audio/microphone input function attached to a moving object and then spatialized?
Hi, it shouldn’t:
AudioStream can be used with AudioSources transparently - it passes audio data to Unity’s implementation for spatialization and all things Unity such as effects etc.
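( just to illustrate the general principle - this is not how the plugin is implemented internally - any PCM you hand to an AudioSource, here a generated tone fed through a streaming AudioClip, gets Unity's 3D spatialization, effects and so on: )

```csharp
using UnityEngine;

// Illustration of the general principle, not the plugin's internals:
// whatever PCM an AudioSource plays - here a generated sine tone supplied
// via a streaming AudioClip - gets Unity's 3D spatialization applied.
[RequireComponent(typeof(AudioSource))]
public class SpatializedToneExample : MonoBehaviour
{
    const int sampleRate = 48000;
    const float frequency = 440f;
    int position;

    void Start()
    {
        var clip = AudioClip.Create("tone", sampleRate, 1, sampleRate, true, OnAudioRead);

        var source = GetComponent<AudioSource>();
        source.clip = clip;
        source.spatialBlend = 1f;   // fully 3D - Unity positions the sound in space
        source.loop = true;
        source.Play();
    }

    // Unity pulls PCM data from here as the clip plays
    void OnAudioRead(float[] data)
    {
        for (int i = 0; i < data.Length; ++i)
        {
            data[i] = Mathf.Sin(2f * Mathf.PI * frequency * position / sampleRate);
            position++;
        }
    }
}
```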
I might provide you with a testing build with the audio input to test the processing speed and moving objects in VR, but it will be at least a couple of days until I'm able to look at it and figure out whether it's actually feasible or not.
Hi! Can I use this plugin to stream and play audio from a plain URL link to a music file? (i.e. start playing while it is still downloading)
I cannot use the default WWW and AudioSource for streaming because it crashes with "Error: Cannot create FMOD::Sound instance for resource H%BB, (Operation could not be performed because specified sound/DSP connection is not ready. )".
Unfortunately it looks like this is an engine bug. There is a bug report on the IssueTracker; Unity has marked it as fixed, but users are reporting it as still an issue.
That would be great to test out, thank you. It would be important for my particular project to keep the latency to a minimum. It would be interesting to see how it performs.
@VictorKrasovsky just to clarify:
if you want to physically download the file and use it later, you'd have to do it yourself.
AudioStream just streams the content and immediately plays it on the audio output, without storing it.
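( a minimal sketch of doing the download yourself, if you also want to keep the file around - the URL and file name are made up, and you'd point the player at the saved path afterwards; WWW is era-appropriate here, UnityWebRequest would do the same on newer Unity versions: )

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

// Sketch of downloading a file yourself so it can be reused later:
// fetch the URL, write it to persistent storage, then hand the saved path
// (or the original URL) to whatever should play it. Names are made up.
public class DownloadThenPlayExample : MonoBehaviour
{
    IEnumerator Start()
    {
        string url = "http://example.com/song.mp3";              // hypothetical
        string savePath = Path.Combine(Application.persistentDataPath, "song.mp3");

        WWW www = new WWW(url);
        yield return www;

        if (string.IsNullOrEmpty(www.error))
        {
            File.WriteAllBytes(savePath, www.bytes);
            Debug.Log("saved to " + savePath);
            // ... point the player / plugin at savePath here
        }
        else
        {
            Debug.LogError(www.error);
        }
    }
}
```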
Small update regarding streaming from audio input devices and spatial sound / stereo panning -
After initial testing of the audio input - thanks @phillee53 for testing! - I've discovered that stereo panning and spatial left/right positioning is - unfortunately - ignored by the AudioSource. I've reported a bug to Unity, but it's hard to say when this will be corrected.
just felt obliged to warn users wanting to stream audio to 3D positioned objects - this will not work for now
I will probably release an update which will allow streaming from any audio input ( microphones, line-in devices and so on ), but without spatial functionality - if I can't find a workaround.
Hello @hzqtkxel ,
it depends on how you structure your audio assets, but since you mention the SD card, I would assume yes.
This plugin does not import anything into your project - it only needs a full file path to an audio file, which it can then stream - so if you put your files somewhere on the Android filesystem outside of the Unity application and that location is accessible, you are good to go.
The situation is more complicated if the audio is part of the project, e.g. in the StreamingAssets folder - since the plugin needs a full file path and the audio is distributed as part of the resulting jar/apk, you would first need to extract it from the jar, supply its file path to the plugin, and only play it afterwards.
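( a rough sketch of that extraction step - 'clip.mp3' is just a made up example file name: )

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

// Sketch of extracting an audio file from StreamingAssets on Android:
// StreamingAssets lives inside the apk ("jar:file://...!/assets/..."),
// so it is read via WWW and written out to persistentDataPath, which is a
// real filesystem path a plugin can stream from. "clip.mp3" is made up.
public class ExtractStreamingAssetExample : MonoBehaviour
{
    IEnumerator Start()
    {
        string srcPath = Path.Combine(Application.streamingAssetsPath, "clip.mp3");
        string dstPath = Path.Combine(Application.persistentDataPath, "clip.mp3");

        if (srcPath.Contains("://"))                 // Android: inside the apk
        {
            WWW www = new WWW(srcPath);
            yield return www;
            File.WriteAllBytes(dstPath, www.bytes);
        }
        else                                         // editor / standalone: plain file
        {
            File.Copy(srcPath, dstPath, true);
        }

        Debug.Log("full file path usable by the plugin: " + dstPath);
    }
}
```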
I hope this makes sense and helps!
EDIT: cleared up the StreamingAssets folder part. (Resources has nothing to do with this…)
I'm having some issues with the 'Audio Source Output Device' component. When I use it to route the audio through a Unity AudioSource, the audio becomes 'crackly' and sounds clipped / distorted. If I use the standard AudioStream behaviour and do not route through a Unity AudioSource, the sound quality is perfect. This happens even in the demo scene.
Is there a way to resolve this issue?
EDIT: the issue seems to resolve itself if I enable 'Bypass Effects' on the AudioSource. However, I can still hear the audio 'stutter' every so often while streaming a local song off my hard drive. I assume this has something to do with the streaming process.