AudioStream - {local|remote media} → AudioSource|AudioMixer → {outputs}

Hi, I think you need to toggle it by setting the OutputDevice ID to 0 (as in, not -1), since -1 turns off the processing IIRC.
That said, ASIO is often very fragile - make sure you’re also not using any other mixer/plugin from the asset which uses ‘normal’ audio when using an ASIO-enabled mixer/group.
I also used FlexASIO (GUI) for testing, since ASIO4ALL didn’t always behave as expected - might be worth a try if everything else fails.
Lmk if you make it work, thanks!
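As a minimal sketch of the toggle described above - note the component and field names here are assumptions, not the asset’s confirmed API; check the output-device component you actually have on the AudioSource for the real names:

```csharp
using UnityEngine;

// Hypothetical sketch - 'YourOutputDeviceComponent' and 'outputDriverID' stand
// in for whatever AudioStream output-device component / device-ID field you
// have attached to the AudioSource.
public class AsioProcessingToggle : MonoBehaviour
{
    // Per the note above: 0 (or another valid device ID) enables the
    // processing, -1 turns it off.
    public void SetProcessing(YourOutputDeviceComponent output, bool enabled)
    {
        output.outputDriverID = enabled ? 0 : -1;
    }
}
```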


I would like to know if my use case would be solved with this asset -

  1. I will be playing a YouTube video in my scene.
  2. I want to capture system audio to drive audio-reactive effects.
  3. It needs to work on Android / Quest headsets.

Is this possible, or does loopback only work on desktop? Also, is there any other way to get SpectrumData from a YouTube video? I’ve run into CSP issues with YouTube.

Also, will I need the FMOD Unity integration for this functionality?

Thanks!

: I don’t know what CSP is; if you’re playing a YouTube video in the scene, then it’s the responsibility of the asset/method you’re using to play it to properly feed its audio stream into a Unity AudioSource, apart from the video texture - then you can use SpectrumData on it as usual.

Loopback specifically is Windows-only - it needs manual interface setup. You can similarly set up system audio capture on macOS (not exactly sure about Linux), but not on mobiles/Android.
The FMOD Unity integration is needed to access any input/recording audio interface.
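Once the video playback solution does feed its audio into a scene AudioSource, reading the spectrum from it uses Unity’s standard GetSpectrumData API - a minimal sketch (the AudioSource reference is whatever your video playback component outputs to):

```csharp
using UnityEngine;

// Minimal sketch: read FFT spectrum data from the AudioSource that the video
// playback solution feeds, and use it to drive an audio-reactive effect.
public class SpectrumReactive : MonoBehaviour
{
    public AudioSource videoAudioSource; // fed by the video player in the scene
    readonly float[] spectrum = new float[512]; // must be a power of two, 64..8192

    void Update()
    {
        // Fills 'spectrum' with FFT magnitudes of channel 0 of the source.
        videoAudioSource.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Example: drive an effect with low-frequency energy.
        float bass = 0f;
        for (int i = 0; i < 8; i++) bass += spectrum[i];
        transform.localScale = Vector3.one * (1f + bass);
    }
}
```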


Experiencing freezing on iOS with this asset.

Steps to reproduce:

  1. Create an empty URP 6000.0.18f1 project.
  2. Import the latest FMOD from the Asset Store (2.02.22).
  3. Import the latest AudioStream from the Asset Store (3.4.2).
  4. Build SampleScene with a particle system to notice the freezing.

When resumed from the background, the app doesn’t render anymore, but tapping UI buttons still generates Xcode logs, so the app is still running…

If I import FMOD alone, there are no problems. If I force-quit the app and relaunch, everything is fine until it gets backgrounded. I have tried the iOS-specific settings recommended in the documentation, with the same result.

Any ideas? Is there something to hook into via OnApplicationPause or OnApplicationFocus?

This looks like this issue:

(look in the ‘AudioStream/Plugins/iOS’ folder for the app controller .mm file)
Unity will probably have to fix UnityBatchPlayerLoop if they want to keep it around.
Lmk if this worked, thanks!

Hello,

I’m currently developing a karaoke app using AudioStream, and I’ve encountered an issue. When recording the user’s singing with the iPhone Microphone, the background music played through the speaker gets recorded along with the singing, causing an overlap.

I tried using iOS’s AVAudioSessionModeVoiceChat to enable echo cancellation, but it seems there’s a conflict with the AVAudioSessionWrapper used by AudioStream, and the echo cancellation is not working.

Do you know how to properly configure this within AudioStream to avoid recording the background music while using the microphone? Any advice would be greatly appreciated!

Thanks in advance!

The audio session is modified in AudioStreamAppController.mm (the file is in AudioStream\Plugins\iOS) in order to have access to Bluetooth devices
– remember to modify it there (too), not only in the Xcode build when testing, otherwise it will be overwritten when Unity generates the Xcode project
set the mode there and the audioSession should use it
you should also have iOS recording enabled in the iOS Player Settings (this should be mentioned in the documentation)
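Since the change goes into the Objective-C app controller itself, a sketch of setting the voice-chat mode there might look like this - this uses the standard AVAudioSession API, but the exact surrounding code and options in the asset’s AudioStreamAppController.mm may differ (the Bluetooth option here is an assumption based on the Bluetooth access mentioned above):

```objc
// In AudioStreamAppController.mm - sketch of configuring the shared audio
// session with the voice-chat mode (which enables system echo cancellation).
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord
                mode:AVAudioSessionModeVoiceChat
             options:AVAudioSessionCategoryOptionAllowBluetooth
               error:&error];
if (error != nil) {
    NSLog(@"AVAudioSession setup failed: %@", error);
}
[session setActive:YES error:&error];
```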

Many thanks. I’ve tried AVAudioSessionModeVoiceChat instead of AVAudioSessionModeDefault in AudioStreamAppController.mm. It actually ducks everything, but there are still echoed voices. Maybe I should try some software AEC methods.

One more thing - Unity later added ‘Force iOS Speakers when Recording’ to the iOS Player Settings.
It is off by default - just make sure you’re not using it accidentally/unintentionally - it should help with echo cancellation.
But other than that you’re probably right / except when the user is using headphones :|/

Considering this is a karaoke game, some players will have to hear the background music through the speakers while recording their singing if they don’t have headphones. However, this leads to a poor experience for users without headphones, as we rely on pitch-recognition algorithms for scoring. :|/

Greetings all.

I just bought this asset and I have a specific use case for it.

I need to play up to 4 videos at the same time, while playing each audio track on a different output device.
I’m using Windows and I have a hardware component that gives me 4 different output devices.

Which demo scene would be the best place to start?
What’s the best approach to do this?

Thanks

The AudioSourceOutputDeviceDemo scene in the ‘Output devices’ section shows how to direct an AudioSource to a user-selected output.
The AudioSource in this case will be the AudioSource set in the VideoPlayer’s output mode [ Unity - Scripting API: Video.VideoAudioOutputMode.AudioSource ],
so I recommend setting up a separate scene/game object with a single video + its output first;
having four of them in a scene should then work as expected.
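A sketch of the per-video setup described above, using Unity’s standard VideoPlayer API - the output-device redirection itself would come from the asset’s component added to the same AudioSource’s GameObject, which is not shown here:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: route one VideoPlayer's audio track into a scene AudioSource.
// Repeat this setup once per video (four times for the four-video case),
// each with its own AudioSource and output-device component.
public class VideoToAudioSource : MonoBehaviour
{
    public VideoPlayer videoPlayer;
    public AudioSource audioSource;

    void Awake()
    {
        // Must be configured before the player prepares/plays.
        videoPlayer.audioOutputMode = VideoAudioOutputMode.AudioSource;
        videoPlayer.EnableAudioTrack(0, true);
        videoPlayer.SetTargetAudioSource(0, audioSource);
        videoPlayer.Play();
    }
}
```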

– you can also see OutputDeviceUnityMixerDemo in the UnityMixer section, where the AudioSource used for the VideoPlayer output would have its output set to an AudioMixer group, using the AudioStream mixer plugin if e.g. on Windows

But note there will very likely be a noticeable delay and the audio won’t be in sync.
. It should be possible to use the VideoPlayer’s VideoAudioOutputMode.APIOnly output mode to get its audio, but this is currently not in the asset, and it wouldn’t solve this entirely either.
. Probably the only way to get good results right now is to render the video with the audio already shifted by the required/estimated offset, since the VideoPlayer can’t do this automatically.

Hello, I’m working on a project that is using Dante Virtual Soundcard for audio in/out. I’m currently using DVS in WDM mode, and your AudioStream asset with the AudioStreamInput2D script. Because of the latency with WDM, I’m looking to switch the project so I can use DVS in ASIO mode.

Is this possible with your asset, and could you point me in the direction I need to go with set up? In the demo exe, if I enable ASIO mode, and open the Output Devices demo, I can see both ASIO4ALL and Dante Virtual Soundcard as available. However, for the input devices demo, I only see ASIO4ALL available but not DVS. Not sure if there’s an additional configuration step I’m missing?

Thanks in advance for the help!

You have to configure this at the ASIO4ALL/DVS level, probably
/ all the app does is just read all available/present/configured interfaces

But in the case of ASIO4ALL I’m not sure it even allows configuring more than one input in this case; plus, in the case of the demo, also pay attention to its (default) 512/4 buffers - these should match.

From what I understand, I shouldn’t have to use ASIO4ALL in the middle and should be able to go directly to DVS?

I did confirm I’m using a buffer size of 512, but I’m not sure what the equivalent of the 4 count is in DVS. There aren’t many configuration options in DVS at all, so not sure what else to try.



Ideally, also make sure ASIO4ALL isn’t running (which would probably require uninstalling it).
Other than trying 20 ms latency too and/or seeing if some other app(s) can pick up DVS ASIO input(s),
there’s probably not much I can recommend though
/ wrt ASIO buffers, 512 is the most important value - oh, and I think the format might be PCMFLOAT (though this shouldn’t affect opening the input…)

Interesting, uninstalling ASIO4ALL did the trick! Appreciate that suggestion - I’m now seeing DVS as an ASIO input in the demo as well.

Before I commit to doing some refactoring in my project, can you confirm that the approach below, having Unity’s video player use AudioStream, is still the right way to go about it? I also read some comments about delay and sync problems in your recent response to Vice39’s question.

Heh, good point; I’m not sure right now, tbh.
It might work without this additional clip creation, but I’d have to test it -
it’s not that much of a hurdle though: if VideoPlayer + its output to an AudioSource in the scene + the ASOD component on it doesn’t work, then this will be necessary.
[Unity AudioSources worked most of the time with an empty/null AudioClip, but I added this to be sure IIRC]
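If the clip creation does turn out to be needed, a placeholder clip can be made with Unity’s standard AudioClip.Create API - a rough sketch only (the one-second silent, looping clip here is an assumption about what the workaround looks like, not the asset’s confirmed code):

```csharp
using UnityEngine;

// Sketch of the 'additional clip creation' mentioned above: give the
// AudioSource a short, silent placeholder clip so it keeps its audio
// callback running even before the VideoPlayer feeds it samples.
public static class PlaceholderClip
{
    public static void Attach(AudioSource source, int sampleRate = 48000)
    {
        // One second of silence, mono, non-streaming.
        var clip = AudioClip.Create("placeholder", sampleRate, 1, sampleRate, false);
        source.clip = clip;
        source.loop = true;
        source.Play();
    }
}
```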