[Released] Dissonance: Unity Voice Chat


Website | Documentation | Help Forum | Issue Tracker | Asset Store

Dissonance is a voice chat asset for Unity which makes it easy to add high quality, low latency voice communication into your game no matter what network system you’re using!

Features

Real time, high quality, low bandwidth voice communication provided by the Opus codec.

Written to be totally independent of the underlying network system. Use the built-in networking support for UNet HLAPI, UNet LLAPI, Forge Remastered, Forge Classic, Photon Unity Networking, Photon BOLT or Dark Rift 2. You can also write your own integration for any other networking system.

Positional voice playback makes other players’ voices sound like they’re in the correct positions, with no additional bandwidth or processing overhead, even when you’re using a VR spatializer plugin.

Collider-based chat volumes automatically start and stop talking based on where players are in the scene. Attach the trigger to your player to get easy proximity voice chat.

An automatic voice detection algorithm starts and stops transmitting when players are speaking - no need for players to set up a push-to-talk key (PTT is also supported).

Priority Speakers automatically mute lower priority speakers when a high priority speaker is speaking.

Full source code is available when you buy the asset: all the C# source for Dissonance, plus the C++ source for the native dependencies (the C++ is not included in the package and will be supplied on request).

Requirements
Unity 5.6+. Works with Windows, Linux, macOS, iOS and Android.

I would be interested in trying this out for my Multiplayer FPS/TPS.


I’d like to be in on the beta. I am working on an educational VR app and I need voice chat. I have tried photon voice with very little luck.


Your website claims it is in the asset store but I could not find it there. Has it been released yet or when is it going to be released?

Hi Chuluney. I’m afraid we got a bit ahead of ourselves there with the website - it’s currently being reviewed by Unity and will be available as soon as the review is done. I will post an update in this thread, as well as on the website, when it is finally available 🙂

I’m happy to announce that Dissonance is now available on the Unity asset store!

Do you know what would be required to integrate this with Photon Bolt?

Generally, adding a new network backend should be relatively simple. Dissonance only requires two things from its network backend: a way to send an unreliable+unordered message (e.g. UDP) and a way to send a reliable+ordered message (e.g. TCP). Everything else is done for you by Dissonance itself.
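To make that concrete, here is a hypothetical sketch of the two send paths a custom backend has to provide. The interface and method names below are illustrative only - this is not Dissonance’s actual integration API, which is documented on the website:

```csharp
// Hypothetical sketch only - not Dissonance's real integration API.
public interface IVoiceNetworkBackend
{
    // Unreliable + unordered (UDP-like): used for the voice frames
    // themselves, where a late packet is better dropped than delayed.
    void SendUnreliable(int destinationClient, byte[] packet);

    // Reliable + ordered (TCP-like): used for control traffic such as
    // joins, leaves and channel state, which must always arrive in order.
    void SendReliable(int destinationClient, byte[] packet);
}
```

Any transport that can offer both delivery modes (as UNet, Photon and Forge all can) is therefore a candidate for an integration.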

I haven’t used Photon BOLT myself, so unfortunately I can’t say anything specific to it. It’s definitely something we’d like to support in the future.

Dissonance now supports Android and macOS.

Why is this Unity 5.5+ only? We’re on 5.4.4p3. You’re shooting yourself in the foot if you’re only supporting 5.5 and up.

Dissonance was developed using Unity 5.5, so that’s all we supported for the initial release - we do intend to support older versions of Unity soon. In fact we already have a cloud build target for Unity 5.4 which works just fine 🙂

Hello, we are trying to use Dissonance but have run into some issues:

  • The communication is very intermittent: even when the VAD GUI says “Long speech”, the other user sometimes doesn’t receive anything.
  • At a certain point in the game the communication cuts out permanently, with no apparent error in the console.

Can you help us?

Hi RLasne, these sound like they may both be issues we’re aware of.

We recently fixed an issue which caused voice to cut out completely in certain situations; that fix is currently in our test version and should be released to the store within a week (assuming our test group doesn’t find anything which delays it). If you’d like to get your hands on that version right now, send your invoice number to martin@placeholder-software.co.uk and I’ll add you to the test list 🙂

Unfortunately the current VAD isn’t great - it’s OK once it’s configured properly, but it’s extremely picky about your microphone and very fiddly to tweak. We have a complete replacement coming soon (based on the excellent WebRTC VAD).

Dissonance 1.0.5 has just gone live on the asset store!

This version adds a new inspector for the playback component which displays realtime statistics on the playback system. This version also brings a number of important bugfixes for various issues which could cause desyncs, audio cutoff and crackly voice.

We’re already hard at work on the next version which should be available in 4-5 weeks.

I’m trying to use Dissonance over UNET. I have 3 players connected:

  • Host (Server with player): Doesn’t hear anybody.
  • Client 1 (Client with player): Hears Host and Client 2.
  • Client 2 (Client with player): Hears Host and Client 1.

Any idea what the problem with the host might be? They are all using the same scene and the VoicePlayback prefab instances are duplicated and positioned (I attach them manually to our skeleton’s head) correctly.

Sorry that I don’t provide more information; I am unsure what exactly you need. Ask away!

That’s not a problem I’ve seen anyone else have. Does it still happen if you don’t fiddle with the playback instances? By UNET do you mean HLAPI or LLAPI?

What are you trying to achieve by moving the VoicePlayback prefabs around? Dissonance doesn’t really let you do that - it recycles the playback instances as players leave and join, so it completely manages their lifetime. If you’re using the position tracking system (simply attach a component implementing IDissonancePlayer to whatever object you’d like the sound to come from), you shouldn’t need to change the prefabs at all to get positional audio in the right place.

The next version has some more debugging tools for seeing live information about packets in the voice playback pipeline. If you send me an email (martin@placeholder-software.co.uk) with your invoice number, I can give you a test version with that stuff in it - that should help with debugging the problem 🙂

I mean HLAPI. And I don’t really do much fiddling, I just attach this script (which should also work with the recycling, I hope) to the VoicePlayback prefab:

using UnityEngine;
using Dissonance.Audio.Playback;

public class DissonanceHeadAttacher : MonoBehaviour
{
    private VoicePlayback playback;

    private void Awake()
    {
        playback = GetComponent<VoicePlayback>();
    }

    private void OnDisable()
    {
        // Detach when Dissonance recycles this playback instance
        transform.parent = null;
    }

    private void Update()
    {
        // Already attached to a head, nothing to do
        if (transform.parent != null)
            return;

        // Find the player this playback instance belongs to and
        // parent ourselves to their tongue bone
        foreach (var player in GameController.Instance.Players)
        {
            if (player.VoiceChatDissonanceClientId == null || !player.VoiceChatDissonanceClientId.Equals(playback.PlayerName))
                continue;

            transform.parent = player.Skeleton.GetBoneTransform(ActorSkeletonBone.Tongue);
            transform.localPosition = Vector3.zero;
            transform.localRotation = Quaternion.identity;
            return;
        }
    }
}

The reason why we don’t want to use positional tracking is that a) we already know the head position, so there is no need to send the data again and b) we want it to be strictly synchronized with the current head position, so the best way seems to be to just attach it.

I’ll send you a mail later!

That script certainly doesn’t seem like it would break anything - on the other hand, we’ve already done almost exactly the same thing for you! The position tracking system built into Dissonance never transmits positions or rotations over the network; instead it asks the local position tracking component where the player is and moves the playback instance to the same place. In your case you could implement IDissonancePlayer to return the tongue bone’s transform and Dissonance would do the rest of the work for you.
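For reference, such a tracker might look roughly like the sketch below. The IDissonancePlayer members are recalled from the Dissonance documentation and may differ between versions, and the tongue bone wiring is the poster’s own setup - treat this as a starting point, not a drop-in implementation:

```csharp
using UnityEngine;
using Dissonance;

// Sketch: report the tongue bone as this player's voice position so
// Dissonance moves the playback instance there itself - nothing extra
// is sent over the network.
public class TongueBonePositionTracker : MonoBehaviour, IDissonancePlayer
{
    [SerializeField] private string playerId;    // the Dissonance player name
    [SerializeField] private Transform tongueBone;
    [SerializeField] private bool isLocalPlayer;

    public string PlayerId { get { return playerId; } }
    public Vector3 Position { get { return tongueBone.position; } }
    public Quaternion Rotation { get { return tongueBone.rotation; } }
    public NetworkPlayerType Type
    {
        get { return isLocalPlayer ? NetworkPlayerType.Local : NetworkPlayerType.Remote; }
    }
    public bool IsTracking { get { return tongueBone != null; } }
}
```

Depending on your Dissonance version you may also need to register the tracker with DissonanceComms (the docs cover this); the key point is that Position is polled locally every frame, so no extra data goes over the network.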

I’ll ask you for some more details by email once you have the new version, to help with debugging 🙂

Okay! It’ll be a few days until I can get back to this project, so don’t worry if you don’t hear from me immediately.

And I’m looking forward to the new debugging tools. It certainly is weird: on the host, DissonanceComs shows the client player as “speaking”, and both “DEBUG [Dissonance : Playback] Player Playback: Began playback of speech session” and “DEBUG [Dissonance : Playback] Player Playback: Speech session complete” are logged, but the host just can’t hear anything.


We are having numerous issues with Dissonance right now. After a few minutes, the voice completely cuts out.

I’ve created a new scene without any of our other code/prefabs to test from scratch, but I’m having trouble even getting any audio through.

My current setup:

  • Default NetworkManager, with HlapiPlayerTracker in the spawnable prefabs
  • DissonanceSetup prefab with defaults, plus Voice Broadcast and Voice Receipt triggers added with defaults

This is all the HLAPI quickstart instructed me to do, but when I connect over the network I hear no audio. It seems like Unity would require an AudioSource to pipe the audio out to the speakers - do I need to have a VoicePlayback object in the scene?