Holographic Photo Blending with PhotoCapture

Some have asked for an example of capturing a holographic photo that blends in with the physical environment. You can do this by using the PhotoCapture API along with the projection and camera-to-world matrices that are included with the captured image data.

https://www.youtube.com/watch?v=He0ln4FQI-w

Attached are a C# script and a shader. The C# script captures an image using the web camera on the HoloLens whenever you perform the air tap gesture, then uploads the captured image to the GPU so that the shader can access the image data. The shader calculates which part of the image should be shown based on where the photo was taken.

HoloLensSnapshotTest.cs

using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using UnityEngine.VR.WSA.WebCam;
using UnityEngine.VR.WSA.Input;

public class HoloLensSnapshotTest : MonoBehaviour
{
    GestureRecognizer m_GestureRecognizer;
    GameObject m_Canvas = null;
    Renderer m_CanvasRenderer = null;
    PhotoCapture m_PhotoCaptureObj;
    CameraParameters m_CameraParameters;
    bool m_CapturingPhoto = false;
    Texture2D m_Texture = null;

    void Start()
    {
        Initialize();
    }

    void SetupGestureRecognizer()
    {
        m_GestureRecognizer = new GestureRecognizer();
        m_GestureRecognizer.SetRecognizableGestures(GestureSettings.Tap);
        m_GestureRecognizer.TappedEvent += OnTappedEvent;
        m_GestureRecognizer.StartCapturingGestures();

        m_CapturingPhoto = false;
    }

    void Initialize()
    {
        Debug.Log("Initializing...");
        List<Resolution> resolutions = new List<Resolution>(PhotoCapture.SupportedResolutions);
        Resolution selectedResolution = resolutions[0];

        m_CameraParameters = new CameraParameters(WebCamMode.PhotoMode);
        m_CameraParameters.cameraResolutionWidth = selectedResolution.width;
        m_CameraParameters.cameraResolutionHeight = selectedResolution.height;
        m_CameraParameters.hologramOpacity = 0.0f;
        m_CameraParameters.pixelFormat = CapturePixelFormat.BGRA32;

        m_Texture = new Texture2D(selectedResolution.width, selectedResolution.height, TextureFormat.BGRA32, false);

        PhotoCapture.CreateAsync(false, OnCreatedPhotoCaptureObject);
    }

    void OnCreatedPhotoCaptureObject(PhotoCapture captureObject)
    {
        m_PhotoCaptureObj = captureObject;
        m_PhotoCaptureObj.StartPhotoModeAsync(m_CameraParameters, true, OnStartPhotoMode);
    }

    void OnStartPhotoMode(PhotoCapture.PhotoCaptureResult result)
    {
        SetupGestureRecognizer();

        Debug.Log("Ready!");
        Debug.Log("Air Tap to take a picture.");
    }

    void OnTappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        if(m_CapturingPhoto)
        {
            return;
        }

        m_CapturingPhoto = true;
        Debug.Log("Taking picture...");
        m_PhotoCaptureObj.TakePhotoAsync(OnPhotoCaptured);
    }

    void OnPhotoCaptured(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
    {
        if(m_Canvas == null)
        {
            m_Canvas = GameObject.CreatePrimitive(PrimitiveType.Quad);
            m_Canvas.name = "PhotoCaptureCanvas";
            m_CanvasRenderer = m_Canvas.GetComponent<Renderer>();
            m_CanvasRenderer.material = new Material(Shader.Find("AR/HolographicImageBlend"));
        }

        // Retrieve the matrices that describe where the photo was taken from.
        Matrix4x4 cameraToWorldMatrix;
        photoCaptureFrame.TryGetCameraToWorldMatrix(out cameraToWorldMatrix);
        Matrix4x4 worldToCameraMatrix = cameraToWorldMatrix.inverse;

        Matrix4x4 projectionMatrix;
        photoCaptureFrame.TryGetProjectionMatrix(out projectionMatrix);

        // Upload the image data to the GPU and pass everything the shader needs.
        photoCaptureFrame.UploadImageDataToTexture(m_Texture);
        m_Texture.wrapMode = TextureWrapMode.Clamp;

        m_CanvasRenderer.sharedMaterial.SetTexture("_MainTex", m_Texture);
        m_CanvasRenderer.sharedMaterial.SetMatrix("_WorldToCameraMatrix", worldToCameraMatrix);
        m_CanvasRenderer.sharedMaterial.SetMatrix("_CameraProjectionMatrix", projectionMatrix);
        m_CanvasRenderer.sharedMaterial.SetFloat("_VignetteScale", 1.0f);

        // Position the canvas object slightly in front
        // of the real world web camera.
        Vector3 position = cameraToWorldMatrix.GetColumn(3) - cameraToWorldMatrix.GetColumn(2);

        // Rotate the canvas object so that it faces the user.
        Quaternion rotation = Quaternion.LookRotation(-cameraToWorldMatrix.GetColumn(2), cameraToWorldMatrix.GetColumn(1));

        m_Canvas.transform.position = position;
        m_Canvas.transform.rotation = rotation;

        Debug.Log("Took picture!");
        m_CapturingPhoto = false;
    }
}

HolographicImageBlendShader.shader

Shader "AR/HolographicImageBlend"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _VignetteScale ("Vignette Scale", RANGE(0,2)) = 0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
            };

            struct v2f
            {
                float4 vertexPositionInProjectionSpace : SV_POSITION;
                float4 vertexInProjectionSpace : TEXCOORD0;
            };

            sampler2D _MainTex;
            float4x4 _WorldToCameraMatrix;
            float4x4 _CameraProjectionMatrix;
            float _VignetteScale;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertexPositionInProjectionSpace = mul(UNITY_MATRIX_MVP, v.vertex);

                // Calculate the vertex position in world space.
                float4 vertexPositionInWorldSpace = mul(unity_ObjectToWorld, float4(v.vertex.xyz,1));
                // Now take the world space vertex position and transform it so that
                // it is relative to the physical web camera on the HoloLens.
                float4 vertexPositionInCameraSpace = mul(_WorldToCameraMatrix, float4(vertexPositionInWorldSpace.xyz,1));

                // Convert our camera relative vertex into clip space.
                o.vertexInProjectionSpace = mul(_CameraProjectionMatrix, float4(vertexPositionInCameraSpace.xyz, 1.0));

                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Transform the vertex into normalized coordinate space.  Basically
                // we want to map where our vertex should be on the screen into the -1 to 1 range
                // for both the x and y axes.
                float2 signedUV = i.vertexInProjectionSpace.xy / i.vertexInProjectionSpace.w;

                // The HoloLens uses an additive display so the color black will
                // be transparent.  If the texture is smaller than the canvas, color the extra
                // area on the canvas black so it will be transparent on the HoloLens.
                if(abs(signedUV.x) > 1.0 || abs(signedUV.y) > 1.0)
                {
                    return fixed4( 0.0, 0.0, 0.0, 0.0);
                }

                // Currently our signedUV's x and y coordinates will fall between -1 and 1.
                // We need to map this range from 0 to 1 so that we can sample our texture.
                float2 uv = signedUV * 0.5 + float2(0.5, 0.5);
                fixed4 finalColor = tex2D(_MainTex, uv);

                // Finally add a circular vignette effect starting from the center
                // of the image.
                finalColor *= 1.0-(length(signedUV) * _VignetteScale);

                return finalColor;
            }
            ENDCG
        }
    }
}

Brandon, I've followed your example and a few others to try to use the HoloLens camera. In all cases, I seem to be getting stuck at PhotoCapture.SupportedResolutions returning an empty list. Do you have an idea what might be causing this? I'm using the latest Unity beta (24) and have asked for camera permissions.


Have you enabled both WebCam and Mic in the capabilities settings?
[Screenshot: RequiredCapabilities.png]

Yes, I have. After deploying my app, I don't see a separate entry under Camera in the settings screen listing that my app is asking for those permissions, though. When I call Resolution selectedResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First(); in the Start method, I receive a null reference exception; PhotoCapture.SupportedResolutions is returning an empty list.

By default Package.appxmanifest is in Unity's no-overwrite list, so if the capability was added after the UWP app was generated, it might not get updated. You can open Package.appxmanifest to confirm or manually add it:
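For reference, the capability entries look something like this (the exact surrounding markup will vary by project):

<Capabilities>
  <DeviceCapability Name="microphone" />
  <DeviceCapability Name="webcam" />
</Capabilities>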




Thank you Waterine, I believe you're correct; I would never have found that. This is my current XML:




I don't understand how to incorporate the C# script and shader into Unity... Do I make a blank game object or something? Could somebody please explain?


Hi cyberstorm5076!

You can create a game object in the editor like so,

[Screenshot: CreateAGameObject.png]

You can also create a sphere game object dynamically in code like so,

GameObject sphereGameObject = GameObject.CreatePrimitive(PrimitiveType.Sphere);
sphereGameObject.name = "Sphere";

The following is a good tutorial to get you up and running with writing shaders.


Shaders are just GPU programs that do certain things like modify your vertices, generate vertices, or calculate what the final pixel color should be. A Material is an asset that your artist typically creates. Think of it this way: if shaders are GPU programs, then materials are the command-line arguments that you pass into the shader. The material also stores a reference to the shader that it uses.

The following code sample is a Color Fader shader and script I wrote. The Color Fader shader will morph the color of a game object between 2 different colors based on whatever the current color morph value is. The color morph value is a variable that must be between 0.0 and 1.0. When the morph value is 0.0, the color of the game object will be whatever Color0 is. When the morph value is 1.0, the color of the game object will be whatever Color1 is.
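For what it's worth, the morph is plain linear interpolation; on the CPU side the equivalent would be Unity's built-in Color.Lerp (shown for illustration only, the real work happens in the shader below):

// Equivalent CPU-side computation of the morphed color.
Color current = Color.Lerp(Color0, Color1, ColorMorphValue);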

The ColorController script dynamically creates the material (looking up the shader by name) and then feeds your values into the shader. In this script I create the material dynamically, but you could also assign the material as a public property.

ColorController.cs

using UnityEngine;
using System.Collections;

[RequireComponent(typeof(MeshRenderer))]
[ExecuteInEditMode]
public class ColorController : MonoBehaviour
{
    public Color Color0 = Color.red;
    public Color Color1 = Color.blue;
    [Range(0.0f,1.0f)]
    public float ColorMorphValue = 0.0f;

    private Material myMaterial = null;

    void Start()
    {
        Renderer gameObjectRenderer = GetComponent<Renderer>();
        myMaterial = new Material(Shader.Find("MyCustomShader/ColorFader"));
        gameObjectRenderer.sharedMaterial = myMaterial;
    }

    void Update ()
    {
        if(myMaterial == null)
        {
            return;
        }

        // Send custom parameters from the CPU to the GPU shader.
        myMaterial.SetColor("_Color0", Color0);
        myMaterial.SetColor("_Color1", Color1);
        myMaterial.SetFloat("_ColorMorphValue", ColorMorphValue);
    }
}

ColorFader.shader

Shader "MyCustomShader/ColorFader"
{
    Properties
    {
        _Color0 ("Color 0", COLOR) = (1.0, 1.0, 1.0, 1.0)
        _Color1 ("Color 1", COLOR) = (0.0, 0.0, 0.0, 1.0)
        _ColorMorphValue ( "Color Morph Value", FLOAT) = 0.0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float3 color : COLOR;
            };

            float3 _Color0;
            float3 _Color1;
            float _ColorMorphValue;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                o.color = (1.0-_ColorMorphValue) * _Color0 + (_ColorMorphValue * _Color1);
                return o;
            }

            float4 frag (v2f i) : SV_Target
            {
                return float4(i.color, 1.0);
            }
            ENDCG
        }
    }
}

If you would rather your ColorController script reference a premade material asset, then you will need to create the material asset like so,
[Screenshot: CreateAMaterial.png]

Then you will need to assign the correct shader to the material like so,
[Screenshot: AssignAMaterial.png]

Finally, drag and drop the material onto your game object, then drag and drop the modified ColorController script onto it as well.

ColorController.cs - References an Assigned Material Asset

using UnityEngine;
using System.Collections;

[RequireComponent(typeof(MeshRenderer))]
[ExecuteInEditMode]
public class ColorController : MonoBehaviour
{
    public Color Color0 = Color.red;
    public Color Color1 = Color.blue;
    [Range(0.0f,1.0f)]
    public float ColorMorphValue = 0.0f;

    void Update ()
    {
        Renderer myRenderer = this.GetComponent<Renderer>();

        // Send custom parameters from the CPU to the GPU shader.
        myRenderer.sharedMaterial.SetColor("_Color0", Color0);
        myRenderer.sharedMaterial.SetColor("_Color1", Color1);
        myRenderer.sharedMaterial.SetFloat("_ColorMorphValue", ColorMorphValue);
    }
}

In order to use the HoloLensSnapshotTest script, just drag and drop it onto your camera game object.

I hope that helps!


@BrandonFogerty thank you so much, this was a very detailed explanation and thoughtful of you :)

@BrandonFogerty Thank you so much. I'm able to use your shader to align the webcam result on the HoloLens. The downside is that playing the webcam texture drops the frame rate from 60 fps to 15 fps. Does anyone know how we can get the webcam frame data while still maintaining a high fps?

Hi @yjlin5210

I am glad I could help! Anytime Mixed Reality Capture is used on the HoloLens, the fps will automatically drop to 30 fps as the price of doing business. However, I was unaware that it was dropping lower than 30 fps. Do you have a minimal repro project that I could take a look at? Either way, I will look into this further. Thanks for bringing this to my attention!

@BrandonFogerty
All I do is

WebCamTexture back;
back = new WebCamTexture(WebCamTexture.devices[0].name);
back.Play();

and the frame rate drops to 15 fps. If I bring the size down to 640x360 or lower:

back = new WebCamTexture(WebCamTexture.devices[0].name, 320, 180);
back.Play();

the frame rate will be around 20 fps.

Here are some other people who get the same results as I do.
http://forums.hololens.com/discussion/comment/6668/#Comment_6668

Here is the video reference

https://www.youtube.com/watch?v=I5dTE8pAwDs

After updating the OS on the HoloLens, the frame rate is jumping around between 20 and 30 fps. Most of the time it is around 25.

Hi @yjlin5210 ,

Have you tried using the PhotoCapture api to take photos?

@BrandonFogerty

When using PhotoCapture, the game's frame rate runs at around 40 fps or higher, and the placement of the image is much more stable. However, the RGB camera's update rate is way slower, probably around 5 fps.

Another question: if I am interested in some of the points on the texture and want to project a point into world space (assume z is a fixed distance, such as 100), how should I do that?

Right now I'm trying:

Vector3 poiPoint = new Vector3(point2D.x, point2D.y, 100); // point2D is a 2D vector in the RGB camera space
Matrix4x4 inverseMVP = (projectionMatrix * worldToCameraMatrix).inverse; // projectionMatrix and worldToCameraMatrix come from the PhotoCapture frame
Vector3 poiPointInWorld = inverseMVP.MultiplyPoint3x4(poiPoint);

but the result is wrong.

I think I found the problem: the coordinate system of OpenCV's Mat is different from Texture2D's.
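
For anyone else attempting this, here is a rough sketch of one way to unproject a photo pixel (untested; the helper name and parameters are mine, not part of the API). It maps the pixel into the -1..1 range, solves the projection equations for the camera-space position at a fixed depth, and transforms that into world space. Remember to flip y first (e.g. pixelY = imageHeight - pixelY) if the pixel came from an OpenCV Mat, since its origin is top-left while Texture2D's is bottom-left:

using UnityEngine;

public static class PhotoUnprojection
{
    // Hypothetical helper: maps a pixel in the captured photo to the
    // world-space point 'distance' meters in front of the physical camera.
    public static Vector3 PixelToWorld(float pixelX, float pixelY,
                                       float imageWidth, float imageHeight,
                                       float distance,
                                       Matrix4x4 projectionMatrix,    // from TryGetProjectionMatrix
                                       Matrix4x4 cameraToWorldMatrix) // from TryGetCameraToWorldMatrix
    {
        // Map the pixel into the -1..1 range used by the projection matrix.
        float ndcX = (pixelX / imageWidth) * 2.0f - 1.0f;
        float ndcY = (pixelY / imageHeight) * 2.0f - 1.0f;

        // For a standard projection matrix, ndc.x = (m00*x + m02*z) / -z
        // (and similarly for y), so with z = -distance we can solve for x and y.
        float camX = distance * (ndcX + projectionMatrix.m02) / projectionMatrix.m00;
        float camY = distance * (ndcY + projectionMatrix.m12) / projectionMatrix.m11;

        // The camera looks down -z in camera space.
        Vector3 pointInCameraSpace = new Vector3(camX, camY, -distance);

        // Finally, transform from camera space into world space.
        return cameraToWorldMatrix.MultiplyPoint3x4(pointInCameraSpace);
    }
}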

@BrandonFogerty - This photo Blending that you made is cool!

I've been trying to take what you have here and map the photo to the spatial mesh, using SpatialMappingManager.Instance.SetSurfaceMaterial(new Material(Shader.Find("AR/HolographicImageBlend"))); in OnPhotoCaptured. The problem is that the mesh just turns all white. Do I need to set UVs on the mesh, or is there something else I'm missing? What do you recommend?

The goal here is to map the photo as a texture onto the spatial mesh.

Thanks

Hi @acylum ,

The Holographic Image Blend shader requires that the mesh it is applied to contain texture coordinates. However, the spatial mapping mesh only contains vertices and indices. That is why your spatial mapping mesh appears white. You can add your own UVs to the spatial mapping mesh, but that may be a bit complicated depending on what you are trying to do.
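
For anyone who wants to try it anyway, here is a rough sketch of one approach (untested; it assumes you can get at each surface's Mesh and the matrices captured with the photo). It runs the same transform the blend shader performs, but per vertex on the CPU, and stores the result as UVs, so you could then texture the mesh with a plain unlit shader:

using UnityEngine;

public static class SpatialMeshPhotoUVs
{
    // Hypothetical helper: projects each vertex of a spatial mapping mesh
    // into the captured photo and stores the result as UVs.
    public static void ApplyPhotoUVs(MeshFilter meshFilter,
                                     Matrix4x4 worldToCameraMatrix,
                                     Matrix4x4 projectionMatrix)
    {
        Mesh mesh = meshFilter.mesh;
        Vector3[] vertices = mesh.vertices;
        Vector2[] uvs = new Vector2[vertices.Length];
        Matrix4x4 localToWorld = meshFilter.transform.localToWorldMatrix;

        for (int i = 0; i < vertices.Length; i++)
        {
            // Same math as the HolographicImageBlend shader, done per vertex:
            // local -> world -> camera -> clip space.
            Vector3 worldPos = localToWorld.MultiplyPoint3x4(vertices[i]);
            Vector3 cameraPos = worldToCameraMatrix.MultiplyPoint3x4(worldPos);
            Vector4 clipPos = projectionMatrix * new Vector4(cameraPos.x, cameraPos.y, cameraPos.z, 1.0f);

            // Perspective divide gives -1..1; remap to the 0..1 UV range.
            // Vertices outside the photo frustum (or behind the camera,
            // clipPos.w < 0) land outside 0..1 and should be masked or clamped.
            Vector2 signedUV = new Vector2(clipPos.x, clipPos.y) / clipPos.w;
            uvs[i] = signedUV * 0.5f + new Vector2(0.5f, 0.5f);
        }

        mesh.uv = uvs;
    }
}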