Rendering GMSL Camera in Unity

How can I render GMSL cameras, along with the lens distortion that occurs in the camera, using Unity’s native camera component? For example, please see the attached image and camera spec sheet. I have the distortion parameter values below.

    public float fx = 1386.45f; // Focal length in x-direction
    public float fy = 1410.01f; // Focal length in y-direction
    public float cx = 647.32f;  // Principal point offset in x-direction
    public float cy = 388.73f;  // Principal point offset in y-direction
    public float k1 = -0.56945845f; // Distortion parameter k1
    public float k2 = 0.36552985f;  // Distortion parameter k2
    public float p1 = -0.000267616527f; // Distortion parameter p1
    public float p2 = -0.0017359377f;   // Distortion parameter p2
    public float k3 = -0.184636924f;    // Distortion parameter k3

To be more specific, I want to reproduce the distortion seen in the attached image without applying post-processing. I will need to create/modify the current camera in Unity to take the distortion parameters and render the view accordingly.
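For reference, parameters named fx/fy/cx/cy with k1–k3 and p1/p2 like these follow the standard Brown–Conrady model (the one OpenCV calibration produces). A minimal Python sketch of the forward mapping, using the values above, which you would eventually port into a shader:

```python
# Forward Brown-Conrady distortion: undistorted normalized camera
# coordinates -> distorted pixel coordinates.
fx, fy = 1386.45, 1410.01   # focal lengths (pixels)
cx, cy = 647.32, 388.73     # principal point (pixels)
k1, k2, k3 = -0.56945845, 0.36552985, -0.184636924   # radial terms
p1, p2 = -0.000267616527, -0.0017359377              # tangential terms

def distort(x, y):
    """Map an undistorted normalized point (x, y) to distorted pixel coords."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + cx, fy * yd + cy

print(distort(0.0, 0.0))  # the principal point maps to itself: (647.32, 388.73)
```

With the negative k1 dominating, points are pulled toward the image center, i.e. barrel distortion, which matches a wide-angle GMSL lens.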

9751279–1395580–camera-sensor-LI-OV10635-GMSL_datasheet.pdf (1.04 MB)

Either by using physical camera settings, or maybe by adjusting the projection matrix. However, I believe a true fisheye is impossible with linear projection. There are also vertex displacement methods in shaders that apply the fisheye warp, though you need highly tessellated geometry for it to look any good.

Thank you for your reply. I tried using the physical camera settings, but they don’t have attribute fields that would let me input the camera distortion parameters. Will I need to create a custom shader?

Hi!

Yes, I think the best way is to write your own shader that processes your overlays (like parking lines) according to the distortion of the camera.

There are two ways to do this and the math you’d need would be the same in both cases if the distortion follows something that can be expressed as a mathematical function.

  • Do it in the fragment shader.

      • This is more expensive because the calculations run every frame and you need a large buffer, but it allows you to distort complex graphics.

      • You’d render your overlay into a render texture that contains the undistorted “perfect” view.

      • Then, over your camera view, you’d overlay a quad that uses that texture along with a custom shader implementing your distortion function.

      • Instead of using the UV coordinates directly for the lookup, the coordinates would be shifted according to the distortion function.

  • Do it in the vertex shader.

      • This is “cheaper” because you only need to calculate distortion on a per-vertex basis and don’t need an additional buffer, but you’ll need to make sure you have enough vertices, as Pookshank said.

      • You’d first calculate the undistorted screen positions of your vertices normally (using UnityObjectToClipPos() to project the vertices from 3D space to their on-screen positions).

      • Instead of using the result of this transformation directly as the screen position of the vertices, you’d distort its X/Y as a function of the screen position.
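The vertex route’s last step can be modeled outside a shader. A hypothetical Python sketch (the function name and the simple barrel falloff are mine, not Unity’s; the real warp would come from your camera’s parameters):

```python
import math

def distort_clip_xy(clip_x, clip_y, clip_w, strength=0.25):
    """Radial warp applied after projection, per vertex.

    The projected position is divided by w to reach normalized device
    coordinates, pushed toward the center based on its distance from
    the center, then re-multiplied by w so the GPU's own perspective
    divide still produces the warped position.
    """
    ndc_x, ndc_y = clip_x / clip_w, clip_y / clip_w
    r = math.sqrt(ndc_x * ndc_x + ndc_y * ndc_y)
    scale = 1.0 - strength * r * r        # simple barrel-style falloff
    return ndc_x * scale * clip_w, ndc_y * scale * clip_w

print(distort_clip_xy(0.5, 0.0, 1.0))  # (0.46875, 0.0): pulled toward center
```

In an actual vertex shader this would run on the output of UnityObjectToClipPos() before returning the clip-space position.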

I would recommend starting with the Vertex version, for performance reasons.

Thank you so much for the kind reply. Are there any tutorials or documentation I can refer to in order to understand the implementation? I am not an expert in Unity, so some resources would surely help.

I made an example (attached). Get ready for some vertex shader fun!

Usage:

The ParkingLines scene contains a ParkingLines object. The ParkingView Script on the ParkingLines object has a trackWidth property that is just a helper to position the tracks based on the vehicle’s track width in the editor. Under this, you can find two instances of the “Track” prefab, referenced in the ParkingView script.

The “Steering” Property works in play mode and can be used to set the steering direction every frame.
There is also a “Test” object that you can turn on using the checkbox in the inspector next to the object name. It’s just a simple scene I used to test the distortion. The lines on the test objects don’t look amazing because there isn’t enough geometry to distort, but it’s sufficient for testing purposes.

Explanation of how it works:

The Track prefab uses a ParkingLines script that generates a simple, distortable mesh that looks like a ladder. You can experiment with the “sections” property of the ParkingLines script, which lets you change the number of subdivisions the mesh has.
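The idea of such a generated ladder mesh can be sketched in a few lines (this is an analogy in Python, not the actual ParkingLines script; `sections` plays the same role as the script’s property):

```python
def ladder_vertices(width, length, sections):
    """Generate a (sections + 1) x 2 grid of (x, z) rail positions:
    two rails running along +z, subdivided so a vertex shader has
    enough points to bend and distort smoothly."""
    verts = []
    for i in range(sections + 1):
        z = length * i / sections
        verts.append((-width / 2, z))   # left rail
        verts.append((+width / 2, z))   # right rail
    return verts

print(len(ladder_vertices(0.2, 5.0, 10)))  # 22 vertices
```

The more sections, the smoother the bent and distorted line looks, at the cost of more vertices.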

The ParkingLine material uses the ParkingLine shader. It has some properties you can modify in the editor. When Play mode starts, this material is instantiated for each track, so we have two tracks with different material properties. You can, however, edit the width, length, color and distortion properties to your liking since these aren’t changed during runtime via script.

The ParkingLine shader does the following things with that mesh that standard unlit shaders don’t do:

  • It takes the generated mesh and bends it to the left or right based on the steering value. This is happening in world-space coordinates.
  • It uses a custom Distort2D function after the transformation from object-space to screen-space coordinates has happened. This function comes from the Distort.cginc file I also added. The reason I put it in an include file is that this way I can test the same function with both my line and my test shaders.
  • It generates a gradient between two color properties along the length of the line. That one is easy.

For your convenience, I marked the place in the Distort2D function where the actual screen-space distortion happens. In my example, I’m simply shifting vertices based on their distance to the center (so the distortion gets greater toward the edges). If you want to go down this vertex route, you’ll need to replace this with code tweaked for your specific camera’s properties.

9763246--1398472--LineGif.gif

This will work for simple distortion. If you have very complex distortion parameters, we’ll need to render the lines into a texture and then distort that based on a “baked” distortion texture. That is much heavier on the GPU, though; let’s see if the vertex solution is enough.
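Baking such a lookup could be sketched like this (a hypothetical precomputation; the `strength * r2` warp is a stand-in for the real camera model, and each texel stores the source UV the fragment shader would sample):

```python
def bake_distortion_table(size, strength=0.2):
    """Precompute a size x size table mapping each output UV to the
    source UV to sample, so the per-frame shader work is reduced to a
    single texture lookup per fragment."""
    table = []
    for j in range(size):
        row = []
        for i in range(size):
            # Center UVs around (0, 0), spanning [-0.5, 0.5].
            u = i / (size - 1) - 0.5
            v = j / (size - 1) - 0.5
            r2 = u * u + v * v
            scale = 1.0 + strength * r2   # stand-in warp, not the real model
            row.append((u * scale + 0.5, v * scale + 0.5))
        table.append(row)
    return table

table = bake_distortion_table(64)
print(table[32][32])  # near the center the UV is almost unchanged
```

In Unity this table would be written into a two-channel float texture once, and the fragment shader would sample it to find where in the overlay render texture to read.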

I tested this on an embedded chipset today and realized that it doesn’t work the same out of the box.

The reason is that on the embedded chipset, clip space is defined a little differently: Y is inverted and the depth range is different.

Here’s an updated version with a shader that works the same on desktop and embedded/mobile. You can see the UNITY_UV_STARTS_AT_TOP and UNITY_Z_0_FAR_FROM_CLIPSPACE macros I used.

9767670–1399422–distortedParkingLines_mobileFix.unitypackage (27 KB)