LineRenderer2D: GPU pixel-perfect 2D line renderer for Unity URP (2D Renderer)

Code repository: GitHub - QThund/LineRenderer2D: Scripts for rendering pixel-perfect GPU-calculated lines in 2D (Unity URP 2D Renderer).
More on Twitter: @JailbrokenGame
Other code I shared:
Script for generating ShadowCaster2Ds for Tilemaps
Delaunay Triangulation with constrained edges
Target sorting layers as assets

I re-posted this thread on my website: LineRenderer2D: GPU pixel-perfect 2D line renderer for Unity URP (2D Renderer)[Repost] - Jailbroken

Hi everybody, I have been refactoring and improving an old piece of code I wrote years ago and adapting it to the new render pipeline. I think this is the kind of feature that should ship with Unity, since many people who develop 2D games need it at some point. So I decided to write an article to share my implementations with you, in case you find them useful. I first wrote it in a document outside this forum and used some background colors in the code snippets; that is why I had to write the color word in some sections, sorry for that.

  • Introduction
  • Vectorial solution
  • Bresenham solution
  • Line strips drawing
  • Optimizations

Introduction

Unity provides developers with a great line rendering tool which basically generates a 3D mesh that faces the camera. This is enough for most games but, if you want to create 2D games based on pixel-art aesthetics, “perfect” lines do not fit with the rest of the sprites, especially if the size of the pixels in those sprites does not match the size of the pixels of the screen. You will need lines that fulfill one main rule: each pixel may have a neighbor either in the same column or in the same row, but not in both. Unity does not help in this case; you need to work on your own solution.

There are several alternatives. You can just draw the line into a sprite, which will look awful if you rotate it. You can use a texture and change it dynamically, drawing the line on the CPU side with C#, using the SetPixels method and the Bresenham algorithm, which can be slow and is limited by the size of the texture (although it allows resizing the sprite to achieve whatever line thickness you need); a quick sketch of this approach is shown below. Or you can use a shader on the GPU with either vector algebra plus some “magic” or a modified version of the Bresenham algorithm, as I am going to explain here.
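For illustration, here is a minimal sketch of that CPU-side alternative (the classic integer Bresenham plotted with Texture2D.SetPixel; names and structure are mine, not part of this article's solution):

// Sketch: drawing a 1-pixel-thick line into a texture on the CPU.
// Works in all octants; based on the classic integer Bresenham algorithm.
void DrawLine(Texture2D texture, int x0, int y0, int x1, int y1, Color color)
{
    int dx = Mathf.Abs(x1 - x0), stepX = x0 < x1 ? 1 : -1;
    int dy = -Mathf.Abs(y1 - y0), stepY = y0 < y1 ? 1 : -1;
    int error = dx + dy;

    while (true)
    {
        texture.SetPixel(x0, y0, color);

        if (x0 == x1 && y0 == y1)
            break;

        int doubledError = 2 * error;

        if (doubledError >= dy)
        {
            error += dy;
            x0 += stepX;
        }

        if (doubledError <= dx)
        {
            error += dx;
            y0 += stepY;
        }
    }

    texture.Apply(); // upload the modified pixels to the GPU
}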

Both shading methods have the following inputs in common:

  • Current screen pixel position.
  • The position of both line endpoints, in screen space.
  • The color of the line.
  • The line thickness.
  • The position of the origin (0, 0), in screen space (for screen adjustment purposes).

In Unity, we need just 1 sprite in the scene with any texture (it can be a 1-pixel-wide repeating texture), a material with a shader (made in Shadergraph, in this case) and a C# script that fills the parameters of the shader in the OnWillRenderObject event. Since we are using a sprite and Shadergraph with the 2D Renderer, it works with both the 2D sorting system and the 2D lighting system. The C# script must contain something like this:

protected virtual void OnWillRenderObject()
{
    // Project both endpoints to screen space and snap them to whole pixels
    Vector2 pointA = m_camera.WorldToScreenPoint(Points[0]);
    Vector2 pointB = m_camera.WorldToScreenPoint(Points[1]);
    pointA = new Vector2(Mathf.Round(pointA.x), Mathf.Round(pointA.y));
    pointB = new Vector2(Mathf.Round(pointB.x), Mathf.Round(pointB.y));

    // World origin in screen space, used by the shader for screen adjustment
    Vector2 origin = m_camera.WorldToScreenPoint(Vector2.zero);
    origin = new Vector2(Mathf.Round(origin.x), Mathf.Round(origin.y));

    m_Renderer.material.SetVector("_Origin", origin);
    m_Renderer.material.SetVector("_PointA", pointA);
    m_Renderer.material.SetVector("_PointB", pointB);
}
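For context, the snippet above assumes the component declares something like the following fields (the names match the snippet; the exact declarations in the repository may differ):

[SerializeField] protected Camera m_camera;     // the camera that renders the line
[SerializeField] protected Renderer m_Renderer; // the renderer of the 1-pixel sprite
public Vector2[] Points = new Vector2[2];       // both endpoints, in world space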

Vectorial solution

The vectorial solution is not perfect, but it is the fastest. The main idea is to calculate the distance from a point on the screen to the line defined by the two endpoints; if that distance is less than or equal to half the thickness of the line, the screen point is colored.
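Expressed in C# for readability, the core test could be sketched like this (just an illustration; the actual HLSL version, including the correction factor, appears below):

// Sketch: is point p closer than maxDistance to the segment defined by a and b?
static bool IsNearSegment(Vector2 p, Vector2 a, Vector2 b, float maxDistance)
{
    Vector2 ab = b - a;
    Vector2 ap = p - a;

    // Normalized projection of ap onto ab; 0 at a, 1 at b
    float t = Vector2.Dot(ap, ab) / Vector2.Dot(ab, ab);

    if (t < 0.0f || t > 1.0f)
        return false; // the projection falls outside the segment

    Vector2 projected = a + t * ab;
    return Vector2.Distance(p, projected) <= maxDistance;
}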

The main problem of this approach is that the screen is not composed of infinite points; it is a grid whose rows and columns depend on the resolution and the physical screen. If we want to draw a line whose thickness is 1 pixel, we cannot simply compare the distance from the point to the line against 0.5, because that would color every pixel crossed by the imaginary line, making some parts of the line look wider.

We need to find a way to compare distances that gives us the appropriate points to color. I have to be honest: I am not a mathematician and did not have enough time to analyze the values and find the best method to calculate the adjustment factor, so I found some constants by trial and error, based on one assumption: the slope of the line seems to be related to the distance to compare, and that distance is inversely proportional to how close the slope is to 45°. This relation is not exact, so erroneous results are unavoidable with this method. The constant values I discovered were:

fBaseTolerance (minimum distance in any case): 0.3686
fToleranceMultiplier (applied depending on the slope): 0.34935

#define M_PI 3.1415926535897932384626433832795

vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
vEndpointA = round(vEndpointA);
vEndpointB = round(vEndpointB);

// The tolerance is bigger as the slope of the line gets closer to either of the 2 axes
float2 normalizedAbsNextToPrevious = normalize(abs(vEndpointA - vEndpointB));
float maxValue = max(normalizedAbsNextToPrevious.x, normalizedAbsNextToPrevious.y);
float minValue = min(normalizedAbsNextToPrevious.x, normalizedAbsNextToPrevious.y);
float inverseLerp = 1.0f - minValue / maxValue;

outDistanceCorrection = fBaseTolerance + fToleranceMultiplier * abs(inverseLerp);

Once we have the distance correction factor, we calculate whether the current screen point is close enough to the imaginary line. There are 2 corner cases, when the line is either completely horizontal or completely vertical; in those cases an offset is added just to avoid the round numbers that produce bad results (a bolder line).

YELLOW

// The amount of pixels the camera has moved within a thickness-wide block of pixels
vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
vOrigin = round(vOrigin);

// This moves the line N pixels; it is necessary because the camera moves 1 pixel at a time and the line may be wider than 1 pixel,
// so the line does not jump from one thickness-wide block to the next; instead it moves smoothly, pixel by pixel
vPointP += float2(fThickness, fThickness) - vOrigin;
vEndpointA += float2(fThickness, fThickness) - vOrigin;
vEndpointB += float2(fThickness, fThickness) - vOrigin;

BLUE

vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
vEndpointA = round(vEndpointA);
vEndpointB = round(vEndpointB);
vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
vPointP = round(vPointP);
const float OFFSET = 0.055f;

// There are 2 corner cases: when the line is perfectly horizontal and when it is perfectly vertical
// They cause a glitch that makes the line fatter
if(vEndpointA.x == vEndpointB.x)
{
    vEndpointA.x -= OFFSET;
}

if(vEndpointA.y == vEndpointB.y)
{
    vEndpointA.y -= OFFSET;
}

float2 ab = vEndpointB - vEndpointA;
float dotSqrAB = dot(ab, ab);

float2 ap = vPointP - vEndpointA;
float dotAP_AB = dot(ap, ab);
float normProjectionLength = dotAP_AB / dotSqrAB;

float projectionLength = dotAP_AB / length(ab);
float2 projectedP = normalize(ab) * projectionLength;

bool isBetweenAandB = (normProjectionLength >= 0.0f && normProjectionLength <= 1.0f);
float distanceFromPToTheLine = length(ap - projectedP);

outIsPixelInLine = isBetweenAandB && distanceFromPToTheLine < fThickness * fDistanceCorrection;

In the blue part of the source code you can see how every input point is adjusted to the bottom-left position of the block it belongs to. For example, if the line has a thickness of 4 pixels, the screen is divided by an imaginary grid whose cells occupy 4x4 pixels; a point at [7.2, 3.4] is moved to the position [4, 0]. In the following image, dark squares represent the bottom-left corner of each 4x4 block and green squares are the pixels that are actually near the line and are treated as if they were at each corner.

This subtract-modulo operation is what makes the line be drawn with the desired thickness. The round operation avoids a jittering effect produced by floating-point imprecision.
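As a worked example in C# terms (just an illustration of the modulo-and-round pair, not code from the repository):

float thickness = 4.0f;
Vector2 point = new Vector2(7.2f, 3.4f);

// Snap the point to the bottom-left corner of its 4x4 block
Vector2 snapped = point - new Vector2(point.x % thickness, point.y % thickness);
snapped = new Vector2(Mathf.Round(snapped.x), Mathf.Round(snapped.y)); // -> (4, 0)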

Since the camera can move 1 pixel at a time and the thickness of the line may be greater than 1 pixel, an undesired visual effect occurs: the line does not follow the camera pixel by pixel; it abruptly jumps to the next block of pixels once the camera displacement exceeds the thickness of the line. To fix this problem we have to subtract the displacement of the camera inside a block (from 0 to 3, if the thickness is 4 pixels) from the position of every evaluated point (vPointP). In the source code, the yellow part uses an input point (vOrigin), the position [0, 0] in world space transformed to screen space, to calculate the number of pixels the camera has moved both vertically and horizontally: the modulo of the position with respect to the thickness gives the camera offset inside a block of pixels, and the difference between the thickness and that offset is added to every point. For example, with a thickness of 4 and the origin projected to the screen pixel [9, 2], vOrigin becomes [1, 2] and every point is shifted by [3, 2].

Here we can see the results of this algorithm, setting the thickness to 4 pixels:

Bresenham solution

This solution uses the Bresenham algorithm, so the result is perfect, but the calculation is more expensive than the vectorial solution. For each pixel occupied by the sprite rectangle, the algorithm walks the line from the beginning to the end; if the current point of the line coincides with the screen position being evaluated, the line color is used and the loop stops; otherwise the entire line is checked and the time is wasted (the background color is used instead).

The same adjustment is applied to the input points as in the vectorial solution (the yellow and blue parts in the source code). The Bresenham implementations one can find out there use an increment of 1 to select the next pixel to be evaluated; in this version, the increment equals the thickness of the line.

YELLOW

// The amount of pixels the camera has moved within a thickness-wide block of pixels
vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
vOrigin = round(vOrigin);

// This moves the line N pixels; it is necessary because the camera moves 1 pixel at a time and the line may be wider than 1 pixel,
// so the line does not jump from one thickness-wide block to the next; instead it moves smoothly, pixel by pixel
vPointP += float2(fThickness, fThickness) - vOrigin;
vEndpointA += float2(fThickness, fThickness) - vOrigin;
vEndpointB += float2(fThickness, fThickness) - vOrigin;

BLUE

// This fixes every point to the bottom-left corner of the thickness-wide block it belongs to, so all pixels inside the block are considered the same
// If the block has to be colored, then all the pixels inside are colored
vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
vEndpointA = round(vEndpointA);
vEndpointB = round(vEndpointB);
vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
vPointP = round(vPointP);
// BRESENHAM ALGORITHM
// Modified to allow different thicknesses and to tell the shader whether the current pixel belongs to the line or not

int x = vEndpointA.x;
int y = vEndpointA.y;
int x2 = vEndpointB.x;
int y2 = vEndpointB.y;
int pX = vPointP.x;
int pY = vPointP.y;
int w = x2 - x;
int h = y2 - y;
int dx1 = 0, dy1 = 0, dx2 = 0, dy2 = 0;

if (w < 0)
{
    dx1 = -fThickness;
}
else if (w > 0)
{
    dx1 = fThickness;
}

if (h < 0)
{
    dy1 = -fThickness;
}
else if (h > 0)
{
    dy1 = fThickness;
}

if (w < 0)
{
    dx2 = -fThickness;
}
else if (w > 0)
{
    dx2 = fThickness;
}

int longest = abs(w);
int shortest = abs(h);

if (longest <= shortest)
{
    longest = abs(h);
    shortest = abs(w);

    if (h < 0)
    {
        dy2 = -fThickness;
    }
    else if (h > 0)
    {
        dy2 = fThickness;
    }
 
    dx2 = 0;
}

int numerator = longest >> 1;

outIsPixelInLine = false;

for (int i = 0; i <= longest; i += fThickness)
{
    if(x == pX && y == pY)
    {
        outIsPixelInLine = true;
        break;
    }

    numerator += shortest;

    if (numerator >= longest)
    {
        numerator -= longest;
        x += dx1;
        y += dy1;
    }
    else
    {
        x += dx2;
        y += dy2;
    }
}

Here we can see the results of this algorithm, setting the thickness to 4 pixels:

Line strips drawing

If we want to draw multiple concatenated lines we could create multiple instances of the line renderer and bind their endpoints somehow, but there are cheaper ways to achieve line strips rendering to represent, for example, a rope.

If we were using ordinary shaders we could send a vector array with all the points of the line to be processed but, unfortunately, Shadergraph does not allow arrays as input parameters for now. A workaround is sending a 1D texture, which is not supported either, so we have to use a 2D texture whose height is 1 texel and whose width equals the amount of points. Every time the position of the points changes, the texture has to be updated. Note that this is not the sprite's main texture; we are talking about an additional texture. Regarding the format of the points texture, it is necessary to use a non-normalized one, for example TextureFormat.RGBAFloat (R32G32B32A32F); otherwise a loss of resolution occurs and the points jitter on the screen. We also need to know the amount of points and the way the texture is to be sampled, so do not forget to pass in both parameters, the float and the sampler state.
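A minimal sketch of how such a texture could be created and refreshed from C# (hypothetical names; see MultiLineRenderer2D in the repository for the real code):

// One texel per point; the non-normalized float format preserves precision
Texture2D pointsTexture = new Texture2D(points.Count, 1, TextureFormat.RGBAFloat, false);
pointsTexture.filterMode = FilterMode.Point;

// Refresh whenever any point moves
Color[] texels = new Color[points.Count];

for (int i = 0; i < points.Count; ++i)
{
    texels[i] = new Color(points[i].x, points[i].y, 0.0f, 0.0f);
}

pointsTexture.SetPixels(texels);
pointsTexture.Apply(); // upload the changes to the GPU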

Once we have the data available in our shader, we have to iterate through the array, which means enclosing the Bresenham implementation explained previously in a for loop, sampling the points texture and picking an endpoint A and an endpoint B for each line segment. When all the point pairs have been processed, the loop ends. This way we use only one texture, one sprite and one material.

void IsPixelInLine_float(float fThickness, float2 vPointP, Texture2D tPackedPoints, SamplerState ssArraySampler, float fPackedPointsCount, float fPointsCount, out bool outIsPixelInLine)
{
    // Origin in screen space
    float4 projectionSpaceOrigin = mul(UNITY_MATRIX_VP, float4(0.0f, 0.0f, 0.0f, 1.0f));
    float2 vOrigin = ComputeScreenPos(projectionSpaceOrigin, -1.0f).xy * _ScreenParams.xy;

    // The amount of pixels the camera has moved within a thickness-wide block of pixels
    vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
    vOrigin = round(vOrigin);

    // This moves the line N pixels; it is necessary because the camera moves 1 pixel at a time and the line may be wider than 1 pixel,
    // so the line does not jump from one thickness-wide block to the next; instead it moves smoothly, pixel by pixel
    vPointP += float2(fThickness, fThickness) - vOrigin;

    vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
    vPointP = round(vPointP);

    int pointsCount = round(fPointsCount);

    outIsPixelInLine = false;
 
    for(int t = 0; t < pointsCount - 1; ++t)
    {
        float4 packedPoints = tPackedPoints.Sample(ssArraySampler, float2(float(t / 2) / fPackedPointsCount, 0.0f));
        float4 packedPoints2 = tPackedPoints.Sample(ssArraySampler, float2(float(t / 2 + 1) / fPackedPointsCount, 0.0f));
 
        float2 worldSpaceEndpointA = fmod(t, 2) == 0 ? packedPoints.rg : packedPoints.ba;
        float2 worldSpaceEndpointB = fmod(t, 2) == 0 ? packedPoints.ba : packedPoints2.rg;
        float4 projectionSpaceEndpointA = mul(UNITY_MATRIX_VP, float4(worldSpaceEndpointA.x, worldSpaceEndpointA.y, 0.0f, 1.0f));
        float4 projectionSpaceEndpointB = mul(UNITY_MATRIX_VP, float4(worldSpaceEndpointB.x, worldSpaceEndpointB.y, 0.0f, 1.0f));
 
        // Endpoints in screen space
        float2 vEndpointA = ComputeScreenPos(projectionSpaceEndpointA, -1.0f).xy * _ScreenParams.xy;
        float2 vEndpointB = ComputeScreenPos(projectionSpaceEndpointB, -1.0f).xy * _ScreenParams.xy;

        vEndpointA = round(vEndpointA);
        vEndpointB = round(vEndpointB);
 
        vEndpointA += float2(fThickness, fThickness) - vOrigin;
        vEndpointB += float2(fThickness, fThickness) - vOrigin;

        vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
        vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
        vEndpointA = round(vEndpointA);
        vEndpointB = round(vEndpointB);
 
        int x = vEndpointA.x;
        int y = vEndpointA.y;
        int x2 = vEndpointB.x;
        int y2 = vEndpointB.y;
        int pX = vPointP.x;
        int pY = vPointP.y;
        int w = x2 - x;
        int h = y2 - y;
        int dx1 = 0, dy1 = 0, dx2 = 0, dy2 = 0;

        if (w<0) dx1 = -fThickness ; else if (w>0) dx1 = fThickness;
        if (h<0) dy1 = -fThickness ; else if (h>0) dy1 = fThickness;
        if (w<0) dx2 = -fThickness ; else if (w>0) dx2 = fThickness;

        int longest = abs(w);
        int shortest = abs(h);

        if (longest <= shortest)
        {
            longest = abs(h);
            shortest = abs(w);

            if (h < 0)
                dy2 = -fThickness;
            else if (h > 0)
                dy2 = fThickness;
 
            dx2 = 0;
        }

        int numerator = longest >> 1;

        for (int i=0; i <= longest; i+=fThickness)
        {
            if(x == pX && y == pY)
            {
                outIsPixelInLine = true;
                break;
            }

            numerator += shortest;

            if (numerator >= longest)
            {
                numerator -= longest;
                x += dx1;
                y += dy1;
            }
            else
            {
                x += dx2;
                y += dy2;
            }
        }
    }
}

Note: this version already includes some of the additional optimizations described in the next section.

Optimizations
Sprite size fitting

In order to avoid shading unnecessary pixels, the drawing area should be as small as possible. This area is defined by the sprite in the scene. If a 1x1-pixel texture is used (with its pivot at the top-left corner), then the width and height of the sprite match its scale and the calculations are simpler.

Every time the position of the points changes, the position and scale of the sprite change too. We only need to calculate the bounding box that contains the points of the line and expand it by as many pixels as the thickness of the line, so that pixel blocks greater than 1 pixel are not cut off.
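A rough sketch of that fitting step, assuming the points are stored in world space and the sprite pivot is at the top-left corner (worldUnitsPerPixel is an assumed helper value, not something from the repository):

// Bounding box of all the points
Vector2 min = points[0];
Vector2 max = points[0];

for (int i = 1; i < points.Count; ++i)
{
    min = Vector2.Min(min, points[i]);
    max = Vector2.Max(max, points[i]);
}

// Expand by the line thickness so thickness-wide blocks are not cut off
float margin = thickness * worldUnitsPerPixel;
min -= new Vector2(margin, margin);
max += new Vector2(margin, margin);

// With a 1x1-pixel texture, scale equals size; the position is the top-left corner
transform.position = new Vector3(min.x, max.y, transform.position.z);
transform.localScale = new Vector3(max.x - min.x, max.y - min.y, 1.0f);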

Points texture packing

The size of the 2D texture used for sending the point array to the GPU can be halved. Since we are working with 2D points, every texel (a Color, in C#) can store 2 points.
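In C#, the packing could look like this (a sketch; even indices go to RG and odd indices to BA, matching how the multi-line shader above unpacks them):

// Two 2D points per texel: point 2k in (r, g), point 2k+1 in (b, a)
Color[] texels = new Color[(points.Count + 1) / 2];

for (int i = 0; i < points.Count; ++i)
{
    if (i % 2 == 0)
    {
        texels[i / 2].r = points[i].x;
        texels[i / 2].g = points[i].y;
    }
    else
    {
        texels[i / 2].b = points[i].x;
        texels[i / 2].a = points[i].y;
    }
}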

GPU-side point transformation

Instead of transforming the points of the line in the C# script, it is better to postpone that calculation to the GPU. Points can be passed in world space and then, in the shader, multiplied by the view-projection matrix and by the screen size to obtain their screen position. The origin parameter (vOrigin) can be removed and calculated in the shader too, as the multi-line function above already does.
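With that change, the per-frame C# update reduces to uploading world-space data (a sketch with illustrative property names):

// No WorldToScreenPoint and no _Origin parameter anymore: the shader derives
// screen positions from UNITY_MATRIX_VP and _ScreenParams by itself
m_Renderer.material.SetTexture("_PackedPoints", m_pointsTexture);
m_Renderer.material.SetFloat("_PointsCount", points.Count);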



Some people asked me to share the code, so here it is:
https://github.com/QThund/LineRenderer2D

Please let me know if it was useful for you.

And please, if you would be so kind, RT this:
https://twitter.com/JailbrokenGame/status/1338552609259581448


Might want to post it here too.


Added some code fixes and unlit shaders.

New commit:
Fixed: the multi-line was not working properly with OpenGL due to a wrong texture sampler configuration.
Now you can use standard shaders instead of Shadergraph.
Standard shaders allow making the line unlit by enabling a checkbox in the material.
Files moved to 2 folders: Shadergraph and Shaders.
The .hlsl files are shared between both versions.
The test scene has been updated: 2 new lines have been added which use the new standard shaders, and a 2D point light has been added to demonstrate how the light affects the lines, unless they are unlit.

New commit:
Fixed: The inherited scale was not properly calculated.

What an awesome solution!!! I imported your project but I get the following errors when opening the SG_BresenhamMultiLine shadergraph:

Shader error in ‘hidden/preview/Branch_31483F37’: ‘ComputeScreenPos’: no matching 1 parameter function at Assets/Plugins/LineRenderer2D/Assets/LineRenderer2D/Shaders/S_BresenhamMultiLine.hlsl(18) (on d3d11)

Shader error in ‘hidden/preview/CustomFunction_A7422E2F’: ‘ComputeScreenPos’: no matching 1 parameter function at Assets/Plugins/LineRenderer2D/Assets/LineRenderer2D/Shaders/S_BresenhamMultiLine.hlsl(18) (on d3d11)

Any idea what’s causing this?

Yes, in the S_BresenhamLine.hlsl shader you have to add an additional parameter to the calls to ComputeScreenPos, a -1.0f, like this:

float2 vOrigin = ComputeScreenPos(projectionSpaceOrigin, -1.0f).xy * _ScreenParams.xy;

The reason I haven’t fixed that is that the HLSL version of the line renderer uses a different ComputeScreenPos function, which only receives 1 parameter. So I had to decide which of the two would break, in order to share the same shader file between both versions.

Hi, first of all, amazing work! This is really cool. I wanted to make some sort of “tentacle” with this at runtime, but I can’t make it work. I’ve tried different approaches. First, I think I need to assign the positions and then move those positions. This is my script, but it’s still not working (I don’t know why):

[RequireComponent(typeof(MultiLineRenderer2D))]
public class WiggleLineRenderer2D : MonoBehaviour
{
    [SerializeField] private Transform[] positions;

    [SerializeField] private float wiggleSpeed;
    [SerializeField] private float wiggleMagnitud;
    [SerializeField] private int wiggleOffset = 3;

    private MultiLineRenderer2D multiLineRenderer;
    private List<Vector2> lineRendererPoints = new List<Vector2>();

    private void Awake()
    {
        multiLineRenderer = GetComponent<MultiLineRenderer2D>();

        foreach (var pos in positions)
        {
            lineRendererPoints.Add(pos.position);
        }

        multiLineRenderer.Points = lineRendererPoints;

        multiLineRenderer.CurrentCamera = Camera.main;
    }

    private void LateUpdate()
    {
        var newPos = new Vector2();

        for (var i = 0; i < lineRendererPoints.Count; i++)
        {
            var rendererPoint = lineRendererPoints[i];
            newPos.x = rendererPoint.x;
            newPos.y = i % wiggleOffset * Mathf.Sin(Time.time * wiggleSpeed) * wiggleMagnitud;

            lineRendererPoints[i] = newPos;
        }

        multiLineRenderer.Points = lineRendererPoints;
        //multiLineRenderer.ApplyLayoutChanges();    don't know the difference but it works also without this line
        multiLineRenderer.ApplyPointPositionChanges();
    }
}

I can see the changes of the points on the editor but still can’t see them rendering properly (even if the Gizmos are there moving)

Is there anything I’m doing wrong? Thanks for the help in advance.

PS: I’ve tried both prefabs for the multi-line, SG and S, and it’s the same outcome.
PS 2: Also, this script is attached to the prefab directly, the assigned Transforms are just children of that prefab, and “Positions Are Local Space” is checked.

[EDIT] [SOLVED]

Ok, the script works just fine! I had some Sorting Layer issues
 So if anyone wants to use this script, feel free! Both SG and S work like a charm!


Hi @ThundThund !

Thank you so so much for writing this article and providing the example code! I am learning so much.

I was wondering if you could explain the implications of line 238 of MultiLineRenderer2D.cs. In particular:

m_packedPointsTexture.Apply();

It is my understanding that texture.Apply() is significantly expensive, much more so than SetPixels().

Is it cheaper to use when the texture size is smaller? The reason I ask is that I am hoping to draw lines like grass in a 2D tilemap. This would require ~254 tiles at the absolute most and, at my resolution, that could be over 960x540 pixels, with a lot of separate texture.Apply() calls needed for the sake of layering, I think.

I may need to look into compute shaders

I’ll give it a shot and just see what happens


Hi Rocky! I’m glad you found it useful. The Apply method has to be called in order to send all the changes you made to the texture (you may call SetPixels multiple times before) from main memory to the VRAM of the graphics card. I mean, it’s not a choice between Apply and SetPixels; both are used together, as sketched below.
If your intention is to add grass to your scenario, I would recommend faking it by using sprites of hand-drawn grass and moving them with vertex shaders. Drawing one line per grass blade is probably going to be too expensive.
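For reference, the intended pattern looks like this (a generic sketch, not a quote from MultiLineRenderer2D.cs):

// Batch as many pixel writes as you need...
texture.SetPixels(colors);
texture.SetPixels(x, y, blockWidth, blockHeight, moreColors);

// ...then upload everything to the GPU once
texture.Apply();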

My apologies, I didn’t mean to compare SetPixels to Apply. That statement was more of a tangent about the performance cost of calling Apply(), as I am pretty sure we can make SetPixels really fast by setting pixels in an unsafe {} context with pointers, or with the Unity-provided NativeArray version.

Anyway, I think you are right that it is going to be too expensive to draw each blade of grass one line at a time by modifying textures. I will investigate vertex shaders or compute shaders. Unfortunately this is my first time learning shaders, so it may take me a while to figure it out, lol. This is really cool so I think it’s worth it!


@ThundThund, I think I figured out the mathematical solution to the distance formula!
In your code, just use:

outDistanceCorrection = max(normalizedAbsNextToPrevious.x, normalizedAbsNextToPrevious.y);

Great! I will try it later.

Hmm
 it seems that for thicker lines the correction needs to be added rather than multiplied. I posted an example shadertoy code in this stackoverflow answer.

I added your modification in Unity and it does not produce a pixel-perfect line when thickness > 1.

Quite possibly. I haven’t thoroughly tested it for thickness > 1
 Out of curiosity, can you make a screenshot of the not-pixel-perfect results you get?

Or do you mean thickness in terms of blocks that you describe in your first post? (“For example, if the line has a thickness of 4 pixels, the screen is divided by an imaginary grid whose cells occupy 4x4 pixels”)

fThickness, which defines the “imaginary grid”.

So
 something like this?
[Screenshot: four lines rendered with _Thickness = 1 and _BlockSize = 1, 2, 4 and 8]

I made a test shader in Unity (it’s attached to this post), the results above were achieved with _Thickness = 1 and _BlockSize = 1, 2, 4, and 8, respectively. The lineSegment(p, a, b, thickness) function is unchanged, I just divided the first three arguments by _BlockSize and also floor()'ed the first argument.

7663765–956629–PixelPerfectLineShader.shader (2.02 KB)

1 Like

Thanks a lot for this!

I found an issue with the MultiLineRenderer, however, that drove me insane: depending on the total point count, some lines (usually the last one or two) would sometimes not get rendered.
I managed to fix it by replacing the way the packed points are sampled in S_BresenhamMultiLine.hlsl.
Instead of:
float4 packedPoints = tPackedPoints.Sample(ssArraySampler, float2(float(t / 2) / fPackedPointsCount, 0.0f));
float4 packedPoints2 = tPackedPoints.Sample(ssArraySampler, float2(float(t / 2 + 1) / fPackedPointsCount, 0.0f));

I used:

int xCoord = floor(t/2.0f);
float4 packedPoints = tPackedPoints.Load(int3(xCoord, 0, 0));
float4 packedPoints2 = tPackedPoints.Load(int3(xCoord+1, 0, 0));