HLSL convert float to int losing precision

I have the following frag function in my fragment shader (the rest is the Sprite-Default shader that ships with Unity):

float4 SpriteFrag(v2f IN) : SV_Target
{
    float4 c = tex2D (_MainTex, IN.texcoord);
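    // scale each channel to 0-255 and truncate to int (re-normalized below)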
    int r = int(c.r * 255);
    int g = int(c.g * 255);
    int b = int(c.b * 255);

    c.r = r / 255.0f;
    c.g = g / 255.0f;
    c.b = b / 255.0f;
               
    c.rgb *= c.a;
                
    return c;
}

The following color value gets sampled: RGBA(68, 48, 36, 255). I would like to cast the R, G, and B values to ints to perform bit manipulation on them, but the cast values aren't accurate. They end up as:

r = 66
g = 46
b = 34

The values are not consistently off by 2 either (otherwise I would just increment them and work with that). What am I doing wrong?

**Note:** if I return c unmodified, the color values are correct.

Update:

I feel like I’m going insane here. Even returning a hardcoded red value comes out incorrect:

fixed4 SpriteFrag(v2f IN) : SV_Target
{
    float4 c = tex2D (_MainTex, IN.texcoord);

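    // hard-code red: expect R = 48/255 ≈ 0.18824 in the output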
    c.r = float(48.0/255.0);
    c.g = 0;
    c.b = 0;

    c.rgb *= c.a;
            
    return c;
}

This should set the R channel to 48, i.e. 0.1882353, but instead it ends up as 120, i.e. 0.4705883. What on earth is going on here?

> The following color value gets sampled

Where and how do you get that value? Keep in mind that, depending on the actual shader, you may have to account for lighting or other post-processing effects. Apart from that, what bites most people is gamma correction, either when textures are imported or when the final output is gamma-corrected. See Unity's gamma workflow documentation for more information. If you have never heard of gamma / gamma correction, have a look at the Wikipedia article to get a better understanding of what it is and why it's applied.
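To see that the numbers work out: assuming your project's Color Space is set to Linear, Unity converts sRGB textures to linear values when sampling and converts the framebuffer back to sRGB for display. Your hardcoded 48/255 ≈ 0.18824 is written out as a linear value, and the sRGB encoding 1.055 × 0.18824^(1/2.4) − 0.055 ≈ 0.4706 is exactly the 120 you measured. The same round trip explains the off-by-2 channels: 68/255 decodes to ≈ 0.0578 linear, × 255 truncates to 14, and 14/255 re-encodes back to ≈ 66/255.

If the goal is to bit-manipulate the original 0-255 texture values, one option is to hop back into gamma space before quantizing. A minimal sketch, assuming a Linear color-space project and the LinearToGammaSpace / GammaToLinearSpace helpers from UnityCG.cginc; the bit-manipulation step is a placeholder:

float4 SpriteFrag(v2f IN) : SV_Target
{
    float4 c = tex2D(_MainTex, IN.texcoord);

    // tex2D returns linear values here; convert back to the sRGB encoding
    // so the 0-255 integers match the values authored into the texture
    float3 srgb = LinearToGammaSpace(c.rgb);

    // round instead of truncating to avoid off-by-one errors
    int r = (int)round(srgb.r * 255.0);
    int g = (int)round(srgb.g * 255.0);
    int b = (int)round(srgb.b * 255.0);

    // ... bit manipulation on r, g, b goes here ...

    // re-normalize to 0-1 and convert back to linear space for output
    c.rgb = GammaToLinearSpace(float3(r, g, b) / 255.0);

    c.rgb *= c.a;
    return c;
}

Alternatively, if the texture stores data rather than colors, unchecking sRGB (Color Texture) in its import settings makes tex2D return the raw stored values, with no linear conversion to undo.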