Compute Shader not working on older GPU with DX11

Hi, I created a compute shader in Unity 2019.4 (DX11) and it works well on an RTX 2070S. But when I run it, in the same version of Unity or even in a build, on my older laptop with a GT 650M (2 GB), I get a black screen. I started deleting parts of the code and found out it doesn't read the input properly. When I use this code, it works and outputs red:

    #pragma kernel SomeKernel

    RWTexture2D<float4> Input;  // bound but not read in this version
    RWTexture2D<float4> Result;

    [numthreads(8, 8, 1)]
    void SomeKernel(uint3 id : SV_DispatchThreadID)
    {
        Result[id.xy] = float4(1, 0, 0, 1); // working
    }

But when I use this, it outputs black on the GT 650M, while on the RTX 2070S it works fine:

    #pragma kernel SomeKernel

    RWTexture2D<float4> Input;
    RWTexture2D<float4> Result;

    [numthreads(8, 8, 1)]
    void SomeKernel(uint3 id : SV_DispatchThreadID)
    {
        Result[id.xy] = Input[id.xy]; // not working on GT 650M, but on RTX 2070S yes
    }

It's weird, because if there were a problem with the syntax, even the red result wouldn't work. Could it maybe have something to do with numthreads? I tried [numthreads(8, 8, 1)] and [numthreads(32, 32, 1)] with the same result: the new card works, the old one doesn't.
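
One way to take numthreads out of the equation is an explicit bounds guard; this is just a sketch of the same kernel, using the standard GetDimensions call to query the bound texture's size:

    #pragma kernel SomeKernel

    RWTexture2D<float4> Input;
    RWTexture2D<float4> Result;

    [numthreads(8, 8, 1)]
    void SomeKernel(uint3 id : SV_DispatchThreadID)
    {
        // Skip threads dispatched past the texture edge when the texture
        // size isn't a multiple of the numthreads values.
        uint width, height;
        Result.GetDimensions(width, height);
        if (id.x >= width || id.y >= height)
            return;

        Result[id.xy] = Input[id.xy];
    }

(On DX11, out-of-range UAV reads return zero and out-of-range writes are dropped anyway, so numthreads alone shouldn't cause a per-GPU difference; the guard just makes that explicit.)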

The old card has hardware support for DX11.1 (with software DX12) and everything needed: NVIDIA GeForce GT 650M 2 GB GDDR5 Specs | TechPowerUp GPU Database


Any idea where the problem or difference could be? Thanks.

Same issue here on an 870M laptop. My code works fine on a 1080 and a 2080 Ti.

Did you ever find a workaround for this?

The short version is that the GT 650M and GTX 870M are the same GPU architecture, Nvidia's Kepler from 2012. The GT 650M is a GK107, and the GTX 870M is a GK104, aka a rebadged GTX 680MX, which was also sold as the GTX 780M. In Nvidia's chip numbering schemes, higher numbers are generally slower chips. Even the later GT 920M is a GK208, which was basically just the desktop GT 630 still being sold 4 years later.

Kepler isn’t actually fully DX11 compliant, so some stuff is randomly broken. There’s not really anything you can do to fix that.

Does it work if you use a Texture2D instead of an RWTexture2D for the Input?
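
Something like this, a minimal sketch of the same kernel with the input bound as a plain read-only texture (indexing a Texture2D in HLSL is a Load of mip 0, so no SamplerState is needed):

    #pragma kernel SomeKernel

    Texture2D<float4> Input;    // read-only SRV instead of a UAV
    RWTexture2D<float4> Result;

    [numthreads(8, 8, 1)]
    void SomeKernel(uint3 id : SV_DispatchThreadID)
    {
        // operator[] on a Texture2D loads mip 0 directly, no sampler required
        Result[id.xy] = Input[id.xy];
    }

The reason to try it: base DX11 (feature level 11_0) only guarantees typed UAV loads for single-component 32-bit formats (R32_FLOAT/R32_UINT/R32_SINT). Reading a float4 from an RWTexture2D relies on an optional typed UAV load capability that newer GPUs like Turing expose and, as far as I know, Kepler does not, which would explain the reads silently coming back as black on the GT 650M.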