Compute shader issues on AMD hardware?

I’m working on a compute-shader-based system that does quite a bit of work per pixel (ray-casts into an octree). On my main Nvidia system everything works fine, but on two different (modern) AMD systems I’m getting weird or broken results.

I’ve seen one case where the entire compute buffer just comes out as a single color, apparently at essentially zero cost, as though the shader were short-circuiting somehow. In another case I’m looking at right now, I’ve simplified the shader down and it’s partially working (the basic occupancy data is clearly being read correctly from my data buffer), but the lighting is broken (as if other data read from the same buffer is failing?) and there’s some other background corruption.

In general terms, are there any known inconsistencies in compute shader behaviour between Nvidia and AMD? Pretty much all my buffers are created as ‘Default’ and in shader code are declared as Buffer or RWBuffer depending on whether I need write access. Are there different limits on the number of bound buffers between the two vendors, or anything else like that to be aware of? Or maximum buffer sizes?
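
For reference, the setup looks roughly like this (a simplified sketch; the names and element types are placeholders, not my exact code):

```hlsl
// Read-only octree/occupancy data, created with Default usage and bound as an SRV.
Buffer<uint>     OctreeNodes  : register(t0);
// Per-pixel output written by the compute shader, bound as a UAV.
RWBuffer<float4> OutputColors : register(u0);

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // ... ray-cast into the octree and write one color per pixel ...
}
```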

Thanks!

I imagine likely issues might be things like (relative to Nvidia):

  • Different error/NaN handling?
  • Different buffer size-limit/alignment requirements?
  • Different resource limits, e.g. max bound buffers?
  • Different overflow handling?
  • Different out-of-bounds memory access handling? (see the guard sketch after this list)
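
To help rule a couple of those out, this is roughly the kind of defensive guard I’m planning to try first (a hypothetical sketch; NodeCount, PixelCount, the row pitch and the debug color are placeholders, and the buffer declarations are the same as in the sketch above):

```hlsl
Buffer<uint>     OctreeNodes  : register(t0);
RWBuffer<float4> OutputColors : register(u0);

cbuffer DebugParams : register(b0)
{
    uint NodeCount;   // actual element count of OctreeNodes
    uint PixelCount;  // actual element count of OutputColors
};

// Clamp every index rather than relying on whatever a given driver
// does for out-of-bounds typed-buffer reads.
uint LoadNodeSafe(uint index)
{
    return OctreeNodes[min(index, NodeCount - 1)];
}

// Flush NaN/Inf results to an obvious debug color so they can't
// silently poison the lighting downstream.
float4 SanitizeColor(float4 c)
{
    return (any(isnan(c)) || any(isinf(c))) ? float4(1, 0, 1, 1) : c;
}

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    uint pixel = id.y * 1024 + id.x;    // placeholder row pitch
    if (pixel >= PixelCount)
        return;                         // guard against partial thread groups

    uint root = LoadNodeSafe(0);        // example guarded read
    float4 color = float4(0, 0, 0, 1);  // placeholder for the real ray-cast/shade result
    OutputColors[pixel] = SanitizeColor(color);
}
```

No idea yet whether any of this will change anything on the AMD machines, but at least it takes the more undefined-ish behaviours off the table.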

…But I’d love to know if there are clearly understood or documented specifics before I dive into further debugging (CS debugging can be challenging at the best of times) :eyes: