Something like this is already done for VR rendering. Some PS4 PSVR games do this, and the SteamVR API has it as a built-in feature: large parts of the screen are occluded, then reconstructed as a post process.
The “trick” is to have the first thing the camera renders be an alpha-tested, all-black mask at the camera’s near plane. Render black anywhere you don’t want to render anything later.
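A rough sketch of that mask pass for Unity's built-in pipeline is below (HDRP would instead do this through a Custom Pass or Shader Graph, and uses its own shader includes). The shader name, the `_MaskTex` property, the `Queue` offset, and the reversed-Z near-plane assumption are all illustrative, not a definitive implementation:

```shaderlab
// Hypothetical near-plane occlusion mask. Drawn before all other opaque
// geometry with ZWrite on, so later pixels fail the depth test and are skipped.
Shader "Hidden/NearPlaneMask"
{
    Properties
    {
        // Alpha < 0.5 = masked (render black), alpha >= 0.5 = render normally
        _MaskTex ("Mask (alpha)", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue" = "Geometry-100" "RenderType" = "Opaque" }
        Pass
        {
            ZWrite On
            ZTest Always

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MaskTex;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv  : TEXCOORD0;
            };

            v2f vert (float4 vertex : POSITION, float2 uv : TEXCOORD0)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(vertex);
                // Snap depth to the near plane so everything behind the mask
                // is occluded. z = w assumes a reversed-Z platform (D3D/Metal);
                // OpenGL clip space would need z = -w instead.
                o.pos.z = o.pos.w;
                o.uv = uv;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Alpha test: discard where the mask says "render normally"
                clip(0.5 - tex2D(_MaskTex, i.uv).a);
                return fixed4(0, 0, 0, 1); // opaque black
            }
            ENDCG
        }
    }
}
```

You would put this on a quad parented to the camera just in front of the near plane; the early depth write is what gives the savings, since GPUs reject occluded fragments before shading them.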
Whether or not this actually saves any performance depends on how you render your dots. It also won’t directly affect any compute or post-process shaders, since they just see black pixels in the rendered image they’re handed; unless you rewrite those shaders yourself, they won’t know to skip those pixels. Ideally you’d apply any machine learning to the image before post processing gets ahold of it anyway. That’s how VR handles it, at least: the reconstruction filter is applied before the image is handed off to post processing.
Thanks!
I have tested this method in HDRP with the default template, and it actually works!
Standard rendering gives about 94 FPS.
A camera partially covered by a white canvas gives 122 FPS.
Amazing that there is a way to achieve this without creating a new render pipeline!