So in a full-screen image effect I need to compute for each pixel its normalized view-direction vector.
Somehow I need to pass some matrix of the original camera to the shader. Do I need camera.projectionMatrix or its .inverse?
Then we have screen-space xy UV coordinates going from (0,0) to (1,1). Somehow multiplying these by that matrix (and dividing by w?) should give the viewDir, right?
Problem is, it doesn't: when I shade the viewDir directly, every pixel stays black (0 or less).
I used to have a snippet for this, but for the life of me I can't find it anymore.
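For reference, this is roughly what I have at the moment, with my guesses called out: _InvProjection is just my own property name, set from C# via material.SetMatrix("_InvProjection", cam.projectionMatrix.inverse), and I have no idea whether that's even the right matrix to send:

```
Shader "Hidden/ViewDirAttempt"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Cull Off ZWrite Off ZTest Always
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            // my guess: the inverse of camera.projectionMatrix, set from C#
            float4x4 _InvProjection;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv  : TEXCOORD0;
            };

            v2f vert (appdata_img v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv  = v.texcoord;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // [0,1] screen uv -> [-1,1] NDC xy; z = 1 should pick a point on the far plane
                float4 clip = float4(i.uv * 2.0 - 1.0, 1.0, 1.0);
                // un-project: multiply by the inverse projection, then divide by w
                float4 view = mul(_InvProjection, clip);
                float3 viewDir = normalize(view.xyz / view.w);
                // shading it directly like this is where everything comes out black;
                // negative components (view space looks down -Z) just clamp to 0 on screen
                return fixed4(viewDir, 1.0);
            }
            ENDCG
        }
    }
}
```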
Thanks, I've seen that article before, but it focuses mostly on "world-space position via depth-texture" and kind of expands into a multi-part rocket-science series on depth and world position, while this simple mind is still struggling with the (in theory) extremely simple viewDir topic.
While that would work for my simpler "viewDir" purpose, it's a bit overkill: I shouldn't need the pixel's world position or depth, and I definitely want to avoid Unity's extra cameraDepthTexture geometry pass. I know there's some simple, fast heuristic that turns a screen-space uv in [0…1] into a clip-space xyz [-1…1] direction vector using one of the camera's matrices… I had it ages ago, but nowadays I can't seem to work it out from scratch again.
I mean, for x and y it's super easy to turn 0…1 into -1…1. To get it for xyz I should do the matrix multiplication and divide the result by its w component.
IN THEORY. In practice I'm kinda lost.
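Written out, this is what I think it should be, as far as I remember it, assuming the built-in unity_CameraInvProjection is the matrix I'm after (so treat it as a sketch, not something I've verified):

```
// sketch: screen uv [0,1] -> view-space direction, built-in pipeline,
// assuming unity_CameraInvProjection (declared via UnityCG.cginc)
// really is the GL-style inverse projection I need here
#include "UnityCG.cginc"

float3 ViewDirFromUV(float2 uv)
{
    // the easy part: [0,1] -> [-1,1]; z = 1 means "a point on the far plane"
    float4 clip = float4(uv * 2.0 - 1.0, 1.0, 1.0);
    // the part I keep forgetting: inverse projection, then divide by w
    float4 view = mul(unity_CameraInvProjection, clip);
    return normalize(view.xyz / view.w);   // view-space direction, z will be negative
}
```

Does that look right, or am I missing a platform y-flip or feeding in the wrong z?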
