The first way only works in image-effect/screen-space shaders, but is more accurate. Basically, the UV coordinate of a fragment on the screen-space quad is equal to its normalized viewport coordinate, so that value can be trivially scaled to get the screen-space coordinate (although UNITY_UV_STARTS_AT_TOP needs to be accounted for). In that code sample, I think getting the screen position in pixel coordinates requires the last line to be “o.screenPos = uv * _ScreenParams.xy” though.
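A minimal sketch of that first approach, assuming a Unity built-in-pipeline image-effect pass (the struct and field names here are illustrative, not from the original sample; whether the V flip is needed also depends on the render target, so treat it as a starting point):

```
// Sketch: screen position from UV in an image-effect shader (Unity/Cg).
struct v2f
{
    float4 pos       : SV_POSITION;
    float2 screenPos : TEXCOORD0; // fragment position in pixels
};

v2f vert (appdata_img v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    float2 uv = v.texcoord;
    // On D3D-like platforms the viewport origin is at the top,
    // so flip V to keep a consistent bottom-left origin.
    #if UNITY_UV_STARTS_AT_TOP
        uv.y = 1.0 - uv.y;
    #endif
    // For a full-screen quad the UV *is* the normalized viewport
    // coordinate; scaling by the resolution gives pixel coordinates.
    o.screenPos = uv * _ScreenParams.xy;
    return o;
}
```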
The second way works in any kind of shader, but will not be quite as accurate, and is potentially more expensive to compute. It takes a world-space position, multiplies it by the view-projection matrix (or an object-space position by the full MVP matrix) to get it into clip space, performs the perspective divide to reach normalized viewport coordinates, and then scales those by the screen resolution to get pixel coordinates.
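For reference, the second approach can be sketched like this for an ordinary object shader in the built-in pipeline. ComputeScreenPos papers over the platform differences, and the perspective divide is deferred to the fragment stage; again, the struct names are illustrative:

```
// Sketch: screen position in a normal (object) shader (Unity/Cg).
struct v2f
{
    float4 pos       : SV_POSITION;
    float4 screenPos : TEXCOORD0; // clip-space position, pre-divide
};

v2f vert (appdata_base v)
{
    v2f o;
    // Object -> clip space (equivalent to multiplying by the MVP matrix).
    o.pos = UnityObjectToClipPos(v.vertex);
    // Packs the clip-space position so the perspective divide
    // can be done per fragment for correct interpolation.
    o.screenPos = ComputeScreenPos(o.pos);
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // Divide by w -> normalized viewport coordinate in [0, 1].
    float2 viewportPos = i.screenPos.xy / i.screenPos.w;
    // Scale by the resolution to get pixel coordinates.
    float2 pixelPos = viewportPos * _ScreenParams.xy;
    return fixed4(frac(pixelPos / 100.0), 0, 1); // visualize as stripes
}
```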
tl;dr use the first one when making an image effect, and the second when making a normal (object) shader.