Very out of my depth with graphical stuff (really need to learn more), and the main issue is I'm not sure what direction to look in that won't turn out to be a dead end, or be too advanced for me.
But I'm currently toying with the concept of lighting that combines Terraria-style tilemap lighting with sprite lights. Thought I was onto a winner with sprite masks… but it seems they work on a stencil basis, so they don't support gradients. So currently a light that is both a sprite and a mask looks like this on top of the tilemap shading:
Which could maybe pass, but I would like the mask to fade out at the edge of the light.
Currently the tilemap shading is just a generated texture/sprite that matches the size of the proc-generated world. I've only gone so far as making non-air tiles black and air tiles transparent, as I don't want to spend any real time on such a system if it's never going to work.
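For reference, the shading texture described above is simple to generate: one pixel per tile, opaque black over solid tiles and fully transparent over air. Here's a minimal sketch of that logic in plain Python (in Unity this would fill a `Texture2D` instead; the `world` grid of booleans is a hypothetical stand-in for the proc-generated world):

```python
def build_shadow_pixels(world):
    """Return RGBA tuples, row by row, matching the world's dimensions:
    opaque black for solid tiles, fully transparent for air tiles."""
    pixels = []
    for row in world:
        pixels.append([
            (0, 0, 0, 255) if solid else (0, 0, 0, 0)
            for solid in row
        ])
    return pixels

world = [
    [True, True,  True],
    [True, False, True],  # a pocket of air in the middle
]
shadow = build_shadow_pixels(world)
print(shadow[1][1])  # the air tile: (0, 0, 0, 0)
```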
So the question is: what direction should I be looking into to subtract or cut out one or more sprites from another sprite? Is this approach going to work, or should I be trying something else entirely?
Worth mentioning I have very little Shader Graph experience and zero shader coding experience. I understand I'll probably have to learn one or the other to accomplish this.
I also have a lot of trouble with this problem. One solution that I found was to make a render texture from a camera that culls everything except the light sprites: it makes a grayscale image of all the lights. Then feed that render texture to the terrain's shader and use the color values as alpha.
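What that extra camera effectively produces can be sketched per-pixel. This is a hypothetical Python model, not the Unity setup itself: each light is an assumed `(x, y, radius)` tuple with a linear falloff, and overlapping lights add up and clamp at full brightness, as additive sprite rendering would:

```python
def composite_lights(width, height, lights):
    """Build a grayscale image (0.0–1.0) of all light sprites, roughly
    what a camera that culls everything else would render. Each light
    fades linearly from 1.0 at its centre to 0.0 at its radius."""
    img = [[0.0] * width for _ in range(height)]
    for (lx, ly, radius) in lights:
        for y in range(height):
            for x in range(width):
                dist = ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5
                brightness = max(0.0, 1.0 - dist / radius)
                img[y][x] = min(1.0, img[y][x] + brightness)  # additive, clamped
    return img

img = composite_lights(5, 5, [(2, 2, 3)])
print(img[2][2])  # 1.0 at the light's centre, fading toward the edges
```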
This method probably doesn't scale well and could become expensive depending on your needs.
That idea makes sense. Any direction you can provide as to how the shader would work? Did you code it or use Shader Graph?
I used Shader Graph. Here is the setup:
The result, changing the scale and position of the cone:
Okay, almost onto a winner. I went with subtracting the alpha channel of the render texture from the target texture.
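The per-pixel math of that subtraction is worth spelling out, since it's what gives the soft falloff the stencil-based sprite mask couldn't. A minimal sketch in plain Python (the real version lives in a Shader Graph Subtract node):

```python
def subtract_light(shadow_alpha, light_value):
    """Carve the light out of the shadow: subtract the render texture's
    brightness from the shadow texture's alpha, clamped at zero so
    overlapping lights can't push the alpha negative."""
    return max(0.0, shadow_alpha - light_value)

# A fully shadowed pixel, with the light fading from centre to edge:
print(subtract_light(1.0, 1.0))   # 0.0  -> fully cut out at the light's centre
print(subtract_light(1.0, 0.25))  # 0.75 -> soft falloff near the light's edge
print(subtract_light(1.0, 0.0))   # 1.0  -> shadow untouched outside the light
```

Because the light value is continuous rather than a hard stencil bit, the mask fades out at the edge of the light exactly as wanted.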
The issue now is correctly positioning the screen-space render texture onto the object. Screen space doesn't seem to account for where the object is, so I get this weird effect:
I have a white sprite in front of a red one to test the alpha subtraction, which works. But I need to translate the render texture correctly onto the object. Any ideas there?
I think I'm at the end of my expertise. I don't understand shaders very well, but I think there might be an issue: my render texture is a full-screen image (the camera that targets the render texture has the same viewport as the full-screen game camera), so subtracting its alpha from a main texture that is not full-screen as well will probably produce wonky results.
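The underlying mapping both cameras must agree on can be sketched as follows. This is a hypothetical Python model of what sampling a full-screen render texture in the sprite's shader has to do for an orthographic camera: convert a world-space point into 0–1 screen UVs. If the two cameras disagree on position or orthographic size, these UVs (and therefore the lights) land in the wrong place:

```python
def world_to_screen_uv(point, cam_pos, ortho_size, aspect):
    """Map a world-space point to 0..1 screen UVs for an orthographic
    camera whose half-height is `ortho_size` and half-width is
    `ortho_size * aspect`, centred on `cam_pos`."""
    half_h = ortho_size
    half_w = ortho_size * aspect
    u = (point[0] - (cam_pos[0] - half_w)) / (2 * half_w)
    v = (point[1] - (cam_pos[1] - half_h)) / (2 * half_h)
    return (u, v)

# Camera at the origin, half-height 5, 16:9 aspect:
print(world_to_screen_uv((0, 0), (0, 0), 5, 16 / 9))  # (0.5, 0.5) -> screen centre
```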
(Also, in my solution I was assuming the render texture had a black skybox/background so it had no alpha, though of course there might be other ways to do it that I don't know about.)
Sorry, I have no further knowledge to share; this is as much as I know.
Wait, it works! For some reason the Pixel Perfect Camera component was giving a different orthographic size to the camera rendering the lighting layer.
When both cameras match, you get the right result, as expected:
Still need to play around with it, I think, but in the end I think this will work. Thanks for the help here.