Is there a way to read a lighting value?

Is there a way to read the lighting value from a given location?

I am experimenting with using 3D geometry with 2D sprites. So far it looks pretty good, but the 3D geometry receives light that the sprites don’t, so the sprites look super bright in dark areas.
Sprites have a color value, and I could easily write a script to change the sprite’s color to match a given value.
But how could I retrieve that value?
Is there a way I can read the lighting value of a given point? So for example, I could raycast to the point the player is standing on, and sample the lighting in that space?

Or if there are any other possible methods to achieve the effect I mentioned, I’d be interested in hearing them.

Interesting … I think this is what Light Probes are for, but for sure those would NOT affect the DefaultSpriteMaterial, as that is an unlit material.

You would need to set up light probes, bake them, and also use a lit shader (such as Default Cutout) for your sprites, which of course will have implications for draw ordering, since the sorting heuristic for Sprites is different from traditional shader Z-depth sorting.

Alternately, I think you could raycast against the geometry, get the barycentric coordinate from the RaycastHit object, use it to look up the lightmap texture coordinate (which I think is in the .uv2 channel), then sample the lightmap texture, which of course would need to be marked Read/Write…
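For what it's worth, here's a rough, untested sketch of that approach. One shortcut worth knowing: Unity exposes the interpolated lightmap UV directly on the hit as RaycastHit.lightmapCoord, so the manual barycentric lookup may not even be necessary. This assumes the geometry has a MeshCollider and the baked lightmap texture is marked Read/Write:

```csharp
using UnityEngine;

public class LightmapSampler : MonoBehaviour
{
    // Rough sketch: raycast down and sample the baked lightmap under the
    // given position. Requires a MeshCollider on the geometry (for
    // lightmapCoord) and a Read/Write-enabled lightmap texture, or
    // GetPixelBilinear will throw.
    public Color SampleLightmapBelow(Vector3 position)
    {
        if (Physics.Raycast(position, Vector3.down, out RaycastHit hit))
        {
            Renderer rend = hit.collider.GetComponent<Renderer>();
            if (rend != null && rend.lightmapIndex >= 0 &&
                rend.lightmapIndex < LightmapSettings.lightmaps.Length)
            {
                // hit.lightmapCoord is already the interpolated lightmap UV,
                // so no manual barycentric math is needed here.
                Texture2D lightmap = LightmapSettings.lightmaps[rend.lightmapIndex].lightmapColor;
                return lightmap.GetPixelBilinear(hit.lightmapCoord.x, hit.lightmapCoord.y);
            }
        }
        return Color.white; // fallback: fully lit
    }
}
```

You could call this from the sprite's position each frame and multiply the result into the sprite's color.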

Or you could implement your own cheeseball light probes (invisible GameObjects in the scene that have a Color property) and take the two or three nearest ones and calculate the dimming to apply to your sprite color…
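A minimal sketch of what those cheeseball probes might look like — every name here is made up for illustration, and the inverse-distance weighting is just one plausible choice:

```csharp
using System.Linq;
using UnityEngine;

// Hypothetical "cheeseball probe": an invisible GameObject with a Color.
public class CheeseballProbe : MonoBehaviour
{
    public Color color = Color.white;
}

// Tints a SpriteRenderer by inverse-distance weighting the nearest probes.
public class CheeseballSpriteTint : MonoBehaviour
{
    public int probesToBlend = 3;
    SpriteRenderer sr;

    void Awake() => sr = GetComponent<SpriteRenderer>();

    void Update()
    {
        // Naive nearest-N search; cache the probe list in a real project.
        var nearest = FindObjectsOfType<CheeseballProbe>()
            .OrderBy(p => (p.transform.position - transform.position).sqrMagnitude)
            .Take(probesToBlend)
            .ToArray();
        if (nearest.Length == 0) return;

        Color sum = Color.black;
        float totalWeight = 0f;
        foreach (var p in nearest)
        {
            // Weight each probe by inverse distance (epsilon avoids division by zero).
            float w = 1f / (Vector3.Distance(p.transform.position, transform.position) + 0.001f);
            sum += p.color * w;
            totalWeight += w;
        }
        sr.color = sum / totalWeight;
    }
}
```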

Of those 3 options I figure something might work for you. Let us know, because I’ve never done any of the above, just thought about it. :slight_smile:

I thought this was the preferred approach to 2D lighting? (I don’t normally recommend Brackeys, but feature highlighting is where they truly shined)

Though forget about this, you obviously have a 3D scene and need 3D lighting/rendering anyway.
I’d personally go for Kurt’s cheeseball solution with some sugar on top, if that helps.

I might be out of my depth here, but would the “right” way to do this be to write your own custom sprite shader that incorporates 3D lighting data?

Well that’s why I was thinking of adjusting the “Color” value of the sprite.

Hmm, I just realized that I need to set up some tests using more sprite characters in my game to make sure I don’t have drawing order errors. I’ve just been using the player, making sure that draws right against the 3D geometry, and I haven’t actually tested whether there are problems with other sprite characters…

That second method you mentioned sounds like what I was thinking. Got any links that explain how to “get the barycentric coordinate from the RaycastHit object”?

Also, crap, I just remembered that I have a shadow that’s projecting onto the ground directly beneath the player. Hmmm…

@PraetorBlue
Probably, but that’s way over my head, and I’ve been trying for years to get a good grasp on shaders.
Truth be told, in my current form I’d be considered a shader guru for 99% of the population (whisper I’m not whisper).

Idk, perhaps the loads of function libraries and various shaders, surface shaders, vanilla shaders, HLSL with a dash of CG, multipass shaders, shadowcasting, tags, pragmas, and Shader Graph on top of everything is a serious mind bender. Thanks Unity et al.

That does sound like the best approach, but it is beyond my skill. Or more accurately, I think it would take me more time to research and learn how to do that than it would to figure out how to read lighting at a given point. I’ve never written a shader before, and this would be a pretty complicated one to build.

Physics.Raycast will give you a RaycastHit object with various properties. One of them is the barycentric coordinate of the hit point within the triangle your ray has hit.

https://docs.unity3d.com/ScriptReference/RaycastHit.html

https://en.wikipedia.org/wiki/Barycentric_coordinate_system#Barycentric_coordinates_on_triangles

It is very simple to get Cartesian coordinates from barycentric ones. Slightly harder to get to barycentric from Cartesian, but not by much. I mean you can already get the world coordinate computed from RaycastHit, but Kurt-Dekker meant using the barycentric coordinates as a direct way to look up UVs in the lightmap, because UVs are essentially barycentric coordinates anyway.
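To make that concrete, here's a hedged sketch of the lookup Kurt described: blend the hit triangle's .uv2 (lightmap) coordinates by the barycentric weights from the hit. This assumes a MeshCollider (barycentricCoordinate and triangleIndex need one), and note the resulting UV likely still needs the renderer's lightmapScaleOffset applied before sampling the lightmap atlas:

```csharp
using UnityEngine;

public static class LightmapUvLookup
{
    // Blend the hit triangle's uv2 (lightmap) coordinates by the
    // barycentric weights of the hit point. Requires a MeshCollider.
    public static Vector2 LightmapUvAt(RaycastHit hit)
    {
        Mesh mesh = ((MeshCollider)hit.collider).sharedMesh;
        Vector2[] uv2 = mesh.uv2;
        int[] tris = mesh.triangles;
        int i = hit.triangleIndex * 3;

        Vector3 b = hit.barycentricCoordinate; // three weights summing to 1
        return uv2[tris[i]] * b.x
             + uv2[tris[i + 1]] * b.y
             + uv2[tris[i + 2]] * b.z;
    }
}
```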

Haha sounds like we’re all in the same boat then :smile:

Aaand … it’s sinking … fast xD

But then I watch Art of code on YT and everything is so nice and simple, I’m like all pumped up, I will make the best shaders ever …

Unity:
pragma
asdlksdf
fewflweitois
werlerlk;ek
lrkwerjw {}

@Marscaleb the best way to think of barycentric coordinates is to think of a triangle as if one vertex were (0, 0), the next (0, 1), and the last (1, 0). Graphed on a typical (orthogonal) x,y system, that would look like a right triangle with all three vertices attached to the axes. But now detach it from Cartesian space in your mind, and warp it into any triangle you can think of, while keeping these coordinates intact.

This is how UVs work, and this is what barycentric coordinates are. Mathematically there are three coordinates, not just two, and they always add up to 1. That’s what homogeneity means. That (0, 0) vertex? It’s in fact (0, 0, 1). But it’s completely redundant to use all three (hence UV and not UVW).

In practical terms, your edge A-B becomes an oblique Y axis, and edge A-C becomes an oblique X axis, so you can find any point on the triangle and translate that back to Cartesian space simply by doing (don’t mind that it’s inverted in this case, but this is from real code)

p1 + uv.x * (p2 - p1) + uv.y * (p3 - p1)

This has loads of useful mathematical properties.
For example, the centroid is (1/3, 1/3, 1/3).

You can find all kinds of special points very efficiently this way: the circumcenter, orthocenter, incenter, excenters… but you can also find points that are strictly on the edges, or easily detect whether a point is inside the triangle (it is inside if all three coordinates are positive), and so on.
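A quick sketch of those properties in code (the names here are just for illustration):

```csharp
using UnityEngine;

public static class BarycentricDemo
{
    // Convert barycentric (u, v, w) back to Cartesian for triangle (p1, p2, p3).
    public static Vector3 ToCartesian(Vector3 uvw, Vector3 p1, Vector3 p2, Vector3 p3)
        => uvw.x * p1 + uvw.y * p2 + uvw.z * p3;

    // A point is inside the triangle iff all three coordinates are positive.
    public static bool IsInside(Vector3 uvw)
        => uvw.x > 0f && uvw.y > 0f && uvw.z > 0f;

    // The centroid is the point with equal weights.
    public static readonly Vector3 Centroid = new Vector3(1f / 3f, 1f / 3f, 1f / 3f);
}
```

ToCartesian(Centroid, p1, p2, p3) gives exactly the average of the three vertices, as you'd expect.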

Figured I’d take a whack at #1 since I hadn’t done it before. It works! See enclosed package for ultra-simple setup with sprite dude moving in and out of light probe scene in a 3D light-baked world.

6453914–723032–CheeseballSpriteLighting.unitypackage (343 KB)


Is it O(n^2) as one adds more dudes/lights? I think that’s the hardest part to do properly.

@Marscaleb
To illustrate simplicity, here’s a method that computes the nearest edge in UVW given a point and a triangle

// (IndexOfMin, IsInside, IsCloseToZero, Sum and VectorEx.Clamp01 are my own extension helpers.)
int GetNearestEdge(Vector3 uvw) {
  // Inside the triangle: the nearest edge is the one opposite
  // the smallest barycentric coordinate.
  if(IsInside(uvw)) return uvw.IndexOfMin();
  // Outside: clamp back into range, renormalize, then pick.
  uvw = VectorEx.Clamp01(uvw);
  return Naturalize(uvw).IndexOfMin();
}

// Renormalizes so the components sum to 1
// (degenerate input falls back to the centroid).
static public Vector3 Naturalize(Vector3 uvw)
  => uvw.IsCloseToZero()? centroid : uvw / uvw.Sum();

static public Vector3 centroid => Naturalize(Vector3.one);

Similar thing but returns the actual nearest point instead

Vector3 SnapToNearestEdge(Vector3 uvw) {
  // Inside: zero the smallest coordinate to project onto the nearest edge.
  if(IsInside(uvw)) uvw[uvw.IndexOfMin()] = 0f;
  else uvw = VectorEx.Clamp01(uvw);
  return Naturalize(uvw);
}

I don’t know internally how light probes work but if I had to guess, the answer would be “No.”

Observing the highlighting and selection of the light probes in the scene at runtime while the target renderer (the sprite in this case) is selected would lead me to believe they subdivide the volume with some type of spatial data structure (a tree or whatever) that allows lookup of the relevant light probes in O(1) time. Across the entire sprite population it would probably be O(n).

Right, sounds great. Then in my mind this seems a better solution than just doing naive raycasting.
I’ve always wondered about making 2D in 3D, I like the style. But obviously lighting is important to get it right.
Maybe I’ll try this approach some day. Would you say it had the potential to look like Octopath Traveler (without the effects of course)?

Wow, that’s a really good looking game!! I think the concept is likely similar, but they could have achieved their look in any number of ways and I would not presume.

And they have some legit artist(s)… :slight_smile:


Ooh, nice!

I’ve never worked with light probes before. Is it possible to attach them to a character and have them read the lighting in real time? Or would that likely be too expensive?
I’m planning on having some rather large scenes, so placing static light probes over everything might be a bit more than I bargained for.

Well, you know… SquareEnix.

By the way, isn’t the idea of reading the light value directly beneath a character how it worked in early 3D games like Quake? Honestly I think that’s how it worked even as late as games like Halo 2.
I wonder if someone has already done this in Unity, like for a retro shooter, and the only thing new is applying it to a sprite color.

…Well, I’m about done for the day. You guys have given me some great places to start working on my next day off. Thanks a bunch!


Light probes actually work the other way around: you bake as many as you want into the scene, then each renderer can choose to look for lighting information from the scene’s probes.

The key is you don’t need to fill the scene; you only need them where lighting changes over a short distance, such as across a shadow edge.
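For the runtime-reading question above: Unity can interpolate the baked probes at an arbitrary position for you via LightProbes.GetInterpolatedProbe. A hedged sketch that evaluates the probe lighting in one direction and uses it to tint a sprite — the evaluated color can exceed [0,1], so you may want to clamp or scale it:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class ProbeTintedSprite : MonoBehaviour
{
    SpriteRenderer sr;

    void Awake() => sr = GetComponent<SpriteRenderer>();

    void Update()
    {
        // Interpolate the scene's baked light probes at this position.
        LightProbes.GetInterpolatedProbe(transform.position, sr, out SphericalHarmonicsL2 sh);

        // Evaluate the probe lighting for one direction (straight up, here)
        // and use the result to tint the sprite. Tweak to taste.
        Vector3[] dirs = { Vector3.up };
        Color[] results = new Color[1];
        sh.Evaluate(dirs, results);
        sr.color = results[0];
    }
}
```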

And the major downside is that you can’t use it on procedural levels with procedural lighting, because it has to be baked. But this is not an issue with an authored world. (I’m not sure about modular stitching of prebaked probe data either; probably not possible.)