So I’m on a mission to incorporate dynamic shadows into a 2D game (think Thomas Was Alone style). I know such a feat is possible in Unity Free because that’s where Thomas Was Alone was developed.
Anyway, I got hold of some code generously provided for us here by ivkoni a few years back. Although this code is nifty and gives the visual I’d like to achieve, it doesn’t necessarily function the way I have in mind. The provided code requires you to attach the shadow rendering script to every object in the scene that you want to have a shadow, as well as setting up empty GameObjects on each of these objects from which the shadow is drawn. My idea would be a tad different.
I’ve put together a basic mockup graphic of the idea:
I’d like all of the code and operations to be handled by a single GameObject (such as a light point). This light point will consist of a light point origin and a “shadow texture” in the form of a plane that extends to the camera boundaries. This texture will NOT be immediately visible. The texture will be placed at the same Z depth as every other object in the scene, and will thereby detect all of their colliders. It will extract the vertices of each object, and some math will then calculate a polygon from the outermost vertices of each object, extended out to the edge of the screen. Finally, the shadow texture will only be visible within this polygon.
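To make the idea a bit more concrete, here is a minimal, illustrative sketch (my own naming, not an existing script) of the core projection step: push the two endpoints of a blocking edge away from the light until they are well past the camera bounds, which gives the four corners of one shadow quad.

```csharp
using UnityEngine;

// A minimal sketch of the idea described above (illustrative names, not an
// existing script): project a blocker edge away from the light point and use
// the near edge plus the projected far edge as a shadow quad.
public class ShadowQuadSketch : MonoBehaviour
{
    public Transform lightPoint;           // the single light origin
    public float projectionDistance = 50f; // far enough to reach past the screen edge

    // Returns the four corners of the shadow polygon cast by the edge (a, b).
    public Vector3[] BuildShadowQuad(Vector3 a, Vector3 b)
    {
        Vector3 l = lightPoint.position;
        Vector3 aFar = a + (a - l).normalized * projectionDistance;
        Vector3 bFar = b + (b - l).normalized * projectionDistance;
        return new[] { a, b, bFar, aFar }; // near edge + far edge, ready for a quad mesh
    }
}
```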
From what I understand based on watching Mike Bithell’s videos/blog posts, this is similar to how TWA calculates its shadows - no raycasting involved.
How far-fetched would this be for us to accomplish here, for everyone else yearning for similar results?
Hey, yep… that’s precisely how I did it, warping 2D planes as you describe… only difference is that I didn’t think to crop based on screen space… because I’m an idiot
The only limitation of this solution is that it really only works in a straightforward fashion for rectangles. Obviously fine in my case, but potentially problematic for anything more advanced.
I really hope I can improve my shader knowledge to the point of understanding this approach and modifying it to my needs. http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/
The raycasts are just way too expensive for the resolution I need (~1000 rays), and the above solution’s shadow texture doesn’t grow with the zoom level but with the screen resolution, giving you pixel accuracy at any zoom level for a constant computation load.
We will see how far I get, as I have finished the core of my game and am starting to learn shaders. But I’m not a game developer; I’m just doing this for myself and am not looking to make money out of any of it.
Yeah, I just stumbled upon his YouTube channel: http://www.youtube.com/watch?v=KZty_GDX0dU
Hmm, I’m wondering if I can modify this to work with Unity using the edge points of 3D objects, since his approach is completely 2D, I guess.
OK, tonight I started coding this thing for Unity. Maybe we can figure it out together if you guys still want this;
I will try desperately, at least…
I just started with the mesh member variables, so I’m pretty sure I’m still doing nonsense, but it’s a start.
My progress:
PREPARATION: I filter out all vertices that have their normals pointing down (normals[i].y == -1); we are now 2D.
PREPARATION: Then I create a Vector3[ ][ ], meaning Vector3[polygon#][vertex#].
- Currently the order is wrong if I don’t exchange the last vertex with the first; I have to study meshes some more.
FACES: Then I calculate which edges would be visible (ignoring occlusion), just checking whether the light source is on the right half of each edge (a small sketch of this test follows after this list).
FUTURE, see serumas’ solution:
SEGMENTS (whatever he does there, do you know?)
LINES
CUTS
RECALC
FINAL
Let’s hope for the best… (Download removed: outdated, check newer posts)
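Here is a small illustrative sketch of the FACES test mentioned above (not the actual script): for a counter-clockwise ordered polygon the interior lies to the left of each directed edge, so an edge faces the light exactly when the light is on its right-hand side, which is just the sign of a 2D cross product.

```csharp
using UnityEngine;

// Illustrative sketch of the "is the light on the right half of the edge" check.
// Assumes polygons are ordered counter-clockwise, so the outward side of each
// directed edge a -> b is its right-hand side.
public static class EdgeFacing
{
    public static bool FacesLight(Vector2 a, Vector2 b, Vector2 light)
    {
        Vector2 edge = b - a;
        Vector2 toLight = light - a;
        float cross = edge.x * toLight.y - edge.y * toLight.x;
        return cross < 0f; // negative: light lies to the right (outward side) of a -> b
    }
}
```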
I contacted serumas; he offered me his C++ code, and now I am awaiting his email.
Until I get it I will update the test scene and script a bit to have the 2D polygon extraction ready. I think my current polygon extraction will lead me nowhere… however, what I have now is:
a struct[ ] for the polygons
- to store additional stuff for each polygon (like a relevant radius or attached script components to fade the wall out)
all vertices and triangles of the bottom plane
- but I can’t get the outline polygon’s points in the right order for every possible polygon
So basically I need to solve:
“Is there a way to get the outer points, in the right order (CCW or CW), of any convex or concave polygon? (inverse triangulation?)”
I found something useful, maybe; I’m reading it… Link
Edit: got it! “A boundary edge is an edge that belongs to exactly one triangle.” With that I can create the polygon now… can’t wait to get home.
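In case it’s useful to anyone else, here is a small illustrative sketch of that boundary-edge idea: count how many triangles use each undirected edge, and the edges used exactly once form the outline. Names are mine, not the project code.

```csharp
using System.Collections.Generic;

// Sketch of the boundary-edge idea quoted above: count how many triangles use
// each undirected edge; edges used exactly once belong to the outline polygon.
public static class BoundaryEdges
{
    // Returns index pairs {a, b} of edges that belong to exactly one triangle.
    public static List<int[]> Find(int[] triangles)
    {
        var counts = new Dictionary<long, int>();
        for (int i = 0; i < triangles.Length; i += 3)
        {
            Count(counts, triangles[i],     triangles[i + 1]);
            Count(counts, triangles[i + 1], triangles[i + 2]);
            Count(counts, triangles[i + 2], triangles[i]);
        }

        var boundary = new List<int[]>();
        foreach (var kv in counts)
            if (kv.Value == 1) // used by exactly one triangle -> outline edge
                boundary.Add(new[] { (int)(kv.Key >> 32), (int)(kv.Key & 0xFFFFFFFF) });
        return boundary;
    }

    static void Count(Dictionary<long, int> counts, int a, int b)
    {
        // Order-independent key so (a, b) and (b, a) count as the same edge.
        long key = a < b ? ((long)a << 32) | (uint)b : ((long)b << 32) | (uint)a;
        int c;
        counts.TryGetValue(key, out c);
        counts[key] = c + 1;
    }
}
```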
Info:
Why do I even make everything dependent on the bottom plane?
- Since these 2D fake shadows are to be used in top-down or side-scrolling 2D environments, we can always flatten objects and use their invisible bottom side as the sight-blocking polygon reference.
- Note: maybe of use for flattening the bottom: Fracture
Check the number of valid vertices for array initialization (I know, a List would be better).
Save all valid vertices and their indices from the original mesh.
Extract all triangles using these points and set their indices to correspond to the new bottom vertices[ ].
Now I basically have a new plane mesh that is the bottom plane of the object. Now I call ExtractPolygon, passing these arrays (a rough sketch of this filtering step follows below).
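A rough sketch of this filtering step, under the assumption that the bottom face is marked by normals pointing straight down (illustrative names, not the actual project code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the bottom-plane extraction described above: keep only vertices
// whose normal points down, then rebuild the triangle list with indices into
// the reduced vertex array.
public static class BottomFace
{
    public static void Extract(Mesh mesh, out Vector3[] bottomVerts, out int[] bottomTris)
    {
        Vector3[] verts = mesh.vertices;
        Vector3[] normals = mesh.normals;
        int[] tris = mesh.triangles;

        // Map original vertex index -> new index in the bottom vertex array.
        var remap = new Dictionary<int, int>();
        var keptVerts = new List<Vector3>();
        for (int i = 0; i < verts.Length; i++)
        {
            if (normals[i].y < -0.99f) // normal pointing (almost exactly) down
            {
                remap[i] = keptVerts.Count;
                keptVerts.Add(verts[i]);
            }
        }

        // Keep only triangles whose three corners all survived the filter.
        var keptTris = new List<int>();
        for (int i = 0; i < tris.Length; i += 3)
        {
            if (remap.ContainsKey(tris[i]) && remap.ContainsKey(tris[i + 1]) && remap.ContainsKey(tris[i + 2]))
            {
                keptTris.Add(remap[tris[i]]);
                keptTris.Add(remap[tris[i + 1]]);
                keptTris.Add(remap[tris[i + 2]]);
            }
        }

        bottomVerts = keptVerts.ToArray();
        bottomTris = keptTris.ToArray();
    }
}
```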
ExtractPolygon()
Get all edges of the triangles.
Extract all boundaryEdges (edges present in exactly one triangle).
Sort the edges List so that adjacent edges end up next to each other.
Easily extract the vertices out of this sorted list.
CCW check: check whether the inner angles in CCW direction are smaller than the outer angles.
- This is needed if the polygon was extracted in the wrong direction (it depends on the original mesh’s vertex order).
Finally, save the list as vertices[ ] in the Polygon struct; it’s now ready for use.
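For the CCW check, the steps above compare inner and outer angles; a common equivalent (not necessarily what this script does) is the signed-area/shoelace test: a positive signed area means the polygon is already counter-clockwise, otherwise the vertex order gets reversed.

```csharp
using UnityEngine;

// Sketch of a winding-order check via the signed area (shoelace formula).
// This plays the same role as the angle comparison described above; it is
// just one possible implementation, not the thread's actual code.
public static class WindingOrder
{
    public static float SignedArea(Vector2[] poly)
    {
        float area = 0f;
        for (int i = 0; i < poly.Length; i++)
        {
            Vector2 a = poly[i];
            Vector2 b = poly[(i + 1) % poly.Length];
            area += (a.x * b.y) - (b.x * a.y);
        }
        return 0.5f * area;
    }

    public static void EnsureCCW(Vector2[] poly)
    {
        if (SignedArea(poly) < 0f)
            System.Array.Reverse(poly); // was clockwise, flip to counter-clockwise
    }
}
```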
EDIT v0.13:
-added random Polygon generator
-added some buttons
-added FixCCWOrder (now always detecting the correct faces)
-fixed a small error in the sort loop of boundaryEdges
- I am currently adding serumas’ algorithm to this package
I am still working on the conversion of serumas’ C++ code; I’m slow (inexperienced with C++ syntax, and I can’t use std::vector).
Does anyone know if my substitute for std::vector is good or bad?
- Serumas uses std::vector<SEGMENT>, where SEGMENT is a struct. He passes this struct by pointer most of the time (to avoid copy overhead, I think).
- In C# I imitated this by using a List<SEGMENT>, but with SEGMENT as a class (because classes are passed by reference; pointers to structs are considered unsafe, I’ve read somewhere).
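For what it’s worth, here is a small sketch of that substitution. The SEGMENT fields below are placeholders, not serumas’ actual layout; the point is only that a class stored in a List is handed around by reference, much like passing a SEGMENT* in the C++ code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// SEGMENT as a class: List<Segment> stores references, so handing an element
// to a method never copies it (similar in spirit to the C++ pointer passing).
// Field names here are placeholders for illustration only.
public class Segment
{
    public List<Vector2> points = new List<Vector2>();
    public float distance;   // e.g. distance to the light source
    public bool visible;
}

public static class SegmentExample
{
    public static void MarkVisible(Segment s) // s is a reference, no copy is made
    {
        s.visible = true; // the mutation is seen through the caller's list element
    }

    public static void Demo()
    {
        var segments = new List<Segment> { new Segment() };
        MarkVisible(segments[0]);
        Debug.Log(segments[0].visible); // prints: True
    }
}
```

With a struct you would instead need ref parameters or have to write the modified copy back into the list, so a class seems like a reasonable substitute wherever the C++ code relies on pointer semantics.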
Going to bump this thread because I have something to add. I was able to modify the approach shown here to cast shadows like what Clopay wants. It’s still using raycasts, so it’s slow, but it’s better than nothing for now.
The way I did it requires Unity Pro, but essentially I have a separate camera capture the additive output from this script over a black background. I output that to a texture using a RenderTexture and put it onto a plane in front of the Main Camera. I then set the material to Multiply. In essence this flips what the script does and only darkens the environment where you can’t see. Here are a few screenshots of it in action:
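For anyone trying to reproduce this, here is a rough sketch of that setup. The layer name and the multiply shader (“Particles/Multiply”) are my assumptions, not necessarily what was actually used:

```csharp
using UnityEngine;

// Rough sketch of the setup described above: a second camera renders the
// additive light output over a black background into a RenderTexture, which a
// plane in front of the Main Camera then multiplies over the scene, darkening
// everything outside the lit area.
public class ShadowOverlaySetup : MonoBehaviour
{
    public Camera shadowCamera;   // renders only the light geometry
    public Renderer overlayPlane; // plane parented in front of the Main Camera

    void Start()
    {
        var rt = new RenderTexture(Screen.width, Screen.height, 16);
        shadowCamera.targetTexture = rt;
        shadowCamera.clearFlags = CameraClearFlags.SolidColor;
        shadowCamera.backgroundColor = Color.black;                    // unlit areas stay black
        shadowCamera.cullingMask = 1 << LayerMask.NameToLayer("Lights"); // hypothetical layer name

        // A multiply-style material darkens the main view wherever the capture is black.
        overlayPlane.material = new Material(Shader.Find("Particles/Multiply"));
        overlayPlane.material.mainTexture = rt;
    }
}
```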
I feel ashamed that it’s taking me so long to convert the C++ code for the geometric visibility… but I am still on it; it’s just that I suddenly have very little time for programming it, ~1 hour/week.
I just bought this asset to check out its internals.
I found that underneath it just uses raycasts again. There are some neat selective update features, however.
In the meantime I sporadically work on the geometric visibility based on serumas’ solution; for example, I scripted a kind of occlusion diagram to help me debug/understand his occlusion checks:
The unoptimized performance is currently about 150 fps while processing 500 segments with ~4 points on average.
My current algorithm is a kind of patchwork of different algorithms:
- First I extract all polygons of my wall objects with my extraction code (only once, at Start()).
- Then I create segments out of them with a heavily modified version of serumas’ code.
- Then I use serumas’ check functions to identify and drop invisible segments.
- The remaining segments are converted to lineSegments (each lineSegment contains only 2 points).
- These lineSegments are fed into a C# version of the redblobgames algorithm that I have not perfectly converted yet.
Most debugging features won’t show, since they rely heavily upon Debug.DrawLine, which doesn’t render in the WebPlayer build (I just discovered GL.LINES and am considering using it).
I could post the whole project if someone wants to have a look at it now, but it is unbelievably messy.
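In case it helps, here is a minimal sketch of the GL.LINES route: lines drawn from OnRenderObject show up in builds, unlike Debug.DrawLine. The line material is assumed to be any simple unlit colored material assigned in the Inspector.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of drawing debug lines with GL.LINES so they are visible in
// builds. lineMaterial is assumed to be a simple unlit colored material.
public class GLLineDrawer : MonoBehaviour
{
    public Material lineMaterial;
    public List<Vector3> linePoints = new List<Vector3>(); // pairs: start, end, start, end, ...

    void OnRenderObject()
    {
        if (lineMaterial == null || linePoints.Count < 2)
            return;

        lineMaterial.SetPass(0);
        GL.Begin(GL.LINES);
        GL.Color(Color.green);
        for (int i = 0; i + 1 < linePoints.Count; i += 2)
        {
            GL.Vertex(linePoints[i]);
            GL.Vertex(linePoints[i + 1]);
        }
        GL.End();
    }
}
```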
Maybe there is potential to use threading for this algorithm in the future, since it’s one big separate chunk with no GameObject interaction within its gears.
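As a rough illustration of that idea (placeholder names, not working project code): the geometry step can run on a worker thread as long as all Unity data is snapshotted beforehand and the mesh update stays on the main thread.

```csharp
using System.Threading;
using UnityEngine;

// Sketch of the threading idea: the pure geometry step (the placeholder
// ComputeVisibilityPolygon below) touches no Unity objects, so it can run on a
// worker thread; only the mesh update has to happen on the main thread.
public class ThreadedVisibility : MonoBehaviour
{
    Vector2[] result;          // written by the worker thread
    volatile bool resultReady;
    volatile bool busy;

    void Update()
    {
        if (!busy)
        {
            busy = true;
            Vector2 lightPos = transform.position; // snapshot Unity data on the main thread
            ThreadPool.QueueUserWorkItem(_ =>
            {
                result = ComputeVisibilityPolygon(lightPos); // pure math, no UnityEngine calls
                resultReady = true;
                busy = false;
            });
        }

        if (resultReady)
        {
            resultReady = false;
            ApplyToShadowMesh(result); // Unity API calls stay on the main thread
        }
    }

    // Placeholders standing in for the visibility algorithm described in this thread.
    Vector2[] ComputeVisibilityPolygon(Vector2 light) { return new Vector2[0]; }
    void ApplyToShadowMesh(Vector2[] poly) { }
}
```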