Hello. I’m fairly new to shaders (and to Unity, for that matter), but not to game dev (I worked with MonoGame for years before this), and I’ve run into a problem that has barely any Google results.
I’m trying to make a Vectrex shader for a game of mine (so far it looks like this):
I’m drawing these with a LineStream geometry shader, but I’ve hit a snag: I need edges that belong to two visible faces to not be drawn. I thought I could easily achieve this by using triangleadj as my input, but it only seems to provide vert data in [0], [1], and [2]; the other 3 elements are all just verts at (0,0,0) (meaning I can get data about the current face, but not the adjacent faces).
Googling this issue has only yielded results from 5+ years ago stating that Unity doesn’t support it, and I was hoping things might have changed since then. Is it still unsupported?
…Aw. I guess that explains why I could barely find any information on it. Alright then, I guess I’ll have to search for something else. If it’s not too much of a hassle, which method would you recommend I use if I want to draw only these edges?
I know I can get an effect that’s kind of close by drawing the model again, slightly enlarged along the normals and with inverted culling, but I don’t think it would work for the ears (which are a 2D plane) or the horns (each horn is a tri with two verts in the exact same location; I had to do it this way since Unity doesn’t seem to import loose vertices, i.e. ones not belonging to any face).
I would recommend a post-process outline based on the camera depth normals texture for the main parts of the mesh, and a line renderer for the antennae.
There’s also a thread or two somewhere on this forum where someone attempts this as an “on object” post process, reading the camera depth normals texture in the object’s shader as an alternative to a full post process, so the effect can be limited to specific objects.
The last option is to store adjacency data for the mesh manually, either by shoving data into the mesh’s unused UVs, or by using a per-triangle index array / structured buffer set on the material.
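A rough sketch of what building that per-edge adjacency data could look like on the C# side (untested, and it assumes neighbouring tris actually share vertex indices across their common edge):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class AdjacencyBuilder
{
    // For each triangle edge, find the vertex on the far side of that
    // edge (the third vertex of the neighbouring triangle), or -1 if
    // no other triangle shares the edge.
    public static int[] BuildAdjacency(Mesh mesh)
    {
        int[] tris = mesh.triangles;

        // Map each directed edge (a, b) to the vertex opposite it.
        var opposite = new Dictionary<(int, int), int>();
        for (int i = 0; i < tris.Length; i += 3)
        {
            opposite[(tris[i],     tris[i + 1])] = tris[i + 2];
            opposite[(tris[i + 1], tris[i + 2])] = tris[i];
            opposite[(tris[i + 2], tris[i])]     = tris[i + 1];
        }

        // A neighbour with consistent winding stores the same edge
        // reversed, so for our edge (a, b) we look up (b, a).
        var adjacency = new int[tris.Length]; // one entry per edge
        for (int i = 0; i < tris.Length; i += 3)
        {
            adjacency[i]     = Lookup(opposite, tris[i + 1], tris[i]);
            adjacency[i + 1] = Lookup(opposite, tris[i + 2], tris[i + 1]);
            adjacency[i + 2] = Lookup(opposite, tris[i],     tris[i + 2]);
        }
        return adjacency;
    }

    static int Lookup(Dictionary<(int, int), int> d, int a, int b)
        => d.TryGetValue((a, b), out int v) ? v : -1;
}
```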
Hm… definitely going to have to look up some of those terms (I’m a bit of a shader noob), but now I’m at least pointed in the right direction! I’ll give options 1 and 3 a try, and if all goes well, report my findings. Thank you very much!
(P.S.: off topic, but I can’t resist. Quite the coincidence that my character’s head shares a resemblance to your avatar : P).
…this is embarrassing, but could you give me a bit more information about that option please? I did some searching, but I’m not quite sure how it would work. I know about storing data in unused UVs (I’m storing line colours in one of them now), but even if I were to store the index of every vert that’s adjacent to every other vert, I’m not sure how that would tell me what the 3 adjacent triangles are when it comes to the geometry step.
Since you’d know the IDs of the adjacent verts, you’d have access to those verts’ data, so you can get their adjacent vert IDs as well and voilà, you can construct the adjacent triangles. One potential downside to this solution, though, is vertex poles with more than 4 edges, so you’d have to be careful with how your meshes are constructed.
Going the structured buffer route could be easier to work with, and the data only needs to be sent when it’s needed, instead of being embedded in the mesh data that’s moved around everywhere.
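Getting it onto the material could be as simple as this, using the builder sketched above (the buffer and property names here are just placeholders):

```csharp
// One int per triangle edge; -1 means "no neighbouring triangle".
int[] adjacency = AdjacencyBuilder.BuildAdjacency(meshFilter.sharedMesh);

var buffer = new ComputeBuffer(adjacency.Length, sizeof(int));
buffer.SetData(adjacency);

// The geometry shader reads this as a StructuredBuffer<int>, indexed
// by primitiveID * 3 + edge. Remember to Release() the buffer later.
meshRenderer.sharedMaterial.SetBuffer("_AdjacencyBuffer", buffer);
```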
I’m back! Took me a while (not very good with shaders yet), but after learning about and using structured buffers, I managed to make some progress!
This is exactly the effect I was after! (will share my code once I’ve cleaned it up).
However… there was an additional rule I needed to add afterwards. In the image above, the rule is: if an edge has no adjacent tri, or if its adjacent tri is backfacing, then draw it (works perfectly; see the sketch at the end of this post). The other rule I needed was to allow some edges to be drawn at all times. I accomplished this by setting UV values, but then this started to happen:
(The one on the right was meant to have always-drawn lines on its mouth and feet, like this):
Edges marked in red are edges which belong to only 1 tri (aka, the “edge” edges). For whatever reason, if I export the verts with UVs set in Blender, it breaks my tri-adjacency-calculating .cs code, and I can’t really figure out why.
I guess I’ll write back once I figure it out; just wanted to update.
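In the meantime, for reference, the first rule boils down to roughly this (a CPU-side C# sketch of what my geometry shader does per edge; sign conventions depend on your winding order):

```csharp
using UnityEngine;

public static class EdgeRule
{
    // True if triangle (a, b, c) faces the camera.
    static bool FacesCamera(Vector3 a, Vector3 b, Vector3 c, Vector3 camPos)
    {
        Vector3 n = Vector3.Cross(b - a, c - a);
        return Vector3.Dot(n, camPos - a) > 0f;
    }

    // Draw an edge of a visible tri if it has no adjacent tri, or if
    // the adjacent tri is backfacing (i.e. the edge is a silhouette).
    // e0/e1 are the edge verts, own/adj the opposite verts on each side.
    public static bool DrawEdge(Vector3 e0, Vector3 e1, Vector3 own,
                                Vector3 adj, bool hasAdjacent, Vector3 camPos)
    {
        if (!FacesCamera(e0, e1, own, camPos)) return false;
        if (!hasAdjacent) return true;
        return !FacesCamera(e1, e0, adj, camPos); // neighbour winds the other way
    }
}
```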
EDIT: Ok, figured out at least what’s causing it. My triangle adjacency assigning script looks through the MeshFilter.sharedMesh.triangles array for triangles which share two vert indices. When I set the UVs in Blender, I give each tri in the mesh its own UVs, and I believe that’s what’s causing them to “unlink” in the MeshFilter: the verts get duplicated along the UV seams, so the tris no longer share indices.
EDIT2: After assigning the vert indices in Blender to uv0.x: success!
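(For anyone hitting the same UV-split problem who can’t re-export their models, an alternative I considered was to weld verts by position on the C# side before looking for shared edges. Untested sketch; exact float comparison is fine here because split verts have identical positions:)

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class VertexWelder
{
    // Returns a copy of mesh.triangles where verts sharing a position
    // (e.g. split by per-tri UVs) are remapped to a single index.
    public static int[] WeldedTriangles(Mesh mesh)
    {
        Vector3[] verts = mesh.vertices;
        var firstAtPos = new Dictionary<Vector3, int>();
        var remap = new int[verts.Length];
        for (int i = 0; i < verts.Length; i++)
        {
            if (firstAtPos.TryGetValue(verts[i], out int first))
                remap[i] = first;              // duplicate: reuse first index
            else
                firstAtPos[verts[i]] = remap[i] = i;
        }

        int[] tris = mesh.triangles;
        var welded = new int[tris.Length];
        for (int i = 0; i < tris.Length; i++)
            welded[i] = remap[tris[i]];
        return welded;
    }
}
```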
So… sorry it took so long (Unity is just a spare-time hobby for me), but I’ve finally gotten around to both publishing my work to a proper Git repo, and fixing the many bugs I discovered while cleaning it up and stress testing.
You can find it here (where hopefully people will understand my ramblings; years of working as the only programmer for a company has ironically improved my normal social skills, but reduced my ability to communicate with other programmers): https://github.com/Milun/unity-solidwire-shader
Milun, great work! I was working towards the same effect and stumbled across your Git files before I saw this thread. You did a great job documenting everything and commenting your code. I can get it working just fine with your Blender/Unity workflow, but I’m a 3ds Max guy, so with your permission, I might branch off it and try getting it working for my 3ds Max / Unity workflow. Max can import Python scripts, so that shouldn’t be too tricky. I might change your code a bit for some Max-specific options, and modify the Unity script to allow for some additional visual modifications for that Vectrex look (line intensity and colors affected by overlay graphics, etc.). Again, thanks for your perseverance (and everyone’s assistance) in solving the “shared visible face edge” puzzle. You saved me a lot of work and research!
I’m gonna bump this 'cause… WTF? I just wasted about 3 days before realizing that it silently fails if you try to use triangleadj, putting the main verts in [0,1,2] while [3,4,5] are left as zero. This seems undocumented; in fact the Unity docs (Unity - Manual: HLSL in Unity) point you to the HLSL docs, and if you look up geometry shaders there you’ll find Geometry-Shader Object (Win32 apps | Microsoft Learn), which actually has an example using triangleadj.
Is this something that could be added to URP by setting some state bits? This doesn’t seem like something that would need much engine support; it would be done by the API and driver, right?
The main things you need are the ability to set the mesh topology, and to supply an index array in the correct format. But Unity explicitly does not support the adjacency topology types for its internal mesh format, and the topology would need to be set when the data is uploaded to the GPU, which happens fairly deep in Unity’s native code, so I have no idea how you could set it yourself.
You could set up the mesh vertex data you need to be accessible to the shader using GraphicsBuffers and 2021.2’s Mesh.GetVertexBuffer() to reuse the existing mesh data, then construct an adjacency index list for your mesh that you pass in as a structured buffer, and use the primitive ID to get the offset into the adjacency data.
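An untested sketch of the 2021.2 side of that (the compute shader, kernel, and buffer names are placeholders):

```csharp
using UnityEngine;

public class MeshBufferSetup : MonoBehaviour
{
    public ComputeShader edgeCompute; // placeholder compute shader
    public Mesh mesh;

    void Start()
    {
        // Ask Unity to keep the mesh data GPU-readable (2021.2+).
        mesh.vertexBufferTarget |= GraphicsBuffer.Target.Raw;
        mesh.indexBufferTarget  |= GraphicsBuffer.Target.Raw;

        // Stream 0 usually holds positions; check the mesh's vertex
        // layout if it uses multiple streams. Dispose() these when done.
        GraphicsBuffer vertexBuffer = mesh.GetVertexBuffer(0);
        GraphicsBuffer indexBuffer  = mesh.GetIndexBuffer();

        int kernel = edgeCompute.FindKernel("CSMain");
        edgeCompute.SetBuffer(kernel, "_Vertices", vertexBuffer);
        edgeCompute.SetBuffer(kernel, "_Indices", indexBuffer);
        // ...plus the adjacency structured buffer described above,
        // indexed by primitiveID * 3 + edge.
    }
}
```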
And once you get that far, you might try putting all of that into a compute shader to construct the mesh with that rather than using a geometry shader so it’s faster, and works on platforms that support compute but not geometry shaders … like all Apple hardware.
The reason I wanted to do it in a geometry shader is because I want to be able to simply render the whole scene (or at least a large subset of renderers) in an SRP batch with a replacement material, then let it do dynamic batching, animation for skinned meshes, etc.
Generating special adjacency data for every mesh is a no-go. Geometry Shaders seemed like the easiest way to just make that happen; I don’t know if that’s possible with compute shaders without totally rewriting pieces of the pipeline.
I believe you could write a custom importer that stores adjacency data in the UV channels of the mesh; re-import all your meshes and it’ll automatically take care of it all for you. Then you’ll have access to this in compute and can avoid the runtime cost of calculating it.
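For example, here’s a variant of Milun’s uv0.x trick as an editor-side importer (untested sketch; the script must live in an Editor folder, and it writes each vert’s “welded” index into UV channel 3 so shaders can re-link verts split by UV seams):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEditor;

// Runs on every model import.
class AdjacencyPostprocessor : AssetPostprocessor
{
    void OnPostprocessModel(GameObject go)
    {
        foreach (var mf in go.GetComponentsInChildren<MeshFilter>())
        {
            Mesh mesh = mf.sharedMesh;
            Vector3[] verts = mesh.vertices;
            var firstAtPos = new Dictionary<Vector3, int>();
            var uvs = new Vector2[verts.Length];
            for (int i = 0; i < verts.Length; i++)
            {
                // First vert seen at each position is the canonical one.
                if (!firstAtPos.TryGetValue(verts[i], out int first))
                    firstAtPos[verts[i]] = first = i;
                uvs[i] = new Vector2(first, 0f);
            }
            mesh.uv4 = uvs; // UV channel 3 (TEXCOORD3)
        }
    }
}
```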
It’s definitely possible with compute shaders in 2021.2, though (technically possible before then too, but more expensive due to the memory copies needed). With the GraphicsBuffers bgolus mentioned, you can get the GPU-side reference to a mesh’s data to use in the compute shader, without having to modify the pipeline to run mesh data through your compute.
Adjacency data isn’t something geometry shaders make “just happen”; they use pre-calculated index data. Just like a triangle mesh has to specify a list of triangles 3 vertex indices at a time, triangle adjacency is the same kind of data, with 3 vertex indices for the triangle and 3 more for the adjacency data. Since Unity doesn’t support calculating that data, you would have to do it yourself, no matter what.
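For reference, D3D’s triangle-list-with-adjacency layout interleaves the two, six indices per primitive:

```csharp
// Plain triangle list:           3 indices per primitive
//   [ v0, v1, v2 ]
// Triangle list with adjacency:  6 indices per primitive
//   [ v0, a01, v1, a12, v2, a20 ]
// where aXY is the vertex opposite edge (vX, vY) in the neighbouring
// triangle; this is exactly the index data Unity won't build for you.
```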
So the idea is to get the edge info in some sort of hash-like buffer in a compute shader, use that to find which edges are silhouettes, and extrude from there? At least for 2-manifold meshes that might work, if I’m careful about how the hashing is done and collisions are resolved.
No need for a hash. Vertex data is all in matching-length arrays for each kind of data (position, normal, UV, color, etc.), where each index of the arrays is the data for a single vertex. You just need an int array that points to the adjacent vertex index per triangle edge, with some value like -1 to mark edges with no adjacent vertex.
Also, this terminology makes me think you’re picturing meshes as they exist in non-GPU mesh formats. Generally speaking, every mesh used in a realtime 3D game is non-manifold, because each vertex can only hold one set of data, so there are lots of things that can create seams via split edges, most commonly UV seams. 2-manifold geometry basically doesn’t exist in the real world of GPU rendering.