How to draw the front view and top view of the model

I thought of getting each vertex and projecting it with the front-view and top-view matrices, but I don’t know how to get the vertices to line up correctly.

For example, in the model below, I want to draw it like the one on the right.

You need a camera with an orthographic projection. Then just move it into the correct position and point it at the model.
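To illustrate the idea: an orthographic front or top view is essentially a projection that drops one coordinate. A minimal Python sketch (the function names and axis conventions are my own assumptions, with y up and z forward):

```python
# Minimal sketch of orthographic projection: the front view drops z,
# the top view drops y. Axis conventions are assumed (y up, z forward).

def front_view(vertices):
    """Project (x, y, z) vertices onto the XY plane."""
    return [(x, y) for x, y, z in vertices]

def top_view(vertices):
    """Project (x, y, z) vertices onto the XZ plane."""
    return [(x, z) for x, y, z in vertices]

# A unit cube as a quick check.
cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
front = front_view(cube)
top = top_view(cube)
```

In-engine, an orthographic camera does this projection (plus scaling to the view) for you.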


Thanks, I knew it would be an orthographic projection. But what I want is for the lines to connect the vertices directly and correctly; the faces of the model are not what I want.

Basically, what I want is exactly the same as the picture I drew.

Well, you need a wireframe shader.

However, in the simplest implementation of a wireframe shader the model will be triangulated, meaning it won’t look exactly like what you want; instead you’ll get something like this.

[attached screenshot: triangulated wireframe of the model]

And the top cap will also be triangulated.

Avoiding that is a more advanced topic. You’ll either need to preprocess the model somehow so it only shows the edges you want, or you could see if you can get it rendered as quads (i.e. the quad domain for the tessellator), which won’t help you with the cap of your model, or somehow mark the edges in it.

Basically, if triangles in the wireframe are not allowed, then you’re heading into custom shader or procedural geometry territory.

Yes, I did try keywords like wireframe shader / geometry shader.
But I either can’t get rid of the triangulation lines, or can’t work out how the lines should be connected.

It’s kind of frustrating.

Where are you getting the meshes from? Depending on your needs it may not be too difficult to export the edge data that you want and then use the GL API to render those directly.

Otherwise, an edge detection shader may be of use to you. This won’t give you as much control because it’s based on the depth buffer rather than on the model itself, but it may suit your purposes.
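For reference, a depth-based edge detector marks a pixel as an edge wherever the depth changes abruptly between neighbors. A rough CPU-side sketch of the idea in Python (the threshold value is an arbitrary assumption; a real version would run as a post-effect over the depth buffer):

```python
# Sketch of depth-based edge detection: a pixel is an edge when the
# depth difference to a right or bottom neighbor exceeds a threshold.
# The threshold (0.1) is an arbitrary assumption for this example.

def depth_edges(depth, threshold=0.1):
    h, w = len(depth), len(depth[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(depth[y][x] - depth[ny][nx]) > threshold:
                    edges[y][x] = True
    return edges

# Tiny example: a near object (depth 0.2) against a far background (1.0).
depth = [
    [1.0, 1.0, 1.0, 1.0],
    [1.0, 0.2, 0.2, 1.0],
    [1.0, 0.2, 0.2, 1.0],
    [1.0, 1.0, 1.0, 1.0],
]
edges = depth_edges(depth)
```

Note that this only catches depth discontinuities (silhouettes), which is exactly why it cannot show edges lying in the middle of a flat surface.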

I’d suggest preprocessing the model and probably implementing a custom loader, then drawing the wireframe yourself.

If all the edges are hard edges, then you can try an edge detection shader, like angrypenguin said.

Maybe providing the corresponding edge data when the model is made would be more feasible to implement.

But this is a bit problematic for me, because our model will probably be cut in real time (although I haven’t implemented this feature yet). If it relies on external data, this feature won’t be possible.

Edge detection is indeed a possible direction, but it has a big limitation: it can’t handle drawing selected edges in the middle of the model’s surfaces.

The external data is only needed initially, so that you know which edges are to be drawn and which are not. Once you know that, you can account for it in your mesh slicing code.

The issue is that when Unity imports a mesh it’s split into tris. Once you’re looking at the mesh data at runtime you don’t know which edges of which tris correspond to the original edges in your modelling program, so you don’t know which ones to draw and which ones to leave out.

Once you do know which ones to draw, you can simply apply the same algorithm to that data as you do to the base mesh itself when you’re cutting things.

I’m not sure what you’re specifically referring to. If you need to be able to render arbitrary edges within flat (or near flat) surfaces then yes, that approach isn’t suitable. For that I’m pretty sure you need some way to indicate which edges are to be drawn.

Here’s another thought of unknown value: I wonder if there’s some way you could pack relevant data into your vertices to tell it which edges to draw and which to skip?

It actually can be done.

You can encode face ID into vertex color.
Then you render that onto a secondary render target, so each face will have different color.
Then you use that information for edge detection.

It still requires preprocessing, and is likely not something the OP can do by himself.
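A rough sketch of how that face-ID pass would be used (Python for illustration; in practice the IDs come from a secondary render target written via vertex color): an edge is any pixel whose neighbor carries a different face ID.

```python
# Sketch of the face-ID approach: each face is rendered with a unique
# ID (encoded in vertex color in a real implementation); an edge is
# any pixel whose right or bottom neighbor has a different face ID.

def id_edges(face_ids):
    h, w = len(face_ids), len(face_ids[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and face_ids[y][x] != face_ids[ny][nx]:
                    edges[y][x] = True
    return edges

# 0 = background, 1 and 2 = two faces of the model meeting in the middle.
ids = [
    [0, 0, 0, 0],
    [0, 1, 2, 0],
    [0, 1, 2, 0],
    [0, 0, 0, 0],
]
edges = id_edges(ids)
```

Unlike depth-based detection, this finds the boundary between faces 1 and 2 even if they are at the same depth.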

Yeah, that’s what I mean. If you’re going to pre-process it then I’m sure other approaches would be much more straightforward.

thank you.

Because the model may be cut at runtime, the pre-processing approach does not work.

No, a pre-processing step does not stop you from also doing things at runtime. And the specific method mentioned will work at runtime, no problem.

What approaches have you tried, and how far did you get?

Of course it works. You can preprocess the modified model at runtime. And if you are building it at runtime, you’ll have an easier time storing edge data.

Sorry, I can’t think of any way to do the pre-processing.

The only way I can think of is to save the edges manually and then draw those edges out.

But if the vertices and edges of my model are likely to change, then my so-called pre-processing is invalidated.

  1. For all faces, find neighbors.
  2. For all faces, compare their surface normal with neighbor’s.
  3. If the normals are not (roughly) the same, there’s an edge between the face and its neighbor.

You can build neighbor information from any model upon load.
You can store the model in a form that includes that information and prepare rendering mesh when it is altered.
And you can perform operations on that “internal model” in a way that keeps neighbor information correct.
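The steps above can be sketched roughly like this (Python for illustration; the function names and the normal threshold are my assumptions). Boundary edges, which belong to only one face, are always drawn:

```python
# Sketch of hard-edge detection: build an edge -> faces map, then mark
# an edge as "hard" (to be drawn) when the two faces sharing it have
# clearly different normals. Vertices are (x, y, z) tuples; faces are
# triangles given as vertex indices.

from collections import defaultdict
from math import sqrt

def face_normal(verts, face):
    a, b, c = (verts[i] for i in face)
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    n = (u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0])
    length = sqrt(sum(x * x for x in n)) or 1.0
    return tuple(x / length for x in n)

def hard_edges(verts, faces, cos_threshold=0.99):
    normals = [face_normal(verts, f) for f in faces]
    edge_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        for i in range(3):
            edge = tuple(sorted((face[i], face[(i + 1) % 3])))
            edge_faces[edge].append(fi)
    result = []
    for edge, fis in edge_faces.items():
        if len(fis) == 1:            # boundary edge: always draw
            result.append(edge)
        elif len(fis) == 2:
            n1, n2 = normals[fis[0]], normals[fis[1]]
            dot = sum(a * b for a, b in zip(n1, n2))
            if dot < cos_threshold:  # normals differ: hard edge
                result.append(edge)
    return result

# A flat quad split into two coplanar triangles: the internal diagonal
# (0, 2) is not a hard edge, so only the four outer edges are kept.
quad_verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
quad_faces = [(0, 1, 2), (0, 2, 3)]
edges = hard_edges(quad_verts, quad_faces)
```

Since this runs over plain vertex and face arrays, the same routine can be re-applied after a runtime cut, which is exactly the point made above.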