How do you dynamically tile sub-portions of a plane?

I’m trying to render a small terrain mesh for a game I’m working on. The mesh building works, but I’m unsure how to map sub-textures onto it.

Right now, for testing purposes, I use a single simple texture and tile it 32x32, but in the long run I want one texture containing various kinds of tiles, with a unique tile rendered per square of the overall plane. I build the mesh from a matrix of what amounts to “height values,” plus a type for each area that determines whether there should be walls around the square and whether to render a grass or sand texture in that square. The output texture is meant to be tiled as a grid; it contains 32x32 “squares.” I’ve done my best to reuse vertices and map the triangles over them, but I’m not sure how to approach the shader for it.
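For concreteness, here’s how a tile index would map to a UV rectangle in an atlas like that (a minimal Python sketch; the constants and names are illustrative, not from any particular engine):

```python
# Sketch: map a tile index to a UV rectangle in a square texture atlas.
# Assumes the atlas holds TILES_PER_ROW x TILES_PER_ROW tiles, laid out
# row-major, with (0, 0) at the bottom-left of UV space.

TILES_PER_ROW = 32
TILE_UV_SIZE = 1.0 / TILES_PER_ROW  # width/height of one tile in UV space

def tile_uv_rect(tile_index):
    """Return (u_min, v_min, u_max, v_max) for the given tile index."""
    col = tile_index % TILES_PER_ROW
    row = tile_index // TILES_PER_ROW
    u_min = col * TILE_UV_SIZE
    v_min = row * TILE_UV_SIZE
    return (u_min, v_min, u_min + TILE_UV_SIZE, v_min + TILE_UV_SIZE)

print(tile_uv_rect(0))   # first tile: (0.0, 0.0, 0.03125, 0.03125)
```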

I originally thought to flatten the matrix into an array and pass that in, then use the UV coords to find the location in the array and determine which actual texture coords to pull out. But then I found out that you can’t really pass an array to shaders (forgive me if I’m wrong, my experience with shaders is limited) per an article I read (which described a way to hack an array into a shader). That could work, but I’d prefer to avoid hacks if possible.
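For reference, this is the lookup such a shader would have to perform per fragment, sketched on the CPU in Python (the grid and atlas sizes here are illustrative):

```python
# Sketch of the lookup: use the plane-wide UV to find the grid cell,
# read its tile type from the flattened matrix, then remap the
# fractional part of the UV into that tile's atlas rectangle.

GRID_SIZE = 4        # plane is GRID_SIZE x GRID_SIZE squares
ATLAS_TILES = 32     # atlas is ATLAS_TILES x ATLAS_TILES tiles

tile_types = [0] * (GRID_SIZE * GRID_SIZE)  # flattened matrix of tile indices
tile_types[5] = 3    # e.g. cell (1, 1) uses atlas tile 3

def atlas_uv(u, v):
    """Map a plane UV in [0, 1) to the corresponding atlas UV."""
    cell_x = int(u * GRID_SIZE)
    cell_y = int(v * GRID_SIZE)
    tile = tile_types[cell_y * GRID_SIZE + cell_x]
    # Fractional position inside the cell:
    frac_u = u * GRID_SIZE - cell_x
    frac_v = v * GRID_SIZE - cell_y
    # The tile's origin in the atlas:
    tile_u = (tile % ATLAS_TILES) / ATLAS_TILES
    tile_v = (tile // ATLAS_TILES) / ATLAS_TILES
    return (tile_u + frac_u / ATLAS_TILES, tile_v + frac_v / ATLAS_TILES)
```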

Another thought I had was to assign the UVs to exactly what I need from the texture, so that the shader would be dead simple (maybe even the default). But that would mean creating 4 vertices per grid square, since I can’t assign more than one set of UV coords to a vertex (some vertices are shared by up to 4 adjacent squares).
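As a rough sketch of that duplicated-vertex approach (the function names, flat-plane positions, and winding here are illustrative, not from any particular engine):

```python
# Sketch: build a mesh with four vertices per grid cell so each cell can
# carry its own atlas UVs. tile_of(x, z) and tile_uv_rect(tile) stand in
# for your own tile lookup; positions here are a flat plane at y = 0.

def build_cell_mesh(grid_w, grid_h, tile_of, tile_uv_rect):
    positions, uvs, indices = [], [], []
    for z in range(grid_h):
        for x in range(grid_w):
            base = len(positions)
            u0, v0, u1, v1 = tile_uv_rect(tile_of(x, z))
            # Four corners of the cell, each with this cell's own UVs.
            positions += [(x, 0, z), (x + 1, 0, z),
                          (x, 0, z + 1), (x + 1, 0, z + 1)]
            uvs += [(u0, v0), (u1, v0), (u0, v1), (u1, v1)]
            # Two triangles per cell.
            indices += [base, base + 2, base + 1,
                        base + 1, base + 2, base + 3]
    return positions, uvs, indices
```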

Any thoughts or ideas about how to make a shader that dynamically renders part of a texture across a plane (think “tilemap,” but not for 2D)? And is reusing vertices really a performance enhancement, or am I avoiding that route for no reason?

You want to have one piece of position information and four pieces of UV information per grid point. You might imagine that you could do this with two separate arrays: a position array and a UV array. Each triangle would then require six indices total: three position indices and three UV indices. You couldn’t use the same indices for positions and UVs because the arrays aren’t parallel; you would have two indices associated with each vertex that you draw.
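For what it’s worth, formats like OBJ store exactly this kind of dual indexing, and loaders typically resolve it on the CPU by de-duplicating (position index, UV index) pairs into a single vertex buffer. A Python sketch:

```python
# Sketch: collapse separate position/UV index lists (OBJ-style) into a
# single vertex buffer with one index per drawn vertex, de-duplicating
# repeated (position index, UV index) pairs.

def flatten_indices(pos_indices, uv_indices, positions, uvs):
    vertex_map = {}          # (pos_idx, uv_idx) -> new vertex index
    vertices, indices = [], []
    for pi, ui in zip(pos_indices, uv_indices):
        key = (pi, ui)
        if key not in vertex_map:
            vertex_map[key] = len(vertices)
            vertices.append(positions[pi] + uvs[ui])  # concatenated tuple
        indices.append(vertex_map[key])
    return vertices, indices
```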

Unfortunately, it is not possible to use more than one index per vertex. Here is a very nice StackOverflow answer to a similar question.

Can anything be done? Maybe. You can get a vertex ID as input in your shader using the SV_VertexID semantic. If you order your vertices and triangles in a clever way, you may be able to use the vertex ID to compute the horizontal (x, z) vertex position.

Of course, you would still have to pass the vertex height to your shader. Clearly you can’t do it through a separate array, but you can attach it to your UV data, so that for each vertex you have three floats: (y, U, V).

With this approach, you completely get rid of (x, z) data, but you end up duplicating vertex heights. You also have to come up with a reliable way of computing the (x, z) position from the vertex index.
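As a sketch of that computation (in Python for illustration; it assumes vertices are laid out row-major over a grid VERTS_PER_ROW vertices wide — a shader would do the same arithmetic on SV_VertexID, with VERTS_PER_ROW as a constant):

```python
# Sketch: recover the (x, z) grid position from a flat vertex index,
# assuming vertices are stored row-major, VERTS_PER_ROW vertices per row.

VERTS_PER_ROW = 33  # a 32x32-cell grid has 33 shared vertices per row

def grid_position(vertex_id):
    x = vertex_id % VERTS_PER_ROW
    z = vertex_id // VERTS_PER_ROW
    return (x, z)
```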

Is the above approach worth it? Instead of using 5 floats per vertex, you use 3 floats per vertex plus a bit of computation. On the other hand, your shader now depends on the order in which vertices are passed in, which might introduce bugs in the future if you modify your mesh generation code.
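To put rough numbers on it (a back-of-envelope comparison assuming 4 duplicated vertices per cell on a 32x32 grid and 4-byte floats; the figures are illustrative):

```python
# Back-of-envelope: per-vertex data for a 32x32-cell grid with four
# vertices per cell, at 4 bytes per float.

verts = 32 * 32 * 4
full = verts * 5 * 4      # (x, y, z, U, V) per vertex
packed = verts * 3 * 4    # (y, U, V), with (x, z) derived from vertex ID
print(full, packed)       # 81920 vs 49152 bytes
```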

Unless your grid is very large, reusing vertices is probably not worth the effort. The other meshes in your game will likely use up a lot more data.