I’m trying to render a small terrain mesh for a game I’m working on. The mesh building works, but I’m not sure how to map sub-textures onto it.
Right now, for testing purposes, I use a single simple texture tiled 32x32, but in the long run I want one texture containing various kinds of tiles, with a unique tile rendered per square of the overall plane. I build the mesh from a matrix of what amounts to “height values,” plus a type for each area that determines whether there should be walls around the square and whether to render a grass or sand texture in it. The output texture is meant to be used as a grid: it contains 32x32 “squares.” I’ve done my best to re-use vertices and map the triangles over them, but I’m not sure how to approach the shader.
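For concreteness, the data I’m building from looks roughly like this (a sketch; all the names and sizes here are placeholders I made up for this post):

```cpp
// Sketch of the input data described above; names/sizes are placeholders.
enum class TileType { Grass, Sand };

struct Cell {
    float    height;   // the "height value" for this square
    TileType type;     // decides walls + which sub-texture to draw
};

constexpr int GRID_W = 64, GRID_H = 64;   // terrain squares (placeholder)
constexpr int ATLAS_TILES = 32;           // the atlas is 32x32 sub-textures
Cell terrain[GRID_H][GRID_W];

// A tile id t occupies this sub-rectangle of the atlas in UV space (0..1):
//   u0 = (t % ATLAS_TILES) / float(ATLAS_TILES)
//   v0 = (t / ATLAS_TILES) / float(ATLAS_TILES)
//   u1 = u0 + 1.0f / ATLAS_TILES,  v1 = v0 + 1.0f / ATLAS_TILES
```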
I originally thought to flatten the matrix into an array and pass that in, then use the UV coords to find the location in the array and determine which actual texture coords to pull out. But then I found out that you can’t really pass an array to shaders (forgive me if I’m wrong; my experience with shaders is limited), per an article I read, which described a way to hack an array into a shader. That could work, but I’d prefer to avoid hacks if possible.
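For what it’s worth, here’s roughly what I pictured, sketched against OpenGL/GLSL (an assumption on my part, since I haven’t committed to an API; all names are placeholders). Instead of a uniform array, the flattened matrix could be uploaded as a tiny one-texel-per-square texture that the fragment shader samples:

```cpp
// Sketch only, assuming a GL 3.3 context and loader are already set up.
// Upload the flattened tile-id matrix as a GRID_W x GRID_H R8 texture:
//   glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, GRID_W, GRID_H, 0,
//                GL_RED, GL_UNSIGNED_BYTE, tileIds);
//   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
const char* fragSrc = R"(
#version 330 core
uniform sampler2D uTileIds;  // one texel per grid square (tile id / 255)
uniform sampler2D uAtlas;    // the 32x32 tile atlas
uniform vec2 uGridSize;      // (GRID_W, GRID_H)
in vec2 vUV;                 // 0..1 across the whole terrain plane
out vec4 fragColor;

void main() {
    float id    = floor(texture(uTileIds, vUV).r * 255.0 + 0.5);
    vec2  tile  = vec2(mod(id, 32.0), floor(id / 32.0));  // atlas cell
    vec2  local = fract(vUV * uGridSize);                 // spot inside square
    fragColor   = texture(uAtlas, (tile + local) / 32.0);
}
)";
```

One wrinkle I’m aware of: with mipmapping enabled, samples near square edges can bleed into neighboring atlas cells, so the atlas tiles might need padding.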
Another thought I had was to assign the UVs to be exactly what I need in the texture, so the shader could be dead simple (maybe even the default one). But that would mean creating 4 vertices per grid square, since I can’t assign more than one set of UV coords to a vertex (some vertices are shared by up to 4 adjacent squares).
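Sketched out (again C++ with made-up names), the baked-UV approach is simple but quadruples the vertex count:

```cpp
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z, u, v; };

// Bake atlas UVs into the mesh: 4 unique vertices per square, so the
// shader can be the default textured one.
void buildSquare(std::vector<Vertex>& verts, std::vector<uint32_t>& idx,
                 int gx, int gy, float h, int tileId)
{
    const float t  = 1.0f / 32.0f;           // one atlas cell in UV space
    const float u0 = (tileId % 32) * t;
    const float v0 = (tileId / 32) * t;

    uint32_t base = (uint32_t)verts.size();
    verts.push_back({(float)gx,     h, (float)gy,     u0,     v0    });
    verts.push_back({(float)gx + 1, h, (float)gy,     u0 + t, v0    });
    verts.push_back({(float)gx + 1, h, (float)gy + 1, u0 + t, v0 + t});
    verts.push_back({(float)gx,     h, (float)gy + 1, u0,     v0 + t});

    // Two triangles per square; no vertices shared with neighbors.
    idx.insert(idx.end(), {base, base + 1, base + 2, base, base + 2, base + 3});
}
```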
Any thoughts or ideas on how to make a shader that dynamically renders part of a texture across a plane (think “tilemap,” but not for 2D)? And is reusing vertices really a performance win, or am I avoiding the duplicated-vertex route for no reason?