I’m currently playing with Unity Free, checking out how one would get a “huge hexagon map” to work (defined by e.g. a rather large heightmap or by procedural routines). Also, the tiles (specifically their type, i.e. height and texture) should be changeable dynamically during gameplay.
One common approach (from what I read) is to split the map into “map chunks” and, at runtime, create and show only those chunk objects near the player. As the player’s position changes, delete “far” meshes and create/show new ones nearby (pulling the appropriate data from the heightmap).
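The bookkeeping behind that load/unload decision can be sketched roughly like this (Python used purely for illustration; in Unity this would live in a C# manager script, and the chunk size and view radius here are assumed values):

```python
# Sketch of chunk-streaming bookkeeping: which chunks should exist
# around the player, and which to create/delete as the player moves.
# CHUNK_SIZE and VIEW_RADIUS are assumed example values.

CHUNK_SIZE = 8      # 8x8 hex tiles per chunk
VIEW_RADIUS = 3     # keep chunks within 3 chunks of the player

def chunk_of(tile_x, tile_y):
    """Map a tile coordinate to the chunk containing it."""
    return (tile_x // CHUNK_SIZE, tile_y // CHUNK_SIZE)

def desired_chunks(player_tile_x, player_tile_y):
    """All chunk coordinates that should currently be loaded."""
    cx, cy = chunk_of(player_tile_x, player_tile_y)
    return {
        (cx + dx, cy + dy)
        for dx in range(-VIEW_RADIUS, VIEW_RADIUS + 1)
        for dy in range(-VIEW_RADIUS, VIEW_RADIUS + 1)
    }

def chunk_diff(loaded, player_tile_x, player_tile_y):
    """Return (to_create, to_delete) relative to the loaded set."""
    wanted = desired_chunks(player_tile_x, player_tile_y)
    return wanted - loaded, loaded - wanted
```

Each frame (or every few frames), `chunk_diff` would drive the creation and deletion of chunk GameObjects.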
Actually, I have such a system working (a separate thread creates the map chunk data, then passes it to the main thread, which in turn creates and positions the actual GameObject). Because creating GameObjects is “atomic” (it cannot be split across Update calls), I keep the map chunks rather small (say 8x8 hex tiles). This ensures stable fps (no hiccups; Unity being rather fast anyway), but naturally results in quite a large number of map chunk objects (given I don’t want to “fade out the view” after a short distance by using e.g. fog).
A different concept I’m exploring is to move the work of adapting these map chunks from the CPU to the GPU via shaders. From what I read, geometry shaders (which can create vertices on their own?) are scheduled for Unity 4, but are not currently available. So, for now, I create a complete (all vertices and faces in place) yet “flat” hexagon map chunk (larger than before). The hex-tile indices of each vertex are stored in the vertex color and thus passed into the shader, along with a heightmap texture. The shader then samples the given heightmap and adjusts the vertices’ heights and UVs to the required tile type. By simply switching the shader’s heightmap texture, I can now adapt to different terrain chunks or to map changes caused by user interaction (a first working example can be found here: http://www.mikoweb.eu/tmp/UnityShaderTest/WebPlayer/WebPlayer.html).
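For anyone curious what “tile indices in the vertex color” could look like: one possible packing is to put the tile x/y coordinates into the 8-bit R and G channels. This is an assumed scheme for illustration (the original project may encode things differently), shown here in Python:

```python
# Illustrative packing of a hex-tile index into normalized 8-bit
# vertex-color channels (R = tile x, G = tile y). This exact scheme
# is an assumption, not the original project's encoding.

def tile_index_to_color(tile_x, tile_y):
    """Pack a tile coordinate into normalized RGBA color channels."""
    assert 0 <= tile_x < 256 and 0 <= tile_y < 256
    return (tile_x / 255.0, tile_y / 255.0, 0.0, 1.0)

def color_to_tile_index(color):
    """Recover the tile coordinate (what the shader would do)."""
    r, g, _, _ = color
    return round(r * 255.0), round(g * 255.0)
```

With 8 bits per channel this limits you to 256 tiles per axis per chunk mesh, which is plenty for chunk-sized meshes; the B and A channels remain free for extra per-vertex data.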
This procedure is not really new, I reckon (ocean shaders seem to do such things). Well, for HexMaps, it might be new, maybe :roll:
Now, I’m not completely sure whether this “different concept” is actually the better way to go (which is why I’m asking for feedback). Being quite new to shader programming, I’m concerned that these “massive” GPU calculations, performed each frame and for each vertex (and there are a lot of them, duh), could eventually hurt performance (once more objects are on the map). Also, ray casting (“clicks on the map”) would certainly be an issue, as the mesh collider would not match the shader-adjusted faces of the map.
What do you think? Any feedback on this is welcome. Thanks.
I was going to ask if your hex map was flat, as it looks fairly flat, but then I see you actually are raising up each hexagon by varying amounts.
To do that in a shader, you’re going to have to either manipulate the vertices (with each hexagon being its own mesh, or part of a mesh), or use a fancy relief-mapping kind of shader.
For example, there are shaders on the Asset Store that let you raise up areas based on a texture to create arbitrary 3D shapes. I think they displace vertically on a per-texel basis, so if you don’t mind your hexagon edges being a little pixelated (or you use high-res textures), you could use one of those to raise the hexagons, provided they don’t need to rise TOO high, because I’m sure there are limits. If you need really varied terrain with a lot of height difference, I think your only option is to work with a mesh and adjust the vertices.
You can do vertex texture fetch in Shader Model 3 (if I recall correctly), which worked prior to Unity 4, to read from a texture and adjust vertex positions based on it. Then you don’t necessarily need to upload new meshes; you can just recycle flat meshes and let the shader raise the vertices. But to do that you have to feed texture data representing those changes to video RAM. For a larger texture, say 2048x2048, that gives you 2048 hex tiles across and deep, which is a large area. You could quite easily break that up into smaller chunks and spool those textures to video RAM smoothly, spread across frames. That way you don’t need huge amounts of video RAM, but you would need to keep those textures in main memory.
Another benefit of keeping the original images in main memory is that you can use them on the CPU to handle collision detection manually, i.e. skip colliders and use custom CPU routines; it depends on whether you need complex physics, etc.
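The CPU-side collision idea also answers the earlier “clicks on the map” concern: since the heightmap stays in main memory, a click ray can be marched against sampled heights instead of a mesh collider. A minimal sketch (Python for illustration; the one-byte-per-tile heightmap layout and height scale are assumptions):

```python
# Sketch of CPU-side raycasting against a heightmap kept in main
# memory, instead of a mesh collider. Assumes a row-major heightmap
# with one value per tile, and y as the "up" axis.

HEIGHT_SCALE = 0.1  # assumed: raw heightmap value * scale = world height

def sample_height(heightmap, width, tile_x, tile_z):
    """Look up a tile's world-space height."""
    return heightmap[tile_z * width + tile_x] * HEIGHT_SCALE

def raycast_heightmap(heightmap, width, depth, origin, direction,
                      max_dist=100.0, step=0.25):
    """March along the ray in fixed steps; return the first tile whose
    terrain the ray dips below, or None (accuracy limited by `step`)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = 0.0
    while t < max_dist:
        x, y, z = ox + dx * t, oy + dy * t, oz + dz * t
        tx, tz = int(x), int(z)
        if 0 <= tx < width and 0 <= tz < depth:
            if y <= sample_height(heightmap, width, tx, tz):
                return (tx, tz)
        t += step
    return None
```

Fixed-step marching is the simplest variant; a DDA-style traversal over tile boundaries would be faster and exact, but this captures the idea.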
Thank you for your comments and hints. Very valuable information.
As Unity 4 is out now, I’ve switched to using geometry shaders to test the concept described above. My current shader code (actually, the complete Unity project) is available for download here: http://www.mikoweb.eu/index.php?node=64.
Although the code might be a bit ugly (I’m just starting out with all this shader mumbo-jumbo), it could maybe serve as a starting point for people interested in the topic.