Hi guys!
Working on creating photogrammetry assets. I’m currently processing photos I took over the past few months and trying to improve the result and the workflow. I’m using PhotoScan, ZBrush & Substance Designer. The screenshots are from Substance Designer.
New texture. The close-ups are still blurry; I’m missing resolution. I’ll try using detail maps. When I can take new photos I’ll try to generate a material detail scan, or perhaps use Substance to generate some procedural detail maps.
For wall textures I’d be much more concerned about noticeable tiling. Imagine trying to build a castle with one of those cobblestone textures, with surfaces big enough to require tiling the same texture 8+ times. I’d expect this to look rather poor with scans that only cover 1 or 2 meters of real-world size. It’s a problem I also see with Megascans’ texture library.
You need to design your textures around what you need. A low-density texture is better for camera close-ups, but if I sample a large enough area I can create a denser texture variant with less noticeable tiling.
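To make that trade-off concrete, here is a quick back-of-the-envelope calculation in plain Python (the numbers are just examples, not from my actual scans): for a fixed texture resolution, covering a larger real-world area lowers the texel density but also cuts the number of visible repeats.

```python
def tiles_needed(wall_size_m, scan_size_m):
    """How many times a scan covering scan_size_m meters must repeat
    along one axis to cover a wall_size_m meter surface."""
    return wall_size_m / scan_size_m

def texel_density(texture_px, scan_size_m):
    """Texels per real-world meter for a scan baked to texture_px pixels."""
    return texture_px / scan_size_m

# A 2 m scan baked to 4096 px on a 16 m castle wall:
print(tiles_needed(16, 2))      # 8.0 repeats per axis
print(texel_density(4096, 2))   # 2048.0 px per meter

# Sampling a 4 m area into the same 4096 px halves the density,
# but also halves the repeat count, so tiling reads less at a distance:
print(tiles_needed(16, 4))      # 4.0
print(texel_density(4096, 4))   # 1024.0
```

So a "dense" variant is really just trading close-up sharpness for fewer repeats on big surfaces.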
Most modern games use tricks to break the tiling:
mixing textures in the shader with a mask
using decals
using geometry to add details
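The first trick (blending two samplings of a texture with a low-frequency mask) is easy to sketch outside a shader. Here is a minimal numpy version of what a shader’s lerp node does per pixel; the function name and toy arrays are my own, not Shader Forge code:

```python
import numpy as np

def blend_with_mask(tex_a, tex_b, mask):
    """Per-pixel linear blend between two RGB arrays driven by a 0..1
    single-channel mask, i.e. lerp(texA, texB, mask) in a shader."""
    mask = mask[..., None]  # broadcast the mask over the RGB channels
    return tex_a * (1.0 - mask) + tex_b * mask

# Toy 2x2 "textures": one dark, one bright.
a = np.zeros((2, 2, 3))
b = np.ones((2, 2, 3))
m = np.array([[0.0, 1.0],
              [0.5, 0.25]])
out = blend_with_mask(a, b, m)
print(out[0, 1])  # mask = 1.0 -> fully texture b: [1. 1. 1.]
```

In practice the mask is something low frequency like a world-space noise or a baked dirt mask, so the repeats of each texture never line up visually.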
But I’m open to any suggestions. When I can, I’ll try to create denser variations.
Updated the previous stone wall test. Tweaked the textures in Substance Designer. Views are from the Unity editor, using Shader Forge tessellation + a detail map (also from photogrammetry).
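For anyone wondering what the detail map contributes: it’s a second, higher-frequency texture sampled at tiled UVs and used to modulate the base. A rough numpy sketch of an overlay-style combine (my own simplification, not the exact Shader Forge node graph):

```python
import numpy as np

def apply_detail(base, detail, tiling=8, strength=1.0):
    """Tile a small grayscale detail texture across the base albedo and
    modulate it: a detail value of 0.5 leaves the base unchanged."""
    h, w, _ = base.shape
    # Repeat the detail texture `tiling` times, then crop to base size.
    tiled = np.tile(detail, (tiling, tiling))[:h, :w, None]
    factor = 1.0 + strength * (tiled * 2.0 - 1.0)  # 0.5 -> factor 1.0
    return np.clip(base * factor, 0.0, 1.0)

base = np.full((8, 8, 3), 0.5)       # flat mid-gray albedo
flat = np.full((2, 2), 0.5)          # neutral detail: no change
print(np.allclose(apply_detail(base, flat, tiling=4), base))  # True
```

Because the detail texture tiles many more times than the base, it restores close-up crispness that the base scan’s resolution can’t provide, which is exactly why it helps with the blurry close-ups mentioned above.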
Those are really nice, and those three tools are the same ones I’m going to use over the next weeks. Could you tell me how many pictures you shot for it? Or maybe even include a PhotoScan screenshot with the alignment shown.
Used 7 photos with an APS-C camera. Pretty low, but I’m really limited by RAM (8 GB). Now I should try to take closer shots for the details. Keep in mind that the border zones of the shooting area will get lower precision and more noise, so don’t hesitate to take more shots around the area that interests you to increase the amount of overlap at the edges (something I always forget); you can mask later in PhotoScan.
I don’t have a problem generating the first point cloud (high/highest) and the dense point cloud (high), but I lack the RAM for mesh generation, so I copy the chunk and generate a small part of the mesh at a time (with overlap). Then you can align the chunks using the cameras and merge the meshes.