Unity newbie here. I have several questions that I need answered in order to understand the graphical and visual workflow. Any help/answers are appreciated.
1- I will start learning Blender to design levels for my Unity game. My question about this part is: do you design modular models as much as possible, import them all into the Unity scene, and then align/place them to create levels, or do you go with one giant level architecture? I'll make a 3D side scroller, so terrain is probably out of the question in my case. Should I model all cliffs, caves, etc. separately? What about the main ground (the longest path my player will move on)? It is easy with tilemaps in 2D AFAIK, but I don't know how to create a base ground for 3D. Use planes?
2- Mesh, materials, textures, shaders, normal maps etc. What are these each?
I mean, I definitely read what they stand for but what is the order of them in the creation process?
i.e. are normal maps and height maps a kind of luxury/extra to get more detailed surfaces/models?
Do shaders replace textures, or do they use textures but manipulate them to an extent so that different visual effects are achieved?
3- When creating models in Blender (whether a character or a game prop), what would a typical process consist of in order for the model to be URP-ready?
4- Lighting - This is a huge topic but I want to know some basic concepts especially relating to performance optimization. Is it expensive to move lights? =)
Are the calculations for the affected pixels/vertices made over and over again when you move a light source that affects them? When do you use lightmapping and when do you not? Do you prefer lightmapping for static objects like walls, furniture, etc.? Or can/would you also use it for animated models?
5- VFX Graph, Particle System, or Shader Graph? AFAIK, the Particle System uses the CPU and VFX Graph uses the GPU. For a game that won't be AAA but will still have lots of effects, targeting mid-range systems/platforms, how should I distribute graphics processing between these two? i.e. Is VFX Graph too much for the job, or does it all depend on the level of visual artistry?
6- Shader Graph - How does one start to learn it? I mean, there are tens and hundreds of parameters/nodes to manipulate (compared to the Particle System). How do you memorize what each node is used for? Do you just randomly experiment with them until you get it, or are there logical and easy steps to apply? What should I pay attention to when creating a URP-ready Shader Graph asset?
When you create shader-graph material, does it override the typical/static material of an object?
7- Post-processing. What is the difference between a Profile and a Volume? I have seen that you start post-processing by creating a post-processing profile. Do you create profiles for each and every visual element, like one for UI and one for the in-game screen, OR do you create one profile and use Volumes to override it?
I will update this thread from time to time if I come up with more broad questions, but for now, any help/answer on any of these subjects is appreciated.
Sorry for the noobish questions, but I couldn't find answers to most of these on the web since they are fairly broad questions and the answers mostly come from experience.
Modular construction is important to save memory and boost the scope/size of an environment, at the cost of unique assets. We can't know your exact use case, but assume Super Mario: tackle the assets the same way you'd tackle reusable sprites. The downside to modularity is lost uniqueness, and a burden on level load times if any dynamic batching overhead is at play.
A mesh is the virtual 3D sculpture.
A material is the connector between a mesh and a shader; it is also a capsule of data exposed by the shader, localized per material. Many materials can store their own values for properties exposed by a single shader, without needing multiple shaders (see the sketch after these definitions).
Textures are pictures that can map to mesh data, such as environment objects, billboards for VFX, sprites, etc.
A shader is a script that tells the GPU how to process an object.
A normal map is a form of texture intended to encode XYZ direction data as amplitudes in the red, green, and blue channels.
A height map is an 8-bit image that offers pre-computed surface subtleties without the need for vertex displacement on the mesh or on the CPU.
Textures are supplemented with pre-computed surface data such as normal, height, specular, gloss, roughness, metalness, shininess, curvature, and sub-surface maps. They are all a way to encapsulate detail that is more optimized than trying to do it with mesh data or shader calculation.
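A minimal Unity C# sketch of how these pieces connect: one shader shared by two materials, each carrying its own property values. The "_BaseMap" and "_BaseColor" names are the properties URP's Lit shader exposes; the texture fields are just placeholders you'd assign in the Inspector.

```csharp
using UnityEngine;

// One shader, many materials: each material is a capsule of property values
// (texture, color) that the shared shader exposes.
public class MaterialSetupSketch : MonoBehaviour
{
    public Texture2D brickAlbedo;   // textures: images mapped onto the mesh via UVs
    public Texture2D woodAlbedo;

    void Start()
    {
        Shader lit = Shader.Find("Universal Render Pipeline/Lit"); // one shader...

        var brick = new Material(lit);                 // ...two materials, each with
        brick.SetTexture("_BaseMap", brickAlbedo);     // its own localized data
        brick.SetColor("_BaseColor", Color.white);

        var wood = new Material(lit);
        wood.SetTexture("_BaseMap", woodAlbedo);
        wood.SetColor("_BaseColor", new Color(0.9f, 0.8f, 0.7f));

        // The renderer draws this GameObject's mesh using the chosen material.
        GetComponent<MeshRenderer>().material = brick;
    }
}
```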
Ambiguous: your project/game will have its own bottlenecks that you must profile and account for as you prototype a vertical slice of the game. The only foundation is to recognize what rendering quality URP can achieve, and how many features you need, or could discard, to serve gameplay.
You need lights to have dynamic lighting as your objects move around. As for performance, it is a moving goalpost to answer and depends on the context of the things the lights interact with.
The Shuriken particle system is largely CPU-based (VFX Graph utilizes the GPU, but it is HDRP, not URP, at the moment). Shader Graph is a node-based system for building shaders visually; this tends to be very artist-friendly compared to direct scripting.
Insurmountable challenges are taken one step at a time. Use tutorials about basic concepts and adapt the techniques to tackle or invent larger visuals. Look at examples of shaders that build visuals you want to learn, and borrow from them. Brush up on the nodes available and practice abstract problem solving to digest CGI into mathematical tasks.
First of all, thank you very much for taking your time to answer my questions.
So does that mean, if I don't want to sacrifice performance, I have to prepare these "maps" manually for each and every texture in order to provide more realism and depth of detail for my 3D objects? I mean, they are not a "must" but a trade-off between optimization and level of detail, OR between manual workload and automation?
About this part, what I'm actually asking is: say I designed a 3D character, used textures, UV mapped it, and did the skinning etc. in a typical way, then imported it into Unity. Since URP needs materials to be upgraded, and I will use Lit materials, do I have to set any specific properties in Blender beforehand, or will Unity upgrade my 3D object's materials automatically without a problem?
Unless lightmapping is used, are all the lights considered to be dynamic?
What about baking, is it part of lightmapping or are they different things? For example, if you bake lights onto an environment, does this mean that you'll no longer have the option to, say, create a day/night cycle? Does it mean it's lit and computed beforehand and stays as is?
Also, can/do you create lightmaps per GameObject or per scene? I assume it is done at the whole-scene level, and every object is later adjusted according to whether it is a foreground object (that needs more detail) or a background object?
Thank you very much once again! Your help is much appreciated!
Textures function as a precomputed, simplified, visual way to detail a mesh that is easy to edit and work with… nothing about them is required. They achieve a look in your rendering faster and cheaper than programmatic effort.
Oh, I think there is a breakdown in communication. Blender materials are a hybrid: they are shader operations built through material-node editing.
In this regard, Blender materials are not Unity materials, but the nodes could be recreated as shaders and then applied to a material… as far as I can tell, you have to set up communication between the two programs if you want to automate an equivalent creation in Unity (setup/technical direction).
My Blender experience is very limited, but AFAIK getting 3D packages to export shader/material work into Unity with a 1:1 look is a challenge that needs time to set up, support, and update.
Baked lighting is a texture ("lightmap" is the specific naming convention), a pre-computed texture that achieves the look of static lighting to save performance, at the cost of setup time plus texture memory.
Any object can have lighting baked to a texture at your discretion and applied to that object; the amount of mileage you can get out of operating on this texture through a shader is also at your discretion. This is something like a look-up table for day/night color and value shifting.
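As a minimal sketch of that look-up-table idea: a script that evaluates a Gradient by time of day and feeds the result to a renderer through a MaterialPropertyBlock. The "_DayNightTint" property name is hypothetical; your shader would need to expose it and multiply it into the baked lighting.

```csharp
using UnityEngine;

// Drives a day/night tint over baked lighting via a shader property.
public class DayNightTintSketch : MonoBehaviour
{
    [Range(0f, 24f)] public float hourOfDay = 12f;
    public Gradient tintOverDay;       // the "look-up table": color keyed by time
    public Renderer targetRenderer;

    MaterialPropertyBlock block;       // per-renderer override, no material copies

    void Update()
    {
        if (block == null) block = new MaterialPropertyBlock();
        targetRenderer.GetPropertyBlock(block);
        // "_DayNightTint" is a hypothetical property your shader must expose.
        block.SetColor("_DayNightTint", tintOverDay.Evaluate(hourOfDay / 24f));
        targetRenderer.SetPropertyBlock(block);
    }
}
```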
An ambient occlusion map is frankly baked lighting, but a very particular form of ambient lighting for micro details that can accept dynamics from a scene and operate with highly refined characteristics (though, to be fair, AO maps are being dropped in favor of dynamically calculated AO as hardware progresses).
Lights are dynamic; that's the main reason you use them, i.e., specifically to have lighting move and change, or to cast shadows that are not limited to a single ground plane. Light probes facilitate this while saving performance.
I've also finally found a page in the Unity documentation.
Do you think this page covers all the essentials about the lighting topic that I need to know, and would it make it easier for me to decide which way to go?
By the way, Blender materials are mostly incompatible with Unity. Blender is useful for UV unwrapping and baking AO maps, normal maps, and stuff like that which you can use in Unity, but don't spend a lot of time trying to set up material properties in Blender, because they won't export/import.
It's definitely overwhelming when you first get started. Making games is hard; there's a reason some games have 700+ people working for 4 years with budgets of hundreds of millions of dollars. Many indie games are successful with just sprites or abstract/simple graphics; the route you've chosen is the most difficult, but not impossible for a small team to handle, especially if you're willing to work 16-hour days and make judicious use of the Asset Store, TurboSquid, etc. I'm currently working mostly solo on a 2.5D JRPG; it's possible, just a TON of work and learning.
For Blender in particular, I’ve found it’s brilliant for modelling, but not as much for material/texture baking (you can bake maps from it if you want). Here’s my approach. First, for anything I can get from Asset Store/CGTrader/TurboSquid for a decent price, I’ll do that. For the things that need special treatment:
Model in Blender.
UV unwrap. Blender is fine for this, especially with the annotations feature. This is very rote/boring work once you figure out how to do it; it’s like cleaning the floor or something, just something you gotta do.
Optionally, here's where you'd export to ZBrush or use Blender's painting tools to add extra details.
Export to ArmorPaint. This is open source/free if you’re willing to compile it yourself, or you can kick them $15US for the installer.
Download materials from AmbientCG: https://ambientcg.com/ . These are all already set up for PBR. Hook these up in ArmorPaint and you have a whole library of materials for free, at least for artificial materials and a few more naturey things.
Painting in ArmorPaint is super easy (assuming you have a good UV unwrap in step 2). Just be sure to use the correct projections.
ArmorPaint can export all the maps in the correct formats for Unity (I don't think it combines AO into the green of MOS, but that's doable).
Use paint.net with a couple of plugins, or some random C# scripts, to resize/convert/clean up textures as needed (one such sketch follows this list).
Use the regular Lit shader for as much as possible in Unity (you may end up making your own variant of Lit, or using jbooth's Better Lit, Lux Uber, one of the Bakery versions, etc., but the fewer shader variants you need, the better for both performance and sanity).
You’ll probably want to come up with your own workflow. If you have the financial backing plus wherewithal to learn the tools, substance painter and photoshop are probably far superior to armor paint.
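Since C# texture-cleanup scripts came up above, here's a minimal Unity editor sketch of the kind of thing I mean (my illustration, with an assumed fixed 1024 target size; adjust as needed): downscale the selected texture through a temporary RenderTexture and write it back out as a PNG.

```csharp
using UnityEngine;
using UnityEditor;
using System.IO;

// Editor utility (place in an Editor/ folder): downscale the selected texture
// and save it next to the original.
public static class TextureDownscaleSketch
{
    [MenuItem("Assets/Downscale Texture to 1024")]
    static void Downscale()
    {
        var src = Selection.activeObject as Texture2D;
        if (src == null) return;

        // Blit through a temporary RenderTexture so the GPU does the filtering;
        // this also works for textures not import-flagged as readable.
        var rt = RenderTexture.GetTemporary(1024, 1024, 0, RenderTextureFormat.ARGB32);
        Graphics.Blit(src, rt);

        var prev = RenderTexture.active;
        RenderTexture.active = rt;
        var dst = new Texture2D(1024, 1024, TextureFormat.RGBA32, false);
        dst.ReadPixels(new Rect(0, 0, 1024, 1024), 0, 0);
        dst.Apply();
        RenderTexture.active = prev;
        RenderTexture.ReleaseTemporary(rt);

        string path = AssetDatabase.GetAssetPath(src);
        File.WriteAllBytes(Path.ChangeExtension(path, null) + "_1024.png", dst.EncodeToPNG());
        AssetDatabase.Refresh();
    }
}
```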
As to your specific questions…
1. Modular/reusable assets are the way to go. You can design whole levels in Blender, but you might run into some issues with stuff like lightmapping and culling, especially if your levels are big.
2. Other people covered these. The only thing to add here is that "height map" is kind of an overloaded term. Generally, where you'd reach for a "height map", you want to use a normal map. Most models will have a normal map; it lets you capture small details like little scratches/imperfections, improve the curvature of round stuff, etc. You can convert a height map to a normal map easily, for example with NormalMap-Online (see the sketch after this list for how the conversion works). URP uses "height map" to mean parallax map. Parallax mapping is a performance hog and usually not needed, but it can be nice for some surfaces (brick walls in particular). Height maps can also be used for tessellation (Better Lit shader, or manually coding this for URP), which looks better than parallax and might even perform better depending on the GPU and usage. Of course, you can use both techniques.
Either way, 95% of your models will not need a height map (parallax or tessellated). Just use normal maps.
3. See above for my process, but every person/team is going to develop their own.
4. Honestly, I would wait until you have some levels set up before diving too deep into lighting. Lighting is a massive, massive topic, but it's something that largely can be tackled later in the process. You can do a rough draft with just a directional light, a skybox, and some occasional diegetic lights where there are actual real lights (lamps, ceiling lights, streetlights, flashlights, fires, etc.). Then schedule two weeks to a month for a timeboxed deep dive into lighting.
5. VFX Graph has a MUCH better UI to work with. You won't actually be doing much graph stuff unless you want to. Unity's tutorial series on it is really helpful, even though it's geared at HDRP. Shuriken to me is a PITA, even if it does perform better for small particle systems.
6. Like lighting, I'd put this in the "do later" category. Once you need a particular effect, learn how to make that effect. You need an iridescent oil slick? Make that. You need refractive water? Do that. At some point, you'll have learned most of it, and then you'll be cussing it out and wishing for surface shaders like the rest of us.
7. Another one you can wait for later to learn. Learn this at the same time as lighting, because lighting and post-processing go hand in hand.
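Since point 2 mentions converting a height map to a normal map, here's a rough sketch of what tools like NormalMap-Online do under the hood (my illustration, not their actual code): take central differences of the height field and remap the resulting slope vector into RGB.

```csharp
using UnityEngine;

// Height-to-normal conversion via central differences. The height texture
// must be import-flagged Read/Write enabled for GetPixel to work.
public static class HeightToNormalSketch
{
    // 'strength' scales how pronounced the bumps appear.
    public static Texture2D Convert(Texture2D height, float strength = 1f)
    {
        int w = height.width, h = height.height;
        var normal = new Texture2D(w, h, TextureFormat.RGBA32, false);

        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            // Sample the four neighbors, wrapping at edges for tileable textures.
            float left  = height.GetPixel((x - 1 + w) % w, y).grayscale;
            float right = height.GetPixel((x + 1) % w, y).grayscale;
            float down  = height.GetPixel(x, (y - 1 + h) % h).grayscale;
            float up    = height.GetPixel(x, (y + 1) % h).grayscale;

            // Slope in X and Y becomes the normal's XY; Z points outward.
            var n = new Vector3((left - right) * strength,
                                (down - up) * strength,
                                1f).normalized;

            // Remap [-1,1] to [0,1] for storage; this is why normal maps look blueish.
            normal.SetPixel(x, y, new Color(n.x * 0.5f + 0.5f,
                                            n.y * 0.5f + 0.5f,
                                            n.z * 0.5f + 0.5f, 1f));
        }
        normal.Apply();
        return normal;
    }
}
```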
First of all, thank you very much for the detailed response and for sharing your workflow.
Even though some of the info you've given is beyond my skill level at the moment, these are definitely good starting points for further research, so I'll learn things as I make progress.
I’ve also had a quick look at https://ambientcg.com/ .
What I wonder, and it's also related to my 3rd question, is: are these materials only to be used in modelling software (Blender, Maya, etc.), or can I also use them in Unity by setting some of their properties manually via the Inspector?
About lighting, just like everything else, I’m just experimenting and prototyping atm and started to look up some terminology while working on it to get brief ideas on how things work.
I haven’t used this website but I just took a look. It looks like they include basically just the texture maps that you need to create a material, whether in Unity or some other software. This is pretty typical for platform-agnostic materials available on the internet.
So, create a material using Unity’s standard shader.
You could use the provided albedo map and AO map in the corresponding slots without any modification. Also the displacement map, I think.
It looks like there are two different formats of normal maps provided. For Unity you want to use the blueish one and make sure that it’s specified as “normal map” in the image map’s import settings. Then you can just assign it to your normal map slot.
The roughness map is going to be trickier, since the standard shader doesn't use a roughness map; instead it uses a combined metallic/smoothness map, where smoothness is stored in the alpha channel and the red channel controls how metallic the material is. In order to use the roughness map, you'd need to open it in an image editor like Photoshop or Krita, invert it (because smoothness is the inverse of roughness), and then assign it to the material's alpha channel (see the sketch below for a scripted version). This is not too difficult once you've established your workflow, but it's also possible that you don't need this map at all, especially if you're not going for an ultra-realistic look.
If you are using URP or HDRP, then the shaders are going to be different, but you’d use the same image maps in a similar way.
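As a scripted alternative to doing the inversion in Photoshop/Krita, here's a minimal Unity editor sketch (my illustration): read a metallic map and a roughness map, write 1 - roughness into the alpha channel, and save the result as the combined metallic/smoothness texture. It assumes both textures are the same size and import-flagged Read/Write enabled; if the material has no metallic map, you could start from a black texture instead.

```csharp
using UnityEngine;
using UnityEditor;
using System.IO;

// Packs smoothness (1 - roughness) into the alpha of the metallic map, which
// is where Unity's Standard and Lit shaders read it from. Editor-only code.
public static class RoughnessPackerSketch
{
    public static void Pack(Texture2D metallic, Texture2D roughness, string outPath)
    {
        Color[] m = metallic.GetPixels();   // requires Read/Write enabled
        Color[] r = roughness.GetPixels();

        for (int i = 0; i < m.Length; i++)
            m[i].a = 1f - r[i].grayscale;   // smoothness is the inverse of roughness

        var packed = new Texture2D(metallic.width, metallic.height,
                                   TextureFormat.RGBA32, false, true); // linear data
        packed.SetPixels(m);
        packed.Apply();

        File.WriteAllBytes(outPath, packed.EncodeToPNG());
        AssetDatabase.Refresh();
    }
}
```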
You download the PNG version of the zip from AmbientCG and drag it (the whole zip file) onto the executable; it spits out the maps in the correct format. The occlusion, metallic, and smoothness are properly packed together in the _mos.png, and you can just ignore the _plx.png. It will need some massaging to work on non-Windows platforms.
Worth noting that there's a popular asset that opens up some new lighting options: https://assetstore.unity.com/packages/tools/level-design/bakery-gpu-lightmapper-122218 . With this, you can use fully baked SH lighting and still get working specular on lightmapped objects. Light volumes (not to be confused with volumetric lights) are a replacement for light probes that can also do specular, and if you bake them densely enough you can get a passable approximation of shadows from baked-only lights onto realtime objects. SH lightmaps are used in the Frostbite engine and are a fairly new technique.
These are somewhat advanced techniques that are good in certain situations (particularly if you have indoor levels with complex/lots of lights and want them to run on midrange hardware, or are running on high-end hardware but want to squeeze out every last ounce of performance for AAA visuals). Combined with some non-diegetic backlight and shadows (e.g. Persona 5), you can achieve a lot of cool results with great performance. However, this isn't supported out of the box in Unity and will require special shaders. I think of it as a "tax": every time I need a shader for a certain effect that I can't use the regular Lit shader for (e.g. hair, vertex deforming, iridescence, water, lava, etc.), I need to spend 4-8 hours rewriting it to work with light volumes instead of light probes.
You can, of course, combine this with realtime lighting in various ways (e.g. realtime for fires/particles, main lights, a day/night cycle, flashlights, etc., and fully baked for indoor scenes).
But again, I highly recommend punting on that stuff until you already have some other pieces in place, then doing a deep dive later. My approach was to spend November just learning/thinking about/prototyping lighting with no distractions. And February is scheduled as animation month ;-P. Obviously, though, this is only really an option if you’re a solo/small-team dev without deadlines and other work-related responsibilities.
Thank you very much for further details.
I have purchased some updated editions (mostly from 2021) of Unity books and downloaded many YouTube vids to deep dive into these areas along with scripting.
Yesterday I downloaded some textures from that website and started playing around with materials etc and have seen the use case of each property/attribute within that panel.
I’ll see what I’ll be able to make in time.
Again, thanks for the insight and invaluable information!