I’ve been wanting to make some maps in Unity 3.0, but the problem is that there is no BSP mapping. I can model and make basic houses, but I much prefer the BSP system. If I wanted to make a floor and a wall in Hammer, I could just make two boxes, put a texture on them, and it’s done. In Unity, I have to open Blender, make a model, do the UV mapping, and then import it. The painful part is the UV mapping; it’s annoying as hell to UV map every single wall you want to model, and it drastically hurts workflow too. It’s also hard to reuse the model you just made, which is annoying. Say I wanted to make a hallway and then a second, slightly shorter hallway connecting to it: I’d have to make separate models for both hallways, UV map them, and never use them again. It becomes trial and error of “did I screw up on this? Oh damn, I need to make the hallway longer”, and then you have to export the model and reimport it, while in Hammer you could just edit the thing on the fly. Why should I use models as my mapping method?
Just learn how to model then. BSP tools are pure shit most of the time, and while they might seem easier at first, they’re really just incredibly limiting. Learn how an app like Maya, XSI or Max works and build your stuff there. UV mapping simple walls and floors is a matter of seconds, really.
In your case, if you like the BSP aspect, why not just create a handful of tiles that you can reuse all the time to lay out your map?
Once you understand how much better 3D apps are than whatever a dev throws into their BSP tools (there’s no comparison, really), you’ll be glad you made the change.
I find UV mapping really annoying though; the tutorials out there only tell me to drag a texture around in Photoshop to where I think the face will be, based on that green box. It feels weird, and I can end up off the grid, with the texture too large and part of it clipped off. Or I could try painting textures with Photoshop 3D, but that’s time-consuming. Is there any way in Max I can apply a texture and then offset it like in Hammer?
Yep. There are controls for UV tiling, scale and all that in the material options, as well as controls for the UV mapping itself. It’s a little more roundabout than what you’re used to, but every 3D app has to be general enough to be used for anything.
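In Unity itself, by the way, the same Hammer-style tweak is just two material properties. A minimal sketch, assuming it’s attached to an object that already has a Renderer (the component name and values are made up for illustration, but mainTextureScale/mainTextureOffset are Unity’s standard Material API):

```csharp
// Hypothetical sketch: Hammer-style texture scale/offset on a Unity material.
using UnityEngine;

public class HammerStyleTexture : MonoBehaviour
{
    public Vector2 tiling = new Vector2(2f, 2f);    // repeat the texture twice in U and V
    public Vector2 offset = new Vector2(0.25f, 0f); // slide it along U, like Hammer's shift

    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        mat.mainTextureScale = tiling;
        mat.mainTextureOffset = offset;
    }
}
```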
Invest in Ultimate Unwrap 3D, let your mapper spit out the map files, and convert them to FBX (or check whether its BSP support is up to scratch by now).
ding
UV mapping isn’t that bad once you understand the basic concepts of it. Here’s a video I once recorded for some students of mine, maybe it helps you out:
Is there any way I can texture it “Hammer” style, where I can apply a material to a face, change the stretching and the scale so it looks good, and do that with different materials on other faces?
that’s basic UVW modifier usage; just about any modelling application can do that. Alter the UV positions of the vertices and the texture will stretch.
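The same idea works in code on the Unity end as well. A minimal sketch, assuming you just want to rescale an existing mesh’s UVs (the script and its scale value are made up for illustration):

```csharp
// Hypothetical sketch: stretch a texture by rescaling the mesh's UVs.
// Scaling UVs down makes the texture appear larger on the surface.
using UnityEngine;

public class StretchUVs : MonoBehaviour
{
    public Vector2 uvScale = new Vector2(0.5f, 1f); // halve U -> texture looks twice as wide

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector2[] uvs = mesh.uv;
        for (int i = 0; i < uvs.Length; i++)
            uvs[i] = Vector2.Scale(uvs[i], uvScale);
        mesh.uv = uvs; // assign back; Unity only picks up the change on assignment
    }
}
```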
So if I did that, would it automatically combine my multiple textures into one texture that I could import into Unity, put on my model, and have it look like it did in 3ds Max?
tiling + texture atlas is not possible, no matter what application you use: UV tiling relies on UV values > 1 and < 0 wrapping around, while an atlas needs every face’s UVs to stay inside that texture’s slot in the 0–1 range.
BSP walls and the like are pretty much always tiling.
so if you create such things, you will either get multiple draw calls or you will no longer have tiling.
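To make that concrete, here’s a hypothetical Unity sketch of a tiling quad. The UVs deliberately run past 1, which only works because sampling wraps around; if the texture sat inside an atlas, those same UVs would sample the neighbouring textures instead:

```csharp
// Hypothetical sketch: a quad whose texture tiles 4x4 via UVs > 1.
// Attach to an object that has a MeshFilter and a MeshRenderer.
using UnityEngine;

public class TilingQuad : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = new Mesh();
        mesh.vertices = new Vector3[] {
            new Vector3(0, 0, 0), new Vector3(10, 0, 0),
            new Vector3(0, 10, 0), new Vector3(10, 10, 0)
        };
        mesh.uv = new Vector2[] {
            new Vector2(0, 0), new Vector2(4, 0), // values beyond 1 wrap, tiling the texture
            new Vector2(0, 4), new Vector2(4, 4)
        };
        mesh.triangles = new int[] { 0, 2, 1, 2, 3, 1 };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```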
I mean, is there any way to drag a texture onto a face, drag another texture onto another face of the same object, like in Hammer, and have both textures show up when I bring it into Unity?
It’s like Multi/Sub-Object texturing in 3ds Max.
http://www.youtube.com/watch?v=HNMxO4VDcwA
Maybe you’d be better off with UDK or something along those lines.
Yes, doing that is no problem, actually. You just need to assign the different faces to different materials, with each material using the corresponding texture (see the sketch below this post).
that’s no problem at all.
but if you love Hammer that much, why not get a license for Ultimate Unwrap 3D (http://www.unwrap3d.com), which lets you load Quake 1-3 / Half-Life BSPs and export them to FBX, Collada, etc.? The model and texture-mapping data will come through without a problem, potentially even the baked lighting via multi-layer texturing.
unsure if this would work with HL2 BSPs, never checked it with that.
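A minimal sketch of that multi-material approach in Unity, for reference. The 50/50 split of the triangle list is arbitrary and just for illustration (in practice your modelling app exports the face/material assignment for you); subMeshCount, SetTriangles and Renderer.materials are standard Unity API:

```csharp
// Hypothetical sketch: one mesh rendered with two materials via submeshes.
// Faces in submesh 0 use materials[0], faces in submesh 1 use materials[1].
using UnityEngine;

public class TwoMaterialMesh : MonoBehaviour
{
    public Material wallMaterial;  // assumed materials, assigned in the Inspector
    public Material floorMaterial;

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        int[] tris = mesh.triangles;
        int half = (tris.Length / 6) * 3; // arbitrary halfway point, kept a multiple of 3
        int[] first = new int[half];
        int[] second = new int[tris.Length - half];
        System.Array.Copy(tris, 0, first, 0, half);
        System.Array.Copy(tris, half, second, 0, second.Length);

        mesh.subMeshCount = 2;
        mesh.SetTriangles(first, 0);
        mesh.SetTriangles(second, 1);

        GetComponent<Renderer>().materials = new Material[] { wallMaterial, floorMaterial };
    }
}
```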
Hm, I guess I’ll have my shot at model-based mapping now, thanks for everything! Also, how do you guys usually texture your models? Do you drag the texture over the model in Photoshop, or use model painting?
Both, depends on the model in question. Whatever works best.
Also, I heard that Unity can only do static lightmapping and doesn’t do global illumination. Would it be a good or bad idea to render my lightmaps in 3ds Max with Mental Ray? Or is that too static?
You don’t have global illumination with Beast in the free version; you do in the Pro version. But either way you don’t need to use Beast, you can bake lightmaps externally in whatever renderer you choose. The result is just a texture mapped to the second UV set, so either way works perfectly fine (and it’s the only way to get GI lightmaps in the free version). The benefits of Beast make it the better choice though, unless you’re stuck with the free version. As for static lightmaps: all lightmaps are static. Imagine a texture-based lightmap that had to account for all possible movement in a map, ouch.
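If you do bake externally, wiring the result up yourself might look roughly like this. This is a sketch only, assuming the Unity 3.x-era lightmapping API (LightmapData.lightmapFar, renderer.lightmapIndex, renderer.lightmapTilingOffset); these names have changed between Unity versions:

```csharp
// Hypothetical sketch: assigning an externally baked lightmap (e.g. from
// Mental Ray) to a renderer. Assumes the bake targets the model's second UV set.
using UnityEngine;

public class ExternalLightmap : MonoBehaviour
{
    public Texture2D bakedLightmap; // your external render of the lighting

    void Start()
    {
        LightmapData data = new LightmapData();
        data.lightmapFar = bakedLightmap; // "far" slot; field name is version-dependent

        LightmapSettings.lightmaps = new LightmapData[] { data };

        Renderer r = GetComponent<Renderer>();
        r.lightmapIndex = 0; // use entry 0 of the array above
        // xy = tiling, zw = offset into the lightmap; identity if this object owns the whole map
        r.lightmapTilingOffset = new Vector4(1, 1, 0, 0);
    }
}
```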
That’s what I was thinking. How would I light my characters or moving objects when the lightmaps are basically just painted-on shadows?
Well, you’d use realtime lights for that. Here’s how it works (not 100% sure about the free version, so this is how the Pro version works):
When you create your lightmaps with Beast, you have the option of choosing which objects are static and which aren’t. Only the static objects (anything that doesn’t move) will be lightmapped. You then decide on the quality settings, resolution, etc. For your lights you get the option of automatic, realtime only, or baked only. That lets Unity know what each light is going to be used for: whether it will affect the lightmap AND realtime (auto), only affect things in realtime and never show up in the lightmap (realtime), or only affect the lightmap and never affect anything in realtime (baked).
Then Beast creates two lightmaps (if you set it to dual lightmapping): one is the near lightmap, the other is the far one. The near lightmap contains all the lighting you just baked, while the far lightmap contains all that PLUS baked shadows from whichever lights you set to cast shadows. The point is that you can then greatly lower the distance at which your realtime shadows max out (an optimization); past that distance, Unity gradually fades in your far lightmap, whose baked-in shadows match the ones in your scene.
The end result is that your entire scene looks amazing, and shadows are still realtime close up, but not far away where they’d be wasting resources. That’s about the most basic explanation I can think of. You can do all this in an external application too, of course, but as you can imagine, it’s so much easier to make sure everything lines up when you do it from within Unity. So you still have your realtime lighting and realtime shadows, but with faaaar fewer resources required to keep things looking awesome.
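The shadow-distance half of that optimization is essentially one line; a minimal sketch (the distance value is arbitrary):

```csharp
// Hypothetical sketch: cap realtime shadows close to the camera and let the
// far (dual) lightmap's baked shadows take over beyond that distance.
using UnityEngine;

public class ShadowDistanceSetup : MonoBehaviour
{
    public float realtimeShadowDistance = 40f; // world units of realtime shadows

    void Start()
    {
        QualitySettings.shadowDistance = realtimeShadowDistance;
    }
}
```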
You’ll have to remember that Beast doesn’t (currently) do anything different for you here compared to, for instance, Mental Ray. It still only gives you a static lightmap that won’t affect your characters, etc.
I’m using the 30-day Pro trial now, and I’d much rather use Mental Ray than Beast for my lightmapping. I think the results are far superior, and I have much more control over them in Mental Ray.
There are numerous ways of having lightmaps affect your characters. You’ve probably noticed that very few games use realtime shadows for anything other than simple character and object shadows, so they all have ways for baked lighting to affect dynamic objects.
I have a small Unity project going where there are no caves, overhangs or roofs. I’m thinking that making a simple greyscale version of my lightmap and then darkening my objects based on values sampled from that texture will give me a satisfying result (roughly like the sketch at the end of this post). With some work this could probably be made to handle maps with overlapping geometry as well.
Developers like Bungie and Valve have written numerous articles on their lighting technology, which are very interesting reads. In Halo: Reach there are almost no realtime shadows, but characters and objects get every bit of their lighting from the environment, even global illumination, so Bungie is obviously doing some powerful stuff there.
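For what it’s worth, the greyscale-lightmap idea above might look something like this. Purely a sketch under the stated assumptions: a flat map with no overlapping geometry, one lightmap covering known world bounds, and the texture import-flagged as readable so GetPixelBilinear works:

```csharp
// Hypothetical sketch: sample a greyscale copy of the level's lightmap at a
// character's position and darken its material to match the baked lighting.
using UnityEngine;

public class LightmapTint : MonoBehaviour
{
    public Texture2D greyscaleLightmap;                     // greyscale bake of the level lighting
    public Rect worldBounds = new Rect(-50, -50, 100, 100); // map extents on the XZ plane

    void Update()
    {
        // Map the world XZ position into the 0..1 UV space of the lightmap.
        float u = (transform.position.x - worldBounds.xMin) / worldBounds.width;
        float v = (transform.position.z - worldBounds.yMin) / worldBounds.height;

        float brightness = greyscaleLightmap.GetPixelBilinear(u, v).grayscale;
        GetComponent<Renderer>().material.color = Color.white * brightness;
    }
}
```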