Next Next-Gen game assets?

So does anybody have any information on how developers are making characters, environments, and assets for "next next-gen" games such as the PS4/Xbox One? New polycounts, new shaders, new techniques, new whatevers? I honestly know nothing about what's going on out there… Does anyone else?

Just like before… with fewer constraints, some new features in the hardware, and DX11. I'm not really sure what you're asking…? There wasn't something magical that happened. It's just that now they're allowed to use more expensive asset techniques that were previously exclusive to the PC.

  • This

Yeah, it's basically the exact same techniques as before; the difference is that now they have much more RAM, more processing power, DX11, new technology, etc. The Xbox 360 only has 512 MB of RAM, and to get more realistic characters and such, you need to use multiple textures (diffuse, specular, normal map, gloss, illumination, etc.). So if you're on 512 MB of RAM, you have to either limit the number of textures per character/asset, lower the texture resolution, or lower the number of assets.

Developers have gone from that to 8 GB of RAM, so they have a lot more to work with now. It's not uncommon for high-res film assets to use multiple 4k-8k texture sets for a single asset, but that's what it takes to get assets of that quality, and it's now realistically possible to do that in-game. The main character model for my own game BHB uses about 3-4 textures down-res'd to 2k and two materials (each material having its own texture set).
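
To put rough numbers on that (my own back-of-the-envelope figures, not any particular engine's budget), here's why texture count and resolution eat memory so quickly:

```python
# Back-of-the-envelope texture memory, assuming uncompressed 32-bit RGBA
# and a full mip chain (~1.33x extra). Real engines use block compression
# (DXT/BC etc.), so actual numbers are several times smaller.
BYTES_PER_TEXEL = 4
MIP_OVERHEAD = 4 / 3

def texture_set_mb(resolution, num_maps):
    texels = resolution * resolution
    return texels * BYTES_PER_TEXEL * MIP_OVERHEAD * num_maps / (1024 ** 2)

# One character with diffuse, normal, specular, gloss and illumination maps:
for res in (1024, 2048, 4096):
    print(f"{res}x{res}, 5 maps: ~{texture_set_mb(res, 5):.0f} MB")

# Roughly 27 MB at 1024, 107 MB at 2048, 427 MB at 4096 -- which is why
# a 512 MB console forces compromises and 8 GB mostly doesn't.
```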

So basically just 2048x2048 textures instead of 1024x1024 textures. Games on the current generation already use an average of three diffuse, three normal, and three specular maps per character of around 12,000 polys. There really shouldn't be a need for many more polys, because most detail can be achieved through normal maps. I guess I am just looking for new ways our engines, and the plugins for them, will take advantage of the increased processing power and RAM. Like Nvidia's furry wolf demo for The Witcher 3 on YouTube, or real-time global illumination. I am wondering if there is anything interesting or new with Unity or its plugins to take advantage of the next next-gen's increased power.

I already use 2048x2048 in a production game which also supports iPhone 3GS. I think you’re talking about 4096 :slight_smile:

But most AAA developers will not be changing anything. They'll just be allowed higher budgets and a few better shaders. They've invested too much to just magically reinvent their pipelines, and reinvention isn't really needed. DX11 etc. is well understood. What might happen is that some engine guys will drink more coffee than is required and get some compute shader action going.

But to answer your question, nothing much changes.

I have always wondered about that texture size thing. When I was in college, several of my professors working in the game industry told me to use several 1024 textures instead of one 2048, because most engines can load several smaller textures faster than one bigger one. I see a lot of people using both methods, though, and wondered if it really made that big of a difference anymore.
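
Memory-wise, at least, it looks like it should be a wash; here's my own quick napkin check (illustrative only, assuming the same format and mip settings either way):

```python
# Four 1024x1024 maps hold exactly as many texels as one 2048x2048 atlas,
# so memory use is basically identical. The real differences are batching
# (one atlas can mean fewer texture switches/draw calls) and how much of
# the atlas you actually fill with useful pixels.
small, large = 1024, 2048
print(4 * small * small == large * large)   # True: same texel count either way
```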

I have no idea what different engines limit textures to, but I've seen lazy code simply hard-coding the maximum sizes. If you query the graphics drivers, you get the actual texture constraints, though.

Mobile texture constraints are typically smaller, but desktop (and laptop!) GPUs of all sorts have been able to use much larger texture sizes for some years now. The 9400M, the GPU in my old laptop, was limited to 8192x8192, and it was probably an old GPU even in 2008. It's also possible for the graphics API to introduce workarounds to increase these limits (which is why engine makers should query that). I know that some HD 4000 integrated GPUs were showing 8k by 8k as a limit, but my Mac mini shows 16k by 16k now.
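
For example, a minimal sketch of asking the driver instead of hard-coding a limit (assuming the glfw and PyOpenGL Python packages; inside an engine you'd use its own API for this):

```python
# Query the driver's real texture limits via a hidden OpenGL context.
import glfw
from OpenGL.GL import glGetIntegerv, GL_MAX_TEXTURE_SIZE, GL_MAX_3D_TEXTURE_SIZE

if not glfw.init():
    raise RuntimeError("could not init GLFW")
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # hidden window, only needed for the context
window = glfw.create_window(1, 1, "query", None, None)
glfw.make_context_current(window)

print("max 2D texture size:", glGetIntegerv(GL_MAX_TEXTURE_SIZE))
print("max 3D texture size:", glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))

glfw.terminate()
```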

The next generation will basically have higher-resolution textures which work more efficiently, making things look very good up close even at high screen resolutions. Hopefully they'll never go below 1080p. And hopefully turning your back on a car in the next GTA doesn't make it disappear :wink:

Well, iOS hardware doesn't really give a damn about texture sizes. It's got plenty of bandwidth for it, and you can use a lot of the available RAM just for textures. With Android, it's a slightly different story: some Android GPUs aren't so hot on bandwidth. It's the reason why mip maps typically don't have much of an effect on iOS but do on Android.
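
For context, this is roughly the trade-off mip maps buy you (illustrative numbers, not measurements from either platform):

```python
# A full mip chain costs about a third more memory than the base texture,
# but lets the GPU sample from a much smaller level when the texture is
# minified -- which is exactly where bandwidth-starved GPUs benefit.
def mip_chain_texels(size):
    total = 0
    while size >= 1:
        total += size * size
        size //= 2
    return total

base = 2048
print(mip_chain_texels(base) / (base * base))   # ~1.333x the base texture
```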

But times have changed quite a bit. I use six 2048x2048 textures in an iPad 1 app, physynth, and The Other Brothers has tonnes of 2048x2048 atlases, which it abuses with impunity. We found no performance gain when testing with 1024.

Back in 2004 I would have felt 1024x1024 made sense, but now I feel confident about 2048x2048. But remember, on some drivers/hardware it also matters how much of that texture you're actually showing.

For me, I will continue to use large textures for atlases if need be, but I'm not afraid of using smaller ones for efficiency. Just don't shy away from large textures for no reason.

There’s a great post on this topic at http://www.polycount.com/forum/showthread.php?t=134911

PBR is the new toy in town, shader-wise. We actually might get away with parallax mapping and tessellation in areas without the PS3 crying in the corner. Apart from that, more… more of everything: greater view distances, grass that doesn't sneak up and shout "surprise".
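
For reference, classic single-step parallax mapping is just a height-driven UV shift along the tangent-space view direction; a rough sketch of the math, written out in Python for clarity (in a game this lives in a fragment shader, and sign conventions vary):

```python
# Classic single-step parallax mapping as plain math.
# view_ts is the normalized tangent-space view direction, height is the
# height-map sample in [0, 1], and scale is an artist-tuned constant.
def parallax_offset(uv, view_ts, height, scale=0.04):
    u, v = uv
    vx, vy, vz = view_ts
    du = vx / vz * height * scale
    dv = vy / vz * height * scale
    return (u - du, v - dv)   # shifted UV used to sample the other maps

print(parallax_offset((0.5, 0.5), (0.3, 0.1, 0.95), height=0.8))
```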

More complex particle shaders, post-processing without frame-time spikes causing choppy frame rates… Y'know, all the stuff we had around a decade ago that we'll actually be able to use. Looks like truly dynamic GI is off the cards for this round, though…

Light mapping sucks for large games too, mainly because of the bake times. So I'm hoping something like Nvidia's VXGI will actually take off. Until something like that lands, realistic-looking games stay off the cards as games keep growing: arch viz, sure; actual games, nah!

Bloom+Lens Flares.

If I asked someone for a next-gen model, I'd straight away smooth it with a Catmull-Clark subdiv and see how it looks.

In the "short"-term period of 5-10 years you are most likely right, considering how asset development through normals, maps, and triangles hasn't changed all that much in terms of technique; only the numbers go up a bit. Systems like DX11 features and fluid simulations come out, but they are so far ahead of the hardware that they aren't next gen, but rather next next gen, as the topic is called.

There aren't standard ways of doing things. It's a zoo, as it's always been. Every studio has a different pipeline, every artist a different workflow, and every game has different things that are important on screen. What you're asking is too broad for a single meaningful answer in a forum post. A lot of studios release tech write-ups, GDC records and releases most of its talks (though many are behind a ridiculous paywall), and there are thousands of tutorials on various subjects coming out, as there always have been.