Hey guys, I'm wondering how you would make an open-world map. The reason I'm asking is that among the various methods I've tried, such as World Machine and Gaia, there seem to be better ways according to this video:
Basically, he just puts down a flat plane and builds on it with Megascans objects, and the result looks amazing. So instead of building an open-world map using terrain textures, how would I go about building a whole (really big) open-world terrain with Megascans, the way it's done in the video above?
Would it be OP? The detail at that scale would be MASSIVE, lol. So I'm wondering what your opinion is, and how you would make an open world as beautiful and detailed as the one in the video, even though that's only a single scene and I want to go one better.
Bear in mind that a lot of the Megascans assets are hundreds of MB in size. Good luck doing a truly open-world game like this without serious loading/memory issues.
First of all, decide on your target: PC? Mobile? Console? Which console? Identify the resources available to you (CPU power, system RAM, hard drive or other storage bandwidth, distributable binary RAM), and then begin to engineer a solution within those constraints.
The solution you choose for running on something like a Game Boy Advance (for instance, Grand Theft Auto on the Game Boy Advance) is going to be completely different from one you engineer for the very latest and greatest PC hardware.
Identify your problem, do some experiments, learn more about your engineering constraints, and work towards an engineering solution. And keep us posted on your findings so everybody can benefit!
I was thinking the same thing. So how do games like Battlefield 1 and Star Wars Battlefront 1 and 2 (the newer ones) handle this? Is it because they use smaller maps, or maybe a different engine? How would one go about making a map with that level of detail BUT without the loading and memory issues?
Of course it would be a PC game, and as for the PC setup, it's powerful enough to handle anything in software/gaming right now.
Would World Streamer be able to fix these issues by splitting the world into sections and streaming them in, so to say…
They create extremely high-poly sculpts and then retopologise those into low-poly models, then bake the high-poly information onto the low poly. That means it looks super high poly but is actually low poly, just using normal-map info from the high-poly version. Then, when you get close, they use tessellation to increase or decrease detail as needed.
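To make the "more detail up close" idea concrete in Unity terms (this is not Frostbite's tessellation pipeline, just the closest built-in analogue), a minimal LODGroup setup might look like the sketch below; the renderer fields are hypothetical:

```csharp
using UnityEngine;

// Minimal distance-based detail: the expensive renderer is only used up close,
// a cheaper one takes over further away, and the object is culled when tiny on screen.
public class SimpleLodSetup : MonoBehaviour
{
    // Hypothetical renderers for the same baked low-poly asset at two detail levels.
    public Renderer closeUpRenderer;   // full-res material, tessellation/displacement etc.
    public Renderer distantRenderer;   // cheaper material, fewer triangles

    void Start()
    {
        var lodGroup = gameObject.AddComponent<LODGroup>();

        // Screen-relative transition heights: swap to the cheap renderer below ~30%
        // of screen height, cull entirely below ~5%.
        var lods = new LOD[]
        {
            new LOD(0.30f, new[] { closeUpRenderer }),
            new LOD(0.05f, new[] { distantRenderer }),
        };

        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```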
It's not just as simple as "high detail". You can't just whack high-detail assets into a game and have a nice scene; it'll run like pants.
The Frostbite 3 engine, as used by Battlefront 2 and Battlefield, has been the subject of tons of GDC talks, and they seriously over-optimise. So again, you won't get that kind of quality without an artist who can do a high-to-low or low-to-high workflow, or without photogrammetry. And even then you still need to retopo most of the time, so you will need that artist.
TL;DR: hard work and tens of thousands of man-hours of engineering time accumulated over the lifetime of the engine and each product that uses it. It's a hard problem and it has been solved in myriad ways by many different teams.
As @MadeFromPolygons_1 pointed out, google for talks by these guys, as I’m sure by now some of them are publicly available outside of the for-pay GDC vault area.
Doing this would:
- require a mesh streaming technique, or reusing everything and storing it all as a list of points which get instantiated or pooled (see the instancing sketch after this list)
- require compute culling or a very optimised culling scheme
- require a lot of functionality Unity is currently missing
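To illustrate the "list of points" idea, here is a minimal GPU-instancing sketch: many copies of one mesh are drawn from plain transform data, with no GameObject per instance. It assumes a material with GPU instancing enabled; the class and field names are hypothetical.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Renders many copies of one rock/plant mesh from a plain list of matrices,
// instead of instantiating a GameObject per placement.
public class PointListRenderer : MonoBehaviour
{
    public Mesh mesh;          // the retopologised low-poly asset
    public Material material;  // must have "Enable GPU Instancing" ticked
    public List<Matrix4x4> points = new List<Matrix4x4>(); // baked placement data

    void Update()
    {
        // DrawMeshInstanced accepts at most 1023 matrices per call, so batch the list.
        const int batchSize = 1023;
        var batch = new Matrix4x4[batchSize];

        for (int i = 0; i < points.Count; i += batchSize)
        {
            int count = Mathf.Min(batchSize, points.Count - i);
            points.CopyTo(i, batch, 0, count);
            Graphics.DrawMeshInstanced(mesh, 0, material, batch, count);
        }
    }
}
```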
Vegetation Studio comes close, but I'm not sure it works with HDRP yet. So right now that's a lot of work you have to do yourself, or Unity will die from bad performance, and it won't be Unity's fault; as a single person you just don't have enough tech to make a bespoke job like this work.
I think Unity aims to address a lot of these problems this year, however. For now I would look at Vegetation Studio and the built-in (not HD) render pipeline as a starting point for small, level-based games. I would not attempt open-world games with these assets unless I was able to pool or stream them.
Those are your rendering problems, but what about authoring?
The problem here is that you can't even hand-place these assets, because it would take too long, and they do require quite a bit of by-eye adjustment for intersections, aesthetic quality and so on. If you place assets like these procedurally (which is a must for open world), it will never look this good.
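For context, a naive procedural scatter, the kind that never looks as good as hand placement without extra slope, density and overlap rules, might look like this rough sketch (the prefab and parameters are hypothetical):

```csharp
using UnityEngine;

// Naive procedural scatter: ray-cast random points down onto whatever ground
// collider is below and spawn a prefab with a random rotation and scale.
public class NaiveScatter : MonoBehaviour
{
    public GameObject prefab;      // hypothetical Megascans rock/plant prefab
    public int count = 500;
    public float areaSize = 200f;  // scatter within a square of +/- this size

    void Start()
    {
        for (int i = 0; i < count; i++)
        {
            var origin = new Vector3(
                Random.Range(-areaSize, areaSize),
                1000f,
                Random.Range(-areaSize, areaSize));

            // Drop a ray straight down to find the ground surface.
            if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, 2000f))
            {
                var rotation = Quaternion.Euler(0f, Random.Range(0f, 360f), 0f);
                var instance = Instantiate(prefab, hit.point, rotation);
                instance.transform.localScale *= Random.Range(0.8f, 1.3f);
            }
        }
    }
}
```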
I’m sorry but that video is selling you dreams. You can come close if you avoid the whole open world idea or if you’re into carefully (and I mean this) evaluating your options for procedural placement and rendering.
Would you mind sharing links to those GDC talks, and any tutorials on doing this? I wouldn't mind taking a look into it, thanks.
Thanks mate, I'd never heard of Vegetation Studio until now, so I will take a look.
Also, about the video: it's not really selling me the dream, as it's just a scene. I took it further on my own by asking whether such a thing could be done in an open-world way. I thought it might be impossible, at least right now, but it was worth asking, haha. Thanks again.
This, my friend, is how software engineering works. As long as code runs on actual hardware and is delivered over actual networks (or other media), there will be engineering applied to fit within those constraints.
This was true with Pong and it’s true with every game out today, 100% of them.
Hopefully your game is fun enough or your artists/designers are good enough to work within those constraints, otherwise the constraint is more noticeable than the content.
Regarding the base surface for a huge open-world environment: no matter which platform you are on, you will never get practical real-time performance from a single static scene containing the whole world.
You need a streaming system in place, as shown below:
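As a rough illustration (not a full solution like World Streamer), a minimal additive-scene streaming sketch could look like this, assuming the world is split into chunk scenes named "Chunk_x_z" (a hypothetical naming scheme) that are added to Build Settings:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;

// Keeps the 3x3 grid of chunk scenes around the player loaded additively
// and unloads everything else as the player moves.
public class ChunkStreamer : MonoBehaviour
{
    public Transform player;
    public float chunkSize = 500f;
    readonly HashSet<string> loaded = new HashSet<string>();

    void Update()
    {
        int cx = Mathf.FloorToInt(player.position.x / chunkSize);
        int cz = Mathf.FloorToInt(player.position.z / chunkSize);

        // Determine which chunks should currently be resident.
        var wanted = new HashSet<string>();
        for (int x = cx - 1; x <= cx + 1; x++)
            for (int z = cz - 1; z <= cz + 1; z++)
                wanted.Add($"Chunk_{x}_{z}");

        // Load newly needed chunks additively in the background.
        foreach (var name in wanted)
            if (loaded.Add(name))
                SceneManager.LoadSceneAsync(name, LoadSceneMode.Additive);

        // Unload chunks the player has moved away from.
        loaded.RemoveWhere(name =>
        {
            if (wanted.Contains(name)) return false;
            SceneManager.UnloadSceneAsync(name);
            return true;
        });
    }
}
```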
Then you can continue scattering objects onto the loaded terrain chunks in the background and show them when they are ready.