What things can hollywood movies do that game engines cannot yet?

This is just for fun.

No engine bashing, or UT will set their Hippo or Ape on you.

The idea: every day we see things on TV and in movies that are made with high-end CGI, or things we take for granted in reality, that games and game engines don’t even touch.

Ideally you will post a video clip or still image to make your point more interesting.

Tip: google your suggestion, select images or video, and I bet you’ll get some multimedia that matches.

Most game engines have:

Terrain systems, but can they do terrain destruction…

Sea or water systems, but can they do tsunamis or mega waves…

They can do cities, but can they warp them…

Those are some of the larger things that are easy to spot; what about the more subtle things we take for granted?

Also, in a way, is the game industry pushing Hollywood to do more spectacular effects, just to show that it can outdo video games?

I think the only difference between Hollywood and game engines is scale.

To handle millions of complex mesh collisions and the rendering, you’re looking at massive server farms that take hours to render one frame.

Also, you could argue game engines are better in that any scenario can happen, whereas with Hollywood it’s all directed.


They sure can …

Games and movies are as equal as stars are equal to boats.

Game devs have roughly 16 or 33 ms per frame (60 or 30 fps).

Movies have hours per frame (and that’s running on render farms).


Can you play “The Matrix” as a movie?

Just imagine if you were developing for VR. You’d be working with only 11 ms. :P
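For context, those budgets fall straight out of the target refresh rate. A quick back-of-the-envelope in Python (90 Hz is assumed here as the typical VR headset rate behind the 11 ms figure):

```python
# Back-of-the-envelope frame budgets: milliseconds available per frame
# at a given target refresh rate. Matches the figures quoted above:
# ~33 ms at 30 fps, ~16 ms at 60 fps, ~11 ms at 90 Hz (typical VR).
def frame_budget_ms(target_fps: float) -> float:
    return 1000.0 / target_fps

for fps in (30, 60, 90):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):5.1f} ms per frame")

# For comparison, the "1 frame per day" film case mentioned below:
print(f"1 frame/day -> {24 * 3600 * 1000:,} ms per frame")
```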


If you’re fine with 1 frame per day, they ain’t much different.


Movies use brute force because of time and money: it’s best to let the artist do whatever they want on a model regardless of rendering cost, apart from a few necessary optimizations, on the assumption that this achieves the best quality possible. Also, things are made on a per-frame basis, so you get stuff made to look good only in that one frame.

Games are on a different budget: you need to make awesome in 16 ms and fill that RAM sparingly, so you cut everywhere you can and cheat whenever possible. Currently lighting, fluids and particle effects suffer the most.

That’s how Agni’s Philosophy (the Square Enix demo) could run in real time on PS4: the main engineer lied to the artists and used tools to drastically decimate, remove and downgrade stuff behind their backs. The original assets were made by a movie team, and the engineer was content to let them do whatever they wanted in order to meet a short deadline. Some visible downgrades had to be negotiated because the originals simply weren’t possible (mostly lighting and particles), but the rest was technical people saying to themselves “damn artists” and doing whatever was needed.

BUT games are also much denser than movies: they don’t have a single frame to optimize for, they must look good from all angles on sets that are vastly bigger than anything produced for a movie (like modeling an entire island-, country- or continent-sized area), so the paradox is that this makes them look less dense even though they are. Yet games like Uncharted are dangerously close to state-of-the-art movies.

Currently, CG movies have a RAM budget of around 20 GB per frame, and they generally render frames four at a time, which puts their total budget around 64 GB. Games currently have only 4 GB, and that isn’t just for graphics. Each console generation used to bring roughly a 10x improvement in CPU and 8x in RAM, so the next generation would have matched the current CG load, BUT MOORE’S LAW IS DEAD, and 4K eats a factor of 4 of that improvement, so a next generation with a similar jump will only look about 2 times better than now. Not enough to get all the sugar like fluids, lighting and particles.
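A rough sketch of the arithmetic behind that last claim, taking the poster’s ~8x generational jump and the 4x pixel count of 4K vs 1080p at face value (both are this post’s figures, not measured data):

```python
# The post's back-of-the-envelope: an ~8x generational RAM/throughput jump
# gets divided by the ~4x pixel count of 4K vs 1080p, leaving ~2x per pixel.
gen_improvement = 8                  # claimed jump per console generation
pixels_1080p = 1920 * 1080
pixels_4k    = 3840 * 2160
resolution_cost = pixels_4k / pixels_1080p   # = 4.0

per_pixel_gain = gen_improvement / resolution_cost
print(f"4K costs {resolution_cost:.0f}x the pixels of 1080p")
print(f"Effective per-pixel budget gain next gen: ~{per_pixel_gain:.1f}x")
```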

But not all movies use brute force; sometimes artist-driven solutions use “dumb” techniques that just look good. When you see awesome waves and water, you’ll assume simulation, and most of the time that’s true, but a movie like Surf’s Up, which has excellent water and breaking waves (given the theme of the movie), used a simple rig driving a blend shape (with a lot of particles).
https://renderman.pixar.com/view/wave-effects-on-surfs-up
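As a toy illustration of the “rig driving a blend shape” idea, here is a generic blend-shape interpolation in Python; it is not Pixar’s actual Surf’s Up setup, just the general technique of blending a rest mesh toward a hand-sculpted wave target with a single rig control:

```python
import numpy as np

# Toy blend shape: a flat strip of vertices (rest shape) and a sculpted
# "curled wave" target; one rig parameter w in [0, 1] blends between them.
x = np.linspace(0.0, 10.0, 50)
rest_shape  = np.stack([x, np.zeros_like(x)], axis=1)               # flat water surface
wave_target = np.stack([x, np.sin(x) * np.exp(-0.2 * x)], axis=1)   # hand-sculpted curl

def blend(weight: float) -> np.ndarray:
    """Linear blend-shape interpolation driven by a single rig control."""
    return (1.0 - weight) * rest_shape + weight * wave_target

for w in (0.0, 0.5, 1.0):
    crest = blend(w)[:, 1].max()
    print(f"rig weight {w:.1f} -> crest height {crest:.2f}")
```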


Fire a machine gun without having to realistically reload…

Everything. Movie CG pretty much doesn’t have any limitations, to the point that you won’t even have “objects” in a scene; you operate on clips where the same character or scenery can be represented by completely different things.

Movies don’t need terrain destruction systems. You can split the scene by hand in whatever way you need.

There’s pretty much nothing in common between movies and games, or in the approaches they use.


Maybe someone can help me. When I watch the movie The Matrix and I press my move buttons on my keyboard, Neo doesn’t move on my screen. Maybe it’s just a limitation of the movie industry?.. Oh wait, I’m watching a movie, not playing a game. How stupid of me… and of course when I play a game I’m not watching a movie…

Alternatively you can just throw a ton of explosives under the terrain and really destroy it. In the average game, 95% of what you see is CGI. In the average movie, 95% of what you see is actually real footage.

Combine that with the ability to spend days rendering a single frame of special effects. Movies have little need to optimize the way games do. You can simply run a simulation with a million high quality units and render it all out.

I think you might be mistaken.

For example:

[Image: before-and-after green-screen CGI comparison from Boardwalk Empire]

You can find more online by googling “before and after cgi”.
Here’s a decent example:

https://www.youtube.com/watch?v=KaEvW01-ZKg


Even movies that look like they have no VFX at all have a ton.


Yeah, my number was high. Still, the point stands that movies rely far more on recording reality than games do. Games tend to be all smoke and mirrors, as opposed to movies, which are just a lot of smoke and mirrors.

The sign of a truly masterful special effect is one that you don’t even notice.

That said, there are a few tricks for making a stunning spectacle.

  1. Give players a close-up of an ultra-high-res model, and they will then put that image in their own head when looking at a low-res model at a distance (a minimal sketch of this swap follows the list).

Ubi does this left and right, all the damn time. In Far Cry in particular, they love having the villain do a monologue to introduce them. They load up an absurdly detailed model that’s really just a 3D scan of the actor, then motion-capture them performing the whole scene. When the monologue starts, the camera fixes on them, the background blurs to hide the fact that everything else is really crappy quality, and the entire machine is just rendering that one person.

Then, when you see or hear that person again, your brain fills in the gaps to make them look like that ultra-high-res model.

  2. Sneak in videos whenever, wherever. If the camera locks, you can slip in whatever video you want and the player will be none the wiser. Or if something is happening in the distance, like a giant explosion or a giant monster moving, you can absolutely just play a video in the skybox and the player will never know, because the perspective is simply too narrow. You can also play a video on a 3D mesh.

  3. Post-processing! Post-processing! Post-processing!
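A minimal sketch of the distance-based swap behind trick 1, with invented model names and thresholds (nothing engine-specific, just the selection logic):

```python
# Hypothetical distance-based model swap: show the expensive scanned model
# only when the camera is close (e.g. during a monologue), and a cheap
# stand-in the rest of the time. Names and thresholds are made up.
from dataclasses import dataclass

@dataclass
class CharacterLODs:
    hero_model: str                 # e.g. the 3D-scanned, mocap-driven close-up asset
    cheap_model: str                # the low-poly in-game version
    closeup_distance: float = 3.0   # metres; arbitrary threshold

    def pick(self, distance_to_camera: float) -> str:
        """Select which model to render based on camera distance."""
        if distance_to_camera <= self.closeup_distance:
            return self.hero_model
        return self.cheap_model

villain = CharacterLODs("villain_scan_200k_tris", "villain_game_15k_tris")
for d in (1.5, 10.0, 60.0):
    print(f"{d:5.1f} m away -> render {villain.pick(d)}")
```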

BUT, to answer your question, the one thing that games still severely suck at is physical interactions of objects with one another.

In 2017 hands still don’t look natural holding objects. The meat of the palm should compress and squish against the hardness of the object.

Loose skin does not move and sway like it does on Khan’s belly in The Jungle Book.

And good luck living to see the day when people change clothes in games on camera. Or have clothes that act like fabric at all.

The only way I can see this being done in game is if physics cards become standard AND engines make a distinction between aesthetic physics and gameplay physics to prioritize them.
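One way that aesthetic-vs-gameplay split could look in practice: a hypothetical scheduler that always steps gameplay physics and only steps cosmetic effects while frame budget remains. All names and budgets here are invented for illustration, not any engine’s actual API.

```python
import time

# Hypothetical priority split: gameplay physics (collisions that affect the
# simulation) always runs; aesthetic physics (cloth, squishing flesh, debris)
# only runs while there is frame-time budget left.
FRAME_BUDGET_S = 0.016   # ~60 fps

def step_frame(gameplay_tasks, aesthetic_tasks):
    start = time.perf_counter()
    for task in gameplay_tasks:      # must run: affects game state
        task()
    for task in aesthetic_tasks:     # best-effort: purely visual
        if time.perf_counter() - start > FRAME_BUDGET_S:
            break                    # skip or defer the rest this frame
        task()

# Example usage with stand-in tasks:
step_frame(
    gameplay_tasks=[lambda: None],                     # e.g. rigid-body collisions
    aesthetic_tasks=[lambda: time.sleep(0.001)] * 30,  # e.g. cloth, palm squish
)
```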

Movies are pre-rendered frames; games are realtime. That’s a pretty big difference, and they’re not really comparable. Making a movie gives you all the time and room you need to produce a scene. In a game you must take care of mechanics and events, so that things happening in realtime happen the way you want. Even if you produced a game for a supercomputer with heavy poly-count and physics-calculation capabilities, you would still face the same thing: things happen as you go, not as pre-rendered and polished frames. So the simple answer to this post is: movies can be anything, games cannot… even so, they don’t need to, since they are games. You’re not interactive when you watch a movie… it’s linear, sequential entertainment.

I whole heartedly disagree.

By 2030 disabled people will opt to live entirely in a simulated environment. Shortly after that the simulated reality will be improved to the point that perfectly able bodied people will start to do so. People may choose to completely discard their bodies, possibly sell them. But soon it will be more expensive to live in reality than in a simulation. And by 2100 most of the human race will be living in simulations.

Pretty soon simulation will be indiscernible from reality.

I’m @Not_Sure about that
