With game streaming becoming a big industry (e.g. Google Stadia), will we end up rendering to spheres?

So the classic render-to-sphere is your skybox/skysphere, which lets your game look like it has scenic vistas.

It also allows the 3D game world around the player to be a small fraction of the size it appears to be.

Will streaming games become more like this as the technology evolves?

If the streamed game covers a small 3D area around the player and uses a skybox to render the larger game world, it could:

  • Allow for local head movement at no extra cost
  • Allow small player movements between scene updates, reducing bandwidth
  • Provide the streaming service with a window to hide network lag
  • In theory, provide a smoother experience for the player

The 3D sphere around the player could also be expanded or shrunk depending on the hardware's performance or the bandwidth available. A rough sketch of the client-side idea follows below.
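
As a sketch of the rotation half of that idea: if the client keeps the last full-surround frame it received, it can re-render small head rotations locally without asking the server for anything. The names below (`PanoramaCache`, `render_local_view`) are purely illustrative, and it uses an equirectangular panorama rather than an actual cube map just to keep the maths short:

```python
import numpy as np

class PanoramaCache:
    """The last full-surround frame received from the (hypothetical) streaming
    server, stored as an equirectangular image purely to keep the maths short;
    a real client would more likely cache a cube map."""

    def __init__(self, image):
        self.image = image                      # H x W x 3 array

    def sample(self, direction):
        """Colour seen along a world-space view direction."""
        x, y, z = direction / np.linalg.norm(direction)
        lon = np.arctan2(x, z)                  # -pi..pi around the up axis
        lat = np.arcsin(np.clip(y, -1.0, 1.0))  # -pi/2..pi/2 up/down
        h, w, _ = self.image.shape
        u = int((lon / (2 * np.pi) + 0.5) * (w - 1))
        v = int((0.5 - lat / np.pi) * (h - 1))
        return self.image[v, u]


def render_local_view(cache, cam_to_world, out_w=320, out_h=180, fov_deg=90.0):
    """Re-render the cached panorama for a new head orientation, entirely on
    the client, without asking the server for a new frame."""
    out = np.zeros((out_h, out_w, 3), dtype=cache.image.dtype)
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)
    for py in range(out_h):
        for px in range(out_w):
            # Ray through this pixel in camera space, rotated into world space.
            ray = cam_to_world @ np.array([px - out_w / 2, out_h / 2 - py, f])
            out[py, px] = cache.sample(ray)
    return out
```

Rotation is the cheap part; translation needs depth information, which is exactly the missing-parallax problem raised in the reply below.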

They probably have better compression algorithms, and you aren't guaranteed any continuity between image frames, so a skybox is redundant. It also suffers from the missing parallax effect, which negates much of its usefulness. That said, it's a technique I would personally use for distant terrain on a weak machine.

This has been tried before. There are a handful of Microsoft Research projects that do stuff like this dating back to 2014, though they were probably looking into it even before that. Send over a cube map w/ some depth information and let the user move the camera around within that “depth cube” locally to mitigate latency. Modern VR systems all do something like this now to reduce the perceived latency of head movement. There’s even an insane version of the idea from 2016 that looked at pre-baking cube maps every few feet and injecting dynamic objects as video sprites.
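
To make the “depth cube” idea concrete, here's a rough forward-warp sketch: every cached sample carries a depth, so a small local camera translation can be handled by reconstructing each sample's world position and splatting it into the new view. This is my own toy illustration of the general technique, not the Microsoft Research code, and all the names are made up:

```python
import numpy as np

def forward_warp(samples, old_cam_pos, new_cam_pos, cam_to_world,
                 out_w=320, out_h=180, fov_deg=90.0):
    """Re-project cached (direction, depth, colour) samples captured at
    old_cam_pos into a view rendered from new_cam_pos, entirely on the client.

    `samples` is whatever the hypothetical server sent: a unit view direction,
    a depth along that direction, and the colour seen there."""
    out = np.zeros((out_h, out_w, 3))
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)
    world_to_cam = cam_to_world.T                       # rotation of the new view
    for direction, depth, colour in samples:
        world = old_cam_pos + direction * depth         # point the sample came from
        cam = world_to_cam @ (world - new_cam_pos)      # same point, from the new camera
        if cam[2] <= 0:                                 # behind the new camera
            continue
        px = int(out_w / 2 + f * cam[0] / cam[2])
        py = int(out_h / 2 - f * cam[1] / cam[2])
        if 0 <= px < out_w and 0 <= py < out_h:
            out[py, px] = colour   # a real warp would depth-test and fill holes
    return out
```

The holes left where nothing re-projects are the disocclusions a plain skybox can't represent, which is where the missing-parallax complaint above comes from.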

The problem with all of these is that they assumed the near future would bring effectively infinite bandwidth, high latency, and relatively low display resolutions. Instead we have lower latencies (due to the big 3 just building a lot more data centers closer to everyone), much higher resolutions than the “no one will ever need more than 1080p” mid-2010s, and kind of crap bandwidth anywhere that isn’t Japan or South Korea.

Going back to early mobile VR games, or hell, even some PS2 games, the above technique was actually really common. Google even released a tool to help generate these vignettes to get higher quality scenes that would run on mid range phones for use with the woefully underpowered Daydream devices.

Really, the biggest problem with all of these techniques is they assume a static world. Lots of games these days have very dynamic lighting and skies which all of these techniques fail to capture.


Nice historical breakdown!

How else could services like Stadia overcome the inherent latency issues they face (see article below)?

That 2014 Microsoft Research project I mentioned is exactly that concept of using speculation to hide latency; the cube map thing is just a small part of it:

How do online action games do it?

Well, locally to the player they don’t have an issue, as there is minimal input/scene latency.

However, there can be latency between players. Most games tend to give authority to the player who shoots first: the peeker’s advantage.

Issues occur when a player who moves behind cover is then shot by a higher-latency player who still sees them out of cover. With a shooter-has-authority approach, the shot player feels as though they were shot while behind cover.

Question: Would playing a multiplayer game via streaming compound or reduce this problem?

In theory…

The networking latency side of the game could be minimised, as the players’ sessions could be running on neighbouring servers or racks, so it would be akin to a super-low-latency LAN game.

If they can distribute enough game servers to mitigate ping latency, and there are enough players in that region, it could be an improvement over current networked games.

Isn’t this discussion in the context of game streaming? Under perfectly ideal conditions you can get down to ~50 ms input to display latency with streaming, but more realistically you’re going to be around 100 to 200 ms even with Stadia because you’re not playing on a server that’s within 5 miles over a wired Google Fiber connection.

That might seem crazy high, but on consoles with a TV left in its default settings, that’s the kind of latency you’re getting anyway (or worse). On PC, if you have a super-high-end machine running a game at >144 Hz, with a super-fast gaming monitor* from the last year or two and gaming peripherals, you can get down to 30 ms.

  • As an aside, the “1ms response time” thing you see slapped on monitors is 100% marketing bull. Every LCD made in the last 20 years is a “1ms response time” monitor by the metrics they’re using, because there’s no industry standard for how to measure response time: any LCD panel overdriven hard enough can go from black to white in 1 ms or less, but that’s not a realistic use case. Also, response time is completely separate from input lag!

All streaming services are trying to reduce the time it takes for inputs from the user to reach their servers, how long it takes to update and render the game, compress the rendered image, and send it back. There’s a ton of latency in normal PCs and desktop OSs that a mobile or dedicated thin client can skip, so that gets rid of a few ms.

Then there’s the issue of how far any data has to travel over the internet and the speed of light. The number of nodes the data goes through before reaching its end goal can also have a dramatic effect on both the overall distance and the latency, so they’re going to have custom networks that try to get the packets off of the “public” network sooner, giving them as straight a shot to the data centers as possible.

When running and rendering the game, they’re likely not going to take the usual 16 ms to update and 16 ms to render the way PCs and consoles do, but will instead run on super-powerful CPUs and GPUs that can do both combined in less than 5 ms. Then they have custom video compression hardware that can spit out a compressed image in a few ms. Then it’s back over that custom network to get the data as close to you as possible before going back over the public network and your ISP’s network.
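
For a sense of scale, here's a back-of-the-envelope budget. Every number is an illustrative assumption (not a measured figure for Stadia or anyone else), but it shows why shaving a few milliseconds at each stage still leaves you near the 100 ms mark:

```python
# Rough, illustrative end-to-end latency budget for one streamed frame.
# Every number is an assumption made up for the arithmetic, not a measured
# figure from Stadia or any other service.
budget_ms = {
    "input capture + thin-client overhead":        5,
    "uplink: last mile, ISP, peering to the DC":  25,
    "game update + render on server hardware":     5,
    "hardware video encode":                       5,
    "downlink: data center back to the client":   25,
    "client decode + display scan-out":           30,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:<45} {ms:>3} ms")
print(f"{'total (input to photons)':<45} {total:>3} ms")   # ~95 ms
```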

With all that they’re still struggling to get much better than 100ms in the real world.

The idea with speculation is that you make a guess at what the player is going to do before they do it, start running the game as if they had already done that action, and rewind the game if their real inputs don’t match. All of this to try to get down to less than 100 ms of perceived latency.
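
The core of that loop, reduced to a toy: simulate ahead on guessed inputs so a frame can be shown early, then rewind and re-simulate when the real inputs disagree. This is only a sketch of the general speculate-and-rollback pattern (with a made-up one-dimensional game), not the paper's actual implementation:

```python
import copy

class ToyGame:
    """A 1-D toy world: the player's position changes by their input each tick."""
    def __init__(self):
        self.pos = 0

    def step(self, move):
        self.pos += move

    def snapshot(self):
        return copy.deepcopy(self)

    def restore(self, snap):
        self.__dict__.update(copy.deepcopy(snap.__dict__))


def speculate_and_rollback(game, predicted_inputs, real_inputs):
    """Advance the game with guessed inputs so a frame can be shown early; if
    the real inputs turn out different, rewind and re-simulate with the truth."""
    confirmed = game.snapshot()          # last state built only from real inputs

    for guess in predicted_inputs:       # speculative simulation of the near future
        game.step(guess)
    speculative_frame = game.pos         # what would be streamed to the player now

    if real_inputs != predicted_inputs:  # the guess was wrong: rollback + re-simulate
        game.restore(confirmed)
        for real in real_inputs:
            game.step(real)
    return speculative_frame, game.pos   # (what was shown early, what is actually true)


game = ToyGame()
shown, truth = speculate_and_rollback(game, predicted_inputs=[1, 1], real_inputs=[1, 0])
print(shown, truth)   # 2 1 -> the player briefly saw the guess, then the server corrected it
```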

There are a few ways to handle how that’s exposed to the player. The original paper I linked to above sends a cube map + 4 alternative futures, the latter in the form of lower-quality “flat” images that can be projected against the cube map. If the player’s input matches, or is at least close to, one of the “speculative” alternative futures, it shows that image instead of the main one. Then on the server it rewinds the game, applies the user’s real input, and fast-forwards to “now” with the real inputs. This is enormously expensive in terms of computational power on the server and bandwidth, but has the benefit that as soon as you shoot your gun or swing your sword you see visual feedback similar to what you expected to see, without having to wait for the round-trip latency.
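
The client-side selection step might look roughly like this; inputs are modelled as single numbers purely to keep the sketch tiny, which is far cruder than the paper's actual input model, and all names are hypothetical:

```python
def pick_speculative_frame(real_input, candidates, tolerance=0.25):
    """Given the player's actual input and the speculative frames the server
    sent (each tagged with the input it assumed), show the candidate whose
    assumed input is closest, or fall back to the main frame if none is close.

    `candidates` is a list of (assumed_input, frame) pairs."""
    assumed, frame = min(candidates, key=lambda c: abs(c[0] - real_input))
    if abs(assumed - real_input) <= tolerance:
        return frame
    return None   # nothing close enough: fall back to the non-speculative cube map


main_frame = "cube map"
futures = [(-1.0, "strafe-left frame"), (0.0, "idle frame"),
           (1.0, "strafe-right frame"), (2.0, "sprint-right frame")]
chosen = pick_speculative_frame(real_input=0.9, candidates=futures) or main_frame
print(chosen)   # "strafe-right frame"
```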

Later versions of the paper I linked to seem to not send all 5 video streams, but instead rewind the game and fast-forward to apply the inputs the player intended, using speculation to limit how often that has to happen. You just see a single video feed, and when you shoot your gun you simply miss the first few frames of animation; the video you see locally skips ahead as if you’d pressed the button earlier. Things still react, but if a game has effects or animation that happen in the first few frames, you’ll never see them. Oddly, this might actually “feel” better than seeing all of the frames; skipping frames to make something feel punchier is a common technique in movie action scenes. This is likely what Google Stadia would be doing. The human brain is weird too, and if you’re not actively looking for this it’ll backfill your memory to make you think you saw everything happen anyway.

Not exactly correct, specifically your use of the term “authority”. Most competitive games are running simultaneous simulations on both the local player’s hardware and the server, and validating what the local client says it’s doing. The person with the lowest ping does indeed usually have an advantage, but some games will actually apply high-ping players’ actions to the simulation “back in time” to account for their higher ping. Games that allow two players to kill each other at the same time, for example, may be taking the individual players’ pings into account for their attacks, effectively letting them get a couple of shots off “after” they died by pretending those shots happened earlier. But almost no multiplayer game today gives full authority to the client, as it’s just too easy to exploit.
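
That “back in time” trick is usually called lag compensation. A minimal, one-dimensional sketch of the server-side check (real implementations track full hitboxes and interpolate between samples; everything here is made up for illustration):

```python
def rewind_position(history, shooter_ping_ms):
    """Server-side lag compensation: find where a target *was* when the shooter
    actually saw them, i.e. roughly one ping ago.

    `history` is a list of (timestamp_ms, position) samples the server keeps
    for each player, newest last."""
    now_ms = history[-1][0]
    seen_at = now_ms - shooter_ping_ms
    # Walk back to the newest sample at or before the time the shooter saw.
    for t, pos in reversed(history):
        if t <= seen_at:
            return pos
    return history[0][1]


def validate_hit(shot_pos, target_history, shooter_ping_ms, hit_radius=0.5):
    """Check the shot against the rewound position, not the current one."""
    rewound = rewind_position(target_history, shooter_ping_ms)
    return abs(shot_pos - rewound) <= hit_radius


# Target was exposed at position 10.0 until t=80 ms, then behind cover (3.0) by t=100 ms.
history = [(0, 10.0), (50, 10.0), (80, 10.0), (100, 3.0)]
print(validate_hit(shot_pos=10.0, target_history=history, shooter_ping_ms=60))  # True
print(validate_hit(shot_pos=10.0, target_history=history, shooter_ping_ms=0))   # False
```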

However, the “peeker’s advantage” you mentioned is actually a direct artifact of speculation that already occurs in modern multiplayer games! If you’re standing still behind a corner and then step out quickly, you will indeed step out and be able to see and shoot before your opponent is able to see you do it. If you just run around the corner, you do not get this advantage, as the server and your opponent’s client are speculating that you’re going to run out anyway, so they see you at the same time as (or sometimes slightly before) you actually step out. It can even go the other way, where you might think you stopped just before going around a corner, but your opponent sees you step out and then duck back. Again, the speculative prediction running on your opponent’s client knows you’re running in that direction, so it expects you to keep running, and that’s what it shows to your opponent.
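
That “expects you to keep running” behaviour is plain extrapolation (dead reckoning) of the last state the remote client received. A toy example with made-up numbers:

```python
def extrapolate(last_pos, last_vel, ms_since_update):
    """What the opponent's client shows while waiting for fresher data:
    assume you keep moving the way you were last seen moving."""
    return last_pos + last_vel * (ms_since_update / 1000.0)


# Last update put you 0.3 m short of the corner (at position 0), running
# toward it at 5 m/s. With 100 ms of latency, the opponent's client already
# draws you past the corner, even if you actually stopped just before it.
print(extrapolate(last_pos=-0.3, last_vel=5.0, ms_since_update=100))  # ~0.2, past the corner
```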

However in the case of your opponent shooting you at the moment you “stepped out”, most modern games will validate this on the server’s version of reality and see that no, you stopped before walking past the corner, so the other player’s shots will not do damage. That doesn’t stop people from feeling like they got killed when they shouldn’t, but that comes down to a combination of the peeker’s advantage, and usually some of your body is visible around a corner before you can see around a corner. :stuck_out_tongue:

The big benefit that streaming services have is the reduced latency between the clients and the server, and the possibility of not even having a separation between the two. You could be literally playing the game “on the server”, as in there is a single executable doing all of the simulation, and the “clients” are just rendering out the game state to stream to you. This removes a lot of the complexity of modern multiplayer games, and could legitimately make for multiplayer games that feel just as responsive as existing console single-player games. Again, ~150ms of total latency for a single-player console game is pretty normal, so if that’s what you get playing games on a streaming service, then you could have a multiplayer game that feels exactly as responsive, because it is. There are also a lot of things modern multiplayer games just don’t bother attempting because of the inherent latency between the clients and the server that could now be possible, like significantly more dynamic and physics-based elements, as well as much larger player counts.


That’s a pretty exhaustive answer, but my problem is that I play Nintendo games, and they tend to have pristine reaction times, so this will be bad for me lol. Also, Ninja Gaiden Black: we can’t play it anymore on a modern TV. That game used to have 33ms latency, and you can definitely feel it on a modern TV; you die in that game a lot, and now you die even more. A 150ms average is insane; that explains a lot about why I can’t hold a controller for long with these new games … It’s also probably why my attempt at making a decent character controller failed with Unity; I’m outside the norm.

However, the point that can be made is that modern games can probably hide the latency with the slow animation blending they all have, which makes action so sluggishly realistic. That’s probably a win for modern gamers accustomed to it. Also, back in the day, Phantasy Star Online on Dreamcast hid the 56k latency with gameplay mechanics, mostly in this way: you couldn’t do a combo by mashing buttons; the game encouraged you to wait and tap at a specific rhythm to combo. Too soon and you fail, too late and you fail, and you don’t move in the process, which is a great way to reduce the density of data over time. Similarly, hits were only possible if a cursor locked onto the enemy part, so the hit was already resolved before you pressed a button (it led to curving gun bullets too, but that’s high tech).

IMHO, if Stadia becomes standard, gameplay will simply adapt; that’s smarter than smart tech. People will try to hold on to their fast-paced action games though …

Presumably you’re referring to the Nintendo Switch? If you’re playing in handheld mode with docked controllers, your latency is ~66ms. If you’re using the pro controller or undocked controllers in handheld mode, you’re back to nearly 100ms. Docked w/ pro controller … 150ms, just like any other console.

Technically it can get down to 33ms in handheld with docked controllers, but I don’t think any games actually reach that.

Nope. Still 100ms. Ninja Gaiden for the original Xbox maybe, but not Black.

BTW, PS2 games regularly had 200+ ms of latency. The original Killzone is the worst offender, with nearly 300ms even removing latency from TVs.

Also, just to really mess with you: modern consoles like the Xbox One X and PS4 Pro on a high-end LCD or OLED TV in game mode can have less latency than the original NES on a CRT.

No, I mean Nintendo in general, as a culture thing. I have mostly played Nintendo games my whole life, and every time I switched I could feel a net loss in control (mostly after the 16-bit era). Though it’s only now that I learned about the milliseconds. Even the Wii U had its screen reacting faster than the TV.

Ninja Gaiden Black was definitely an original Xbox game; I remember we (my brother) bought that game twice, the original and the Black version. Proof: Ninja Gaiden Black | Ninja Gaiden Wiki | Fandom
It was one of the tightest games. Also, someone actually measured the response time on Gamasutra with the old Xbox.

On Switch I’ve only played 4 games so far, which aren’t that latency-sensitive anyway: Zelda BOTW, ARMS, Fortnite, and Ubisoft’s Star Fox “what’s a Starlink”. Starlink does feel sloppy. Fortnite and ARMS are online, so lag is mandatory; I don’t play Fortnite much competitively, I do for ARMS, but that’s a game about mind games through space control. BOTW is an action game, but the focus is not so much on precise control.

I’m caricaturing a bit; it’s not all Nintendo games. I would have dropped Mario Galaxy, it was so painful on many levels that I skipped the second one; coming from the snappiness of Mario 64 to that is a source of grievance for me.

So while the consoles themselves probably have less latency, there are still many frames before the character effectively jumps when you press jump, even when they start blending the animation right away (damn realism). It’s not the technical latency as much as the gameplay latency.

When I wanted to make an “open world skate park Sonic Galaxy 64” (elevator pitch) with Unity, the way they polled input back then was not conducive to a snappy response (needed when moving at the speed of sound). To make things more aggravating, they also had bad collision settings (I needed to listen to the sphere surface for hits, to move smoothly on any surface at any angle of transition; you couldn’t with kinematic objects, you weren’t supposed to use rigidbodies, and raycasts only worked as support). To add salt to the injury, I had to stream levels because of the speed the game was going, but Unity couldn’t do it without hitching. Glad they shifted to performance by default!

Games that care about it, like Destiny or most of the Treyarch Call of Duty games, are pushing the boundaries of what’s possible on a modern console for a real game. They hit lows of ~70ms when the stars align, but more likely average around 90ms, not counting display latency, just input to video signal output.

33ms of latency for an original Xbox game seems way outside the realm of possibility to me. The Xbox controller had ~20ms of latency on its own, and Ninja Gaiden for sure wasn’t running at 120 fps (which is what would be needed to run the game code & render in 13ms).

The original NES on a high end CRT is capable of 33ms of latency. Modern emulators on the same CRT are capable of 16(!)ms of latency using prediction, the same kind of idea Stadia is talking about.


Probably a personal Mandela effect then, because I had stuck to this number for a very long time :hushed:


I suspect it’s called getting old.
