VR game design problem

Hello guys, I’m a beginner in Unity.
I want to create a VR game scene using 360° photos or videos, but I don’t know how to turn these photos/videos into a scene that the player can move around in.

So far I’ve just followed a YouTube video to create a sphere that lets me look around a single 360 photo.
Is there any way to combine many photos or videos to make a scene the player can move through?
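
For reference, my current single-photo setup is roughly the sketch below (names like "Photo360" are just placeholders for whatever texture you use):

```csharp
using UnityEngine;

// Rough sketch: show one equirectangular 360 photo as the skybox.
// "Photo360" is a placeholder texture placed under a Resources folder.
public class Show360Photo : MonoBehaviour
{
    void Start()
    {
        // "Skybox/Panoramic" is Unity's built-in shader for equirectangular images.
        var skyMat = new Material(Shader.Find("Skybox/Panoramic"));
        skyMat.SetTexture("_MainTex", Resources.Load<Texture>("Photo360"));
        RenderSettings.skybox = skyMat;
    }
}
```

(The tutorial I followed put the photo on the inside of a sphere instead; the skybox route just skips flipping the normals.)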

You model the scene for the player to move in by hand.

A 360 VR photo or video can only be projected onto some sort of distant sphere; it has no information in it to let the player move inside of it. Meaning, no distance information.

I tried using pre-rendered stereoscopic 360x180 panoramas. They can be used as a Panoramic Sky, so there is a sense of depth (URP, Android build).
However, the demands on data size are terrible.
3DOF showroom 2019 05 23 18 54 03 - YouTube
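
If it helps, the stereo part is basically just the skybox material’s 3D Layout set to over-under. A minimal sketch, assuming the built-in Skybox/Panoramic shader and its _Layout property:

```csharp
using UnityEngine;

// Sketch: use a pre-rendered stereoscopic (over-under) 360x180 panorama as the sky.
// The texture is assigned in the Inspector; nothing here is specific to my project.
public class StereoPanoramicSky : MonoBehaviour
{
    public Texture stereoPanorama;   // over-under equirectangular render

    void Start()
    {
        var skyMat = new Material(Shader.Find("Skybox/Panoramic"));
        skyMat.SetTexture("_MainTex", stereoPanorama);
        skyMat.SetFloat("_Layout", 2);   // 2 = "Over Under": each eye samples its own half
        RenderSettings.skybox = skyMat;
    }
}
```

Each eye getting its own half of the image is what gives the sense of depth, but it also means every panorama is a full stereo pair, hence the data size problem.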

You CANNOT currently MOVE within a VIDEO or IMAGE. Even if it is a 360 video.

You’ll have to reconstruct the showroom: MeshLab, Agisoft Metashape, and so on. It won’t look the same, but close. Or model it by hand.

No, but you can jump from point to point…
…or use this: PresenZ VR | 6DoF VR movies and images
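
The point-to-point version can be as simple as storing one 360 photo per capture position and swapping the skybox texture when the player teleports. A rough sketch (all names here are placeholders, not from any particular package):

```csharp
using UnityEngine;

// Sketch of "jumping from point to point": each capture point pairs a position
// with the 360 photo taken there; teleporting moves the rig and swaps the panorama.
public class PanoTeleporter : MonoBehaviour
{
    [System.Serializable]
    public class PanoPoint
    {
        public Vector3 position;   // where the photo was captured
        public Texture photo;      // equirectangular 360 photo for that spot
    }

    public Transform playerRig;       // XR rig / camera parent
    public Material skyboxMaterial;   // material using the built-in Skybox/Panoramic shader
    public PanoPoint[] points;

    public void JumpTo(int index)
    {
        var p = points[index];
        playerRig.position = p.position;                 // move the player to the capture spot
        skyboxMaterial.SetTexture("_MainTex", p.photo);  // show that spot's panorama
    }
}
```

You only ever see correct perspective exactly at the capture points, which is why it feels like jumping rather than walking.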

Kinda not good enough, because with “VR movie sphere” the sphere is locked to your head position and this is not very pleasant, albeit tolerable.

The right technology for this kind of thing is Light Fields.

Your link looks like someone could’ve implemented those. Or it is simply an interactive render.

Yes, this technology is not suitable for (normal) games.

However, for me, some of the results of these experimental projects were useful…

Google has experimented with light fields. Though even with that tech you can only translate a very small distance before the information gets too distorted.

Demo here

Light field requirements and production processes are cumbersome to say the least.

Yes, once in the past I programmed a 3D (three-point perspective) “game” in BASIC on an Intel 8088 processor; that was also cumbersome.
I still hope light fields start to be used before I die.

The most practical approach is probably photometry. With the tech today you can scan pretty large objects.

Though without Nanite the workflow is a bit cumbersome.

This stuff is extremely finicky to get right and requires a lot of computational power. You’ll also lose reflectivity information completely, as far as I’m aware. It will look like colored papier-mâché. Light fields, in comparison, despite their shortcomings, give you a definite “you’re there” feeling.

Yeah. But the translation is very limited. Test the demo above if you have a SteamVR headset.

I played with light fields when I first got my VR headset. While the translation is very limited, it has immense impact because it is within the range of natural head movement. That creates the “you’re there” effect.

Additionally, the Steam demo shows a weaker implementation of the effect with static images.
What you ACTUALLY want to check is this:
https://augmentedperception.github.io/deepviewvideo/

This demo uses video streams. Now the data size is quite large, but the impact is huge.

I presume you mean photogrammetry, since you referenced Reality Capture. Photometry is something different.

https://en.wikipedia.org/wiki/Photometry_(optics)
https://en.wikipedia.org/wiki/Photogrammetry

Adding for reference:

See the intro video to see how it works (it generates a depth map from the 360 image, which then looks nicer when moving your head, and has some view-dependent rendering).

Oops, typo. But the provided video should have cleared that up :slight_smile: Though not having an equivalent to Nanite makes the workflow less optimal. Plus, as pointed out, the metal/smoothness map is not captured.