Code: https://github.com/ZhiqiaoGong/VR-Campus
Used Unity and panoramic video to build a virtual campus roaming system without 3D modeling, so that a user wearing a VR headset can walk freely through the scene.
Users only need to record the scene with a panoramic camera and upload the video to the system; they can then roam the place freely.
This is a whole new interaction method compared with the current applications and systems on the market.
Compared with a traditional campus promotional video, this system is more interactive, immersive, and realistic, and it is also far easier to use; in the future it could be applied to other fields such as children's education and tourism.
MY ROLE
Group leader.
Responsible for the internal logic implementation, including the movement-speed and playback-speed calculations and the video-switching / intersection-steering code.
Also responsible for part of the UI design.
SYSTEM DESIGN
Traditional scene-roaming methods, such as video and discrete pictures, and the emerging modeling-based scenes each have their own strengths and significant weaknesses in terms of realism, interactivity, feasibility, and freedom.
Therefore, we looked for a roaming method that retains the realism of photos and video while offering the freedom to roam like a modeled scene.
In this system, controlling video playback by the user's motion speed combines the advantages of both approaches.


Campus Roaming Experience System based on VR
TECHNICAL DIFFICULTY
- The VR headset only provides the user's position coordinates, so we need to derive the user's speed from those coordinates and use it to control the playback speed of the video.
- A campus walk is not always straight ahead; there are many intersections where the user needs to choose a direction. We also need a way for the user to pick a direction and use that choice to switch to the corresponding next video.
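The speed calculation above can be sketched as follows. This is a minimal illustration in Python (the actual system is a Unity project, so names like `user_speed`, `PlaybackRate`, and the assumed 1.4 m/s recording speed are hypothetical, not taken from the repository):

```python
import math

# Assumed average walking speed (m/s) of the camera operator when the
# panoramic video was recorded; at this speed the video plays in real time.
RECORDED_WALK_SPEED = 1.4

def user_speed(prev_pos, curr_pos, dt):
    """Horizontal speed from two successive headset positions (x, y, z)
    sampled dt seconds apart; vertical head-bob on the y axis is ignored."""
    dx = curr_pos[0] - prev_pos[0]
    dz = curr_pos[2] - prev_pos[2]
    return math.hypot(dx, dz) / dt

class PlaybackRate:
    """Smoothed mapping from walking speed to video playback rate,
    where a rate of 1.0 means real-time playback."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha  # smoothing factor against positional jitter
        self.rate = 0.0

    def update(self, speed):
        target = speed / RECORDED_WALK_SPEED
        # Exponential smoothing so tracker noise does not make the video stutter.
        self.rate += self.alpha * (target - self.rate)
        return self.rate
```

Smoothing matters here because raw headset positions jitter frame to frame; feeding the instantaneous speed straight into the playback rate would make the video visibly stutter.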
INTERSECTION TURNING / VIDEO SWITCHING
Influenced by other VR applications, we first chose to control with the handle that comes with the VR headset: at each intersection, options for the next path popped up, and the user selected one by casting a ray from the handle.
However, options popping up at every intersection seriously broke the user's immersion, and having to hold the handle while walking was also very inconvenient.
Was there a more natural way to interact?
SOLUTION
Use your eyes, not the handles!
When the user reaches an intersection, they simply turn their head toward the direction they want to go. I calculate the headset's yaw angle to determine the user's target direction and switch to the corresponding video. This requires no handle at all, is much closer to walking through a real scene, and provides a better sense of immersion.
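The direction choice described above can be sketched as an angle comparison. This is an illustrative Python sketch, not code from the project; the function name and the 45-degree tolerance are assumptions:

```python
def pick_branch(head_yaw_deg, branch_yaws_deg, max_deviation=45.0):
    """Return the index of the intersection branch whose direction is
    closest to the headset's yaw, or None if the user is not facing
    any branch within max_deviation degrees."""
    best_idx, best_diff = None, max_deviation
    for i, branch in enumerate(branch_yaws_deg):
        # Smallest signed difference between two angles, in (-180, 180],
        # so that e.g. 350 degrees and 10 degrees are only 20 degrees apart.
        diff = (head_yaw_deg - branch + 180.0) % 360.0 - 180.0
        if abs(diff) <= best_diff:
            best_idx, best_diff = i, abs(diff)
    return best_idx
```

For example, a user facing yaw 10 with branches at 0, 90, and 180 degrees selects branch 0; a user facing 170 with branches only at 0 and 90 selects nothing, so the current video keeps playing.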

SITE SIZE RESTRICTION
Users wear a VR headset while roaming and must keep walking forward, but the size of the experiment site cannot accommodate walking indefinitely.
Initially, when the user walked far enough to reach the boundary of the site, the system prompted them to turn around; after turning, they could keep walking while the virtual scene continued forward. Because of the site restrictions, the user had to turn around every few steps, which was troublesome and seriously hurt the user experience.
SOLUTION
Redirection algorithm!
I researched recent redirected-walking algorithms: while the user walks, the scene is rotated slightly to induce them to walk "straight" in a chosen virtual direction, while in real physical space they are actually circling within a small area. I consulted a PhD student whose main research topic is redirection algorithms, implemented the algorithm, and achieved good results.
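The core idea of the redirection above is a small curvature gain injected each step. The sketch below is a simplified Python illustration; the function name and the 3 degrees-per-meter gain are assumptions (real redirected-walking systems tune the gain to stay below the user's perception threshold):

```python
# Hypothetical curvature gain: extra scene rotation injected per meter walked.
# Kept small so the user cannot consciously perceive the rotation.
CURVATURE_GAIN_DEG_PER_M = 3.0

def redirect_scene(scene_yaw_deg, step_length_m, toward_center_sign):
    """Rotate the virtual scene slightly as the user takes a step.
    toward_center_sign is +1 or -1, chosen so that the user's corrective
    steering (to keep walking 'straight' in the virtual world) curves their
    real-world path back toward the center of the physical play area."""
    scene_yaw_deg += toward_center_sign * CURVATURE_GAIN_DEG_PER_M * step_length_m
    return scene_yaw_deg % 360.0
```

Because the user continuously compensates for the injected rotation without noticing it, their physical trajectory bends into a circle inside the small site while the virtual walk stays straight.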
