Comparing and assessing spatial navigation across VR environments: OMNI treadmill, Vive VR controllers, & desktop
Students: Calinikos Price, Saumya Puru Bakshi
in collaboration with: Aaron Gabryluk
under the supervision of: Dr. Scott Moffat, Dr. Bruce Walker
for the research of: LaTajah Lambey
Welcome to the VR Navigation Research Project website, where you can stay updated on the team's latest milestones and developments. The aim is to understand how spatial navigation abilities differ across virtual reality modalities. The assessments and designs were created for an experiment that will test an individual's spatial memory in different virtual contexts.
The three modalities are the OMNI treadmill, Vive VR controllers, and a desktop computer. The OMNI is a stationary treadmill that allows 360-degree locomotion while using VR equipment. The Vive controllers are used while seated with the headset on, with movement driven by the two joysticks. Desktop navigation uses the arrow keys and mouse.
This project was built in Unity using C# scripts, together with the OMNI tools and HTC Vive software. The tasks are distinct but overlap because of the experimental flow and procedure.
At the beginning of this project, our main goals were to complete the following items before the design was released for use in the upcoming experiment:
Literature review on OMNI procedure and VR experimental design
Create an exploratory VR task for initial exploration, plus a training procedure for each modality
Upgrade the existing New Town environment's fidelity and add recognizable assets
Create an assessment procedure to gauge relative location ability in VR
Create an assessment procedure to test macro-spatial abilities
New Town is a pre-existing virtual environment in VRLandia. For the purposes of this project, it was revamped to include:
more roads for increased path length
distinct buildings for recognizability
large skyscrapers for separation
mountains to gauge direction
These changes made the environment more easily navigable and allowed it to host the spatial memory assessments.
To make the existing environment more useful, we increased the complexity and fidelity of this basic space by adding textures, buildings, and lighting. We purchased these in the Unity Asset Store and used available prefabs and kits to build parts of the design.
The first task created was the Relative Spatial Memory Task. It consists of two laps around the full path, followed by a series of teleportations to buildings. After each teleport, the participant must point to another building indicated by a visual UI prompt. The participant points using their modality's tools and confirms the guess by pressing the submit button. The angle of the ray produced by the pointing direction is compared against the locations of all 8 corners of the target building's box collider (hitbox). If the ray hits the box collider, the angle is ignored and the response counts as a correct hit. Otherwise, the angular error is output to an Excel file for later analysis. The participant is then teleported to a new location in front of a building, and the process repeats for a total of 20 trials per participant.
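As an illustration of the scoring, here is a minimal C# sketch of one pointing trial in Unity. The class and method names are ours for this example rather than the project's actual script, and it assumes the pointing ray and the target building's BoxCollider are already available:

```csharp
using UnityEngine;

// Minimal sketch of scoring one pointing trial, not the project's actual
// script. Assumes the pointing ray and the target building's BoxCollider
// are already available from the current modality's tools.
public static class PointingTrialScorer
{
    // Returns 0 if the ray hits the building's collider (a correct hit);
    // otherwise the smallest angle in degrees between the ray and the
    // directions to the 8 corners of the collider's bounds.
    public static float AngularError(Ray pointerRay, BoxCollider target)
    {
        // A direct hit means the angle is ignored, per the task design.
        if (target.Raycast(pointerRay, out RaycastHit _, Mathf.Infinity))
            return 0f;

        Bounds b = target.bounds;
        float best = float.MaxValue;
        for (int i = 0; i < 8; i++)
        {
            // Enumerate the 8 corners by picking min or max on each axis.
            Vector3 corner = new Vector3(
                (i & 1) == 0 ? b.min.x : b.max.x,
                (i & 2) == 0 ? b.min.y : b.max.y,
                (i & 4) == 0 ? b.min.z : b.max.z);
            float angle = Vector3.Angle(
                pointerRay.direction, corner - pointerRay.origin);
            best = Mathf.Min(best, angle);
        }
        return best;
    }
}
```

Treating a direct hit as zero error keeps hits and angular misses on a single scale in the output file.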
There were several design iterations of this task based on feedback from our advisor. We initially created a pointing task in each of the modalities, with a procedure and script for each that began automatically after the initial exploration. However, we received feedback that the experiment would be better controlled with a single pointing task in the desktop format, with the spatial "learning" occurring in the different modalities. The desktop pointing task is now the sole assessment method for this measure, and after the Exploratory Task participants pause before beginning it, since the time needed to exit each modality differs.
After creating the Relative Spatial Memory Task, it was evident that participants needed an initial way to explore the town and see the locations of the buildings.
We created an Exploratory Task in which participants follow a trail of glowing orbs along a set path to learn the layout of New Town. After review and preliminary testing, we decided to remove the orbs and replace them with arrows. This shifted the focus off the path and onto the surroundings, which was the purpose of the task. The orbs were too distracting, placing too much emphasis on where the participant was walking rather than on what was around them.
A compass was also added as a redundant cue to indicate the direction of the next orb. The initial compass used a triangle pointing toward the next orb; the triangle was later swapped for a more intuitive arrow, and the compass was moved from the middle of the screen to sit on top of the user's joystick during VR navigation. However, after the switch to arrows, the compass no longer served a functional purpose and merely pointed at the gas station, so it was removed.
The distraction problem in the Exploratory Task prompted another valuable change to the order of tasks completed by the participant. We introduced a training procedure for each modality, again to take the focus off the walking mode and shift it back to the surroundings.
This way, participants using the OMNI treadmill were not significantly slower than participants on the desktop computer. We also adjusted the speed of the other modalities to match the OMNI, which was much slower than the rest. The learning curve for the OMNI is longer than for the other modes, and to be quite frank I am still not fully used to it after a semester of working with it. Standardizing speed, however, helped the experiment: it let movement speed be controlled as a factor rather than left variable.
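As a rough sketch of that standardization, every modality can read its walking speed from a single shared value matched to the OMNI's pace. The class names and the 1.2 m/s figure below are illustrative assumptions, not the project's actual settings:

```csharp
using UnityEngine;

// Minimal sketch of the speed standardization: every modality reads its
// movement speed from one shared value matched to the OMNI's pace.
// "MovementSettings" and the 1.2 m/s figure are illustrative assumptions.
public static class MovementSettings
{
    // Meters per second; the real value was tuned to the OMNI treadmill.
    public const float WalkSpeed = 1.2f;
}

public class DesktopMovement : MonoBehaviour
{
    void Update()
    {
        // Arrow-key input scaled by the shared, OMNI-matched speed, so
        // desktop participants cannot move faster than treadmill users.
        Vector3 input = new Vector3(
            Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        transform.Translate(input * MovementSettings.WalkSpeed * Time.deltaTime);
    }
}
```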
To create the training task, we found a pre-existing scene in the Unity Asset Store featuring a path along a river up to a mountain. Participants walk along the path, and originally their distance from the main path was recorded and output to an Excel file. This "training score" could not be quickly seen by the researcher and often took several minutes to retrieve, so we created a new method. We purchased a fence asset from the Unity store and placed fences along the path the participants walked. Each fence was rotated to fit the undulating path and fitted with a box collider that prevents participants from passing through while also feeding a "bump counter" recording the total number of deviations from the path. Participants are thus both kept on the path and tracked on how many times they veer from it. The same script works for either modality, switched by a simple checkbox in Unity.
After the task is completed, the participant's bump count and time taken are shown on a UI display. The file is also output so the researcher can use the information later, and the display gives an immediate read on the participant's apparent ability, to judge whether the training needs to be repeated before switching to the later tasks.
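A minimal sketch of how such a bump counter might look in Unity, assuming the fences are tagged "Fence" and the player rig carries a Rigidbody so collision callbacks fire; the names, tag, and output filename here are illustrative, not the project's actual script:

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch of the bump counter, not the exact project script.
// Assumes the fences are tagged "Fence" and this component sits on a
// player rig that has a Rigidbody, so collision callbacks fire.
public class BumpCounter : MonoBehaviour
{
    // Inspector checkbox to switch modalities, mirroring the
    // single-script, checkbox-toggle design described above.
    public bool usingOmniTreadmill = false;

    public int Bumps { get; private set; }
    private float startTime;

    void Start()
    {
        startTime = Time.time;
    }

    void OnCollisionEnter(Collision collision)
    {
        // Each new contact with a fence counts as one veer off the path.
        if (collision.gameObject.CompareTag("Fence"))
            Bumps++;
    }

    // Called at the end of the path: the score can be shown immediately
    // on the UI, and a row is appended for later analysis
    // ("TrainingScores.csv" is a placeholder filename).
    public void FinishTraining(string participantId)
    {
        float elapsedSeconds = Time.time - startTime;
        string modality = usingOmniTreadmill ? "OMNI" : "desktop/Vive";
        File.AppendAllText("TrainingScores.csv",
            $"{participantId},{modality},{Bumps},{elapsedSeconds:F1}\n");
    }
}
```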
The final task is a placement task, in which the participant sees a bird's-eye view of New Town and must place each building, as if it were a chess piece, onto the map. They have to remember not only the relative location of each building (e.g., next to the Police Station) but also its large-scale spatial location relative to the mountains. Once a participant clicks a building's image sprite at the top of the screen, the building follows their mouse as they move it around to select a location, and they click to place it. After placing each building they confirm the selection and can select another. This repeats until all buildings have been placed and are greyed out at the top of the screen. Participants can edit the position of any building by clicking its image sprite at any time; they then confirm their final layout to conclude the placement task.
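For illustration, a minimal Unity sketch of the click-to-place interaction, assuming a top-down camera and a flat map plane at y = 0. The names are ours, not the project's actual script, and the UI sprite selection is omitted:

```csharp
using UnityEngine;

// Minimal sketch of the click-to-place interaction, assuming a top-down
// camera over New Town and a flat map plane at y = 0. "selectedBuilding"
// is set elsewhere when the participant clicks a building's image sprite
// in the UI; all names here are illustrative.
public class BuildingPlacer : MonoBehaviour
{
    public Transform selectedBuilding;

    void Update()
    {
        if (selectedBuilding == null) return;

        // Project the mouse position onto the map plane so the selected
        // building follows the cursor in the bird's-eye view.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        Plane mapPlane = new Plane(Vector3.up, Vector3.zero);
        if (mapPlane.Raycast(ray, out float distance))
            selectedBuilding.position = ray.GetPoint(distance);

        // A click drops the building at the cursor; re-clicking its
        // sprite later picks it up again for repositioning.
        if (Input.GetMouseButtonDown(0))
            selectedBuilding = null;
    }
}
```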
An earlier version of the placement task involved constructing a building with a Builder kit from the asset store, with the placement task laid out on a tabletop. This version felt too small and too far away, and it was difficult to light everything so that it was visible.