
Virtual Reality Control Schemes

VCML can be used to create control schemes for virtual reality (VR) and augmented reality (AR) games and applications. When using head-mounted displays (HMDs), the native controls of Windows games and applications can be overridden with new user-defined virtual controls that present dynamic, animated overlays with rich, multimodal blended interaction controls.

Many PC games have already released updates that enable them to use the new head-mounted displays that have recently become available to developers and early adopters. For games and applications that have not been updated, there are utility applications (such as VorpX.com) that can automatically convert the standard rendering pipeline of an application or game and create the appropriately distorted stereo video output needed for popular HMDs.

Native Head Tracking & Mapping

The latest generation of head-mounted displays (HMDs) used for virtual reality games and applications (shown below) have multiple sets of built-in sensing methods: multiple embedded 9-axis IMUs and patterned IR emitters that create point constellations, which together can be used to track the exact position and orientation of the user's head.

Oculus Rift: IR Constellation Tracking | HTC Vive: IR Constellation Tracking | Sony Morpheus: Optical Constellation Tracking

Data from these local head-tracking arrays is typically used to directly map HMD scene-view controls. This is usually done on a process independent from other input methods, as an important step in minimizing input delay (motion-to-photon latency).
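
A minimal sketch of this mapping step: a head-pose quaternion from the tracking process is inverted and converted to the camera's view rotation. The poll loop and helper names are hypothetical; real SDKs (Oculus, OpenVR) expose equivalent pose queries.

<code python>
# Minimal sketch: mapping a head-tracking quaternion to a camera view
# rotation. Pose source is hypothetical; SDKs deliver equivalent data.
import math

def quat_to_matrix(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def view_rotation_from_pose(pose):
    """The view rotates the world opposite to the head, so use the
    inverse (conjugate) of the head-orientation quaternion."""
    w, x, y, z = pose
    return quat_to_matrix(w, -x, -y, -z)

# Example: head turned 90 degrees left about the vertical (y) axis.
half = math.radians(90) / 2
pose = (math.cos(half), 0.0, math.sin(half), 0.0)
print(view_rotation_from_pose(pose))
</code>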

Integrated Eye Tracking

HMD-integrated eye tracking data can be used to enable a powerful rendering optimization method called "foveated rendering". This allows only regions of interest (the zones the user is looking at) to be rendered in full detail, with other areas rendered at lower detail. Foveated rendering techniques can improve GPU performance by over 30x and allow high-end graphics cards to achieve frame rates over 200 fps on desktop displays and stereo HMDs, which critically reduces rendering lag and improves performance on mobile devices.
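
The core of the technique can be sketched as follows: each screen tile gets a shading rate based on its distance from the tracked gaze point. The tile size, radii and rate values here are illustrative assumptions, not values from any particular SDK.

<code python>
# Sketch of foveated-rendering zone selection from a gaze point.
def shading_rate(tile_center, gaze, full_detail_radius=0.1, mid_radius=0.25):
    """Return relative sample density for a tile (normalized screen units)."""
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    dist = (dx*dx + dy*dy) ** 0.5
    if dist < full_detail_radius:
        return 1.0    # foveal zone: full resolution
    if dist < mid_radius:
        return 0.5    # transition zone: half resolution
    return 0.25       # periphery: quarter resolution

gaze = (0.6, 0.4)  # normalized gaze coordinates from the eye tracker
tiles = [(x / 10 + 0.05, y / 10 + 0.05) for x in range(10) for y in range(10)]
rates = {t: shading_rate(t, gaze) for t in tiles}
</code>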

FOVE: HMD with Integrated Eye Tracking | OSVR: HMD with Integrated Eye Tracking | SMI: 250Hz Foveated Rendering

Some devices, like the FOVE (seen above), have been designed from the ground up to track the eyes inside the HMD. Others, like the OSVR, can be fitted with IR LED rings around the eye-focusing lenses; combined with SMI's modified rendering pipeline (a developer software utility), the system can then track the eyes and apply foveated rendering techniques on the graphics card to improve performance.


Integrated EMG and EEG Sensors

Wearing a head-mounted display obscures a large part of the user's face, which can make camera-based facial tracking difficult. However, EMG sensors, such as those in the Conjure (a conversion kit for the Galaxy VR HMD), can identify the user's facial expressions by tracking facial muscle activity. This can be a great way to add emotional depth to avatars, or simply to use face gestures to control aspects of an application.
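
A hedged sketch of how such expression detection can work: compute the RMS activity of each muscle-site channel over a short window and map channel combinations to expressions. The channel names and thresholds are hypothetical; a shipping kit like the Conjure would use trained classifiers rather than fixed rules.

<code python>
# Sketch: threshold RMS muscle activity per EMG channel to label expressions.
def rms(samples):
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def classify_expression(window, threshold=0.3):
    """window maps channel name -> recent samples (normalized units)."""
    active = {ch for ch, samples in window.items() if rms(samples) > threshold}
    if {"zygomaticus_L", "zygomaticus_R"} <= active:
        return "smile"       # cheek muscles active on both sides
    if "corrugator" in active:
        return "frown"       # brow muscle active
    return "neutral"

window = {
    "zygomaticus_L": [0.5, 0.6, 0.4],
    "zygomaticus_R": [0.45, 0.5, 0.55],
    "corrugator": [0.05, 0.1, 0.07],
}
print(classify_expression(window))  # -> "smile"
</code>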

Conjure: Tracking Facial Muscle Movements (EMG) | MindMaze MindLeap: Tracking Brainwave Patterns (EEG) | Emotiv Insight: Tracking Mood and Attention

Devices like the MindLeap use EEG sensors embedded in the head strap of the HMD to directly track brainwave activity. Brain-computer interface (BCI) devices such as the Emotiv Insight are lightweight peripherals that can be used in conjunction with HMDs to evaluate the user's mood or levels of attention and concentration. This level of real-time insight into the user's state of mind can be used to subtly influence VR environments, for example by changing the weather or scene lighting. Other BCI devices, such as the Emotiv EPOC, can be used to develop solutions that allow the user to directly control VR environments, characters, 3D objects and 3D world navigation within scenes. Users typically have to practice and train with a BCI device, both to finely calibrate the system and to learn how to control aspects of their own brain response, in order to reliably trigger system controls.
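
An illustrative sketch of BCI-driven environment control: smooth a 0..1 attention metric (as reported by headsets like the Emotiv Insight) with an exponential moving average, then apply hysteresis so scene changes do not flicker around the threshold. The scene hooks and tuning constants are hypothetical.

<code python>
# Sketch: smoothed, hysteresis-gated attention metric drives scene changes.
class AttentionTrigger:
    def __init__(self, on=0.7, off=0.5, alpha=0.5):
        self.on, self.off, self.alpha = on, off, alpha
        self.level = 0.0
        self.active = False

    def update(self, raw_attention):
        # Exponential moving average damps sensor noise.
        self.level += self.alpha * (raw_attention - self.level)
        if not self.active and self.level > self.on:
            self.active = True
            brighten_scene()
        elif self.active and self.level < self.off:
            self.active = False
            dim_scene()

def brighten_scene():  # hypothetical scene hooks
    print("scene: clear skies")

def dim_scene():
    print("scene: storm rolls in")

trigger = AttentionTrigger()
for sample in [0.2, 0.4, 0.9, 0.95, 0.9, 0.3, 0.1]:
    trigger.update(sample)
</code>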

Hand-Held Controllers

Hand-held controllers present less constrained bi-manual input options, as they allow free-moving, button-based controls from both hands independently. Hand-held controllers currently offer best-in-class tracking precision and response fidelity and can be very effective tools. However, because they require the user to hold a controller, they place fundamental limits on the perceived level of immersion (presence): the user cannot grasp other objects or simply open their hands to perform free-form natural poses.

The VR controllers shown above use a combination of IMU sensors, global optical/IR computer-vision-based 3D position and orientation tracking, and button controls to present multi-state, 6DOF, bi-manual controls for multiple users.
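
How such a 6DOF pose can be assembled is sketched below: fast IMU orientation updates are fused with slower optical position fixes, plus button state. The data sources and blend constant are stand-ins; real runtimes (OpenVR, the Oculus SDK) deliver fused poses directly.

<code python>
# Sketch: assemble a 6DOF controller pose from IMU + optical tracking.
from dataclasses import dataclass

@dataclass
class ControllerPose:
    position: tuple      # metres, from optical/IR constellation tracking
    orientation: tuple   # unit quaternion (w, x, y, z), from the IMU
    buttons: int         # bitmask of pressed buttons

def fuse(last: ControllerPose, imu_quat, optical_pos, buttons, blend=0.02):
    # Orientation: trust the high-rate IMU sample outright.
    # Position: nudge toward the latest optical fix to suppress jitter
    # (a crude complementary filter; blend is an assumed tuning constant).
    px = last.position[0] + blend * (optical_pos[0] - last.position[0])
    py = last.position[1] + blend * (optical_pos[1] - last.position[1])
    pz = last.position[2] + blend * (optical_pos[2] - last.position[2])
    return ControllerPose((px, py, pz), imu_quat, buttons)

pose = ControllerPose((0, 1.2, -0.3), (1, 0, 0, 0), 0)
pose = fuse(pose, (0.98, 0.0, 0.2, 0.0), (0.01, 1.21, -0.28), buttons=0b01)
</code>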


Glove Controllers

Wearable gloves are being developed for VR/AR applications that use IMUs to measure hand motion and orientation, along with other sensors to measure the degree of finger extension. The combined effect is a fast, responsive, high-fidelity input device capable of tracking hand poses and 3D motion gestures. Glove-based controls have proven extremely useful in high-precision control applications for robotics, flight controls and other scenarios where occlusion-free finger dexterity is required. Gloves can also provide a convenient package for active haptic feedback, where vibrating pads or extension resistors can be woven into the glove.
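
A minimal sketch of the finger-extension side of glove input: raw flex-sensor readings are mapped to per-finger curl values via each sensor's calibrated range. The 0..1023 ADC range, calibration values and grab rule are assumptions for illustration only.

<code python>
# Sketch: calibrated flex-sensor readings -> normalized finger curl.
CALIBRATION = {  # finger -> (ADC value when straight, ADC value when curled)
    "thumb":  (180, 790),
    "index":  (200, 860),
    "middle": (210, 880),
    "ring":   (190, 850),
    "pinky":  (220, 830),
}

def finger_curl(finger, raw):
    """Return curl in [0, 1]: 0 = fully extended, 1 = fully flexed."""
    lo, hi = CALIBRATION[finger]
    t = (raw - lo) / (hi - lo)
    return max(0.0, min(1.0, t))

readings = {"thumb": 300, "index": 840, "middle": 860, "ring": 830, "pinky": 810}
hand_pose = {f: finger_curl(f, r) for f, r in readings.items()}
# A simple grab gesture: all four long fingers mostly curled.
is_grab = all(hand_pose[f] > 0.8 for f in ("index", "middle", "ring", "pinky"))
</code>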

Manus VR Gloves | Dexmo Haptic Feedback Gloves | GloveOne VR Gloves

Gloves allow users to form natural poses without optical occlusion while still maintaining the freedom to grasp real-world objects. The disadvantage is that gloves still cover the hand surfaces and can be uncomfortable to use for long periods, which can be a barrier to natural interaction.


Bare-Hand Controls

Head-mounted display manufacturers and developers have begun creating immersive hand controls that do not use hand-held controllers or gloves. These hand-pose and motion-gesture controls commonly use head-mounted time-of-flight (ToF) motion tracking devices to track hand/finger features and create a virtual hand model that can be analyzed for pose and motion gestures.

Leap Motion VR | HoloLens AR | Nimble VR

Head-mounted hand tracking devices typically provide simplified, articulated skeletal hand models with up to 22 feature points. These feature points can be related to known hand bone and joint positions and orientations. Once the hand configuration is reliably determined, the model can be analyzed for detailed pose metrics and can be said to have a defined state. This pose state can then be tracked, along with filtered features, to characterize the 3D motion. The combination of these metrics is then used to recognize user actions, which are summarized as gestures when mapped to a control interface.
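
A sketch of pose recognition over such a skeletal model: a pinch state is declared when the thumb-tip and index-tip feature points come within a small distance, and fingertip velocity is tracked to characterize the motion. The joint names and threshold follow no particular SDK; Leap Motion and similar APIs expose equivalent data.

<code python>
# Sketch: pinch-pose detection and motion metrics from hand feature points.
import math

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def detect_pinch(joints, threshold_mm=25.0):
    """joints maps feature-point name -> (x, y, z) in millimetres."""
    return dist(joints["thumb_tip"], joints["index_tip"]) < threshold_mm

def tip_velocity(prev, curr, dt):
    """Filtered feature motion: average fingertip speed in mm/s."""
    tips = ("thumb_tip", "index_tip")
    return sum(dist(prev[t], curr[t]) for t in tips) / (len(tips) * dt)

frame0 = {"thumb_tip": (0, 0, 0), "index_tip": (40, 5, 0)}
frame1 = {"thumb_tip": (8, 1, 0), "index_tip": (24, 3, 0)}
if detect_pinch(frame1):
    print("pinch state entered, speed:", tip_velocity(frame0, frame1, 1 / 90))
</code>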

Leap Motion Hand Tracking | Kinect Hand Tracking | PMD CamBoard Pico Hand Tracking

As seen on GestureML.org (http://gestureml.org), there are literally thousands of possible 3D hand gestures that can be reliably recognized by a gesture engine. Selecting the right gesture mappings is a fundamental part of creating a successful, natural gesture-based control scheme.


User Defined Virtual Reality Controls

Hundreds of new peripheral devices are being developed as control devices for VR and AR applications. Most of these replicate traditional gamepad controls. The types of immersive interactions needed for high-fidelity world navigation, manipulation and character control will likely require different control devices for specialized VR/AR applications. Using these customized systems along with user-defined virtual control schemes will allow for highly refined input mapping and increasingly sophisticated cooperation between HCI devices, creating rich, multifaceted experiences with off-the-shelf hardware.

As with other custom controls, it is important to experiment with control mapping and device specialization. VCML has been used to create multimodal controls for common virtual-reality-enabled games such as Grand Theft Auto, Skyrim and BioShock (as shown below).

Grand Theft Auto 5 VR | Skyrim VR | BioShock VR

In the above examples a variety of techniques were used to create immersive controls. In each case VCML provided user-defined, real-time configurable controls that were developed and refined before and during active gameplay. For example, one control scheme for Grand Theft Auto 5 used multiple Nod rings to navigate the character in the 3D world, control weapons and drive vehicles: the left ring moved the character around the 3D world, the right ring handled gun control, and either or both rings could be used to drive. The user could then steer a motorbike with one hand, shoot with the other and change the radio station using voice commands.
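
A hedged sketch of what such a user-defined multimodal control map could look like at runtime: each (device, event) pair routes to a game action, and mappings can be swapped during play. The event and action names are illustrative; VCML itself would express this mapping declaratively in markup rather than in code.

<code python>
# Sketch: a runtime-reconfigurable multimodal control map (illustrative names).
CONTROL_MAP = {
    ("nod_ring_left", "tilt"):     "move_character",
    ("nod_ring_right", "tilt"):    "aim_weapon",
    ("nod_ring_right", "tap"):     "fire_weapon",
    ("nod_ring_left", "twist"):    "steer_vehicle",
    ("voice", "say:next station"): "cycle_radio",
}

def dispatch(device, event, payload=None):
    action = CONTROL_MAP.get((device, event))
    if action:
        print(f"{action}({payload})")   # stand-in for injecting game input

def remap(device, event, action):
    """Runtime reconfiguration: refine the scheme during active gameplay."""
    CONTROL_MAP[(device, event)] = action

dispatch("nod_ring_right", "tap")              # -> fire_weapon(None)
dispatch("voice", "say:next station")          # -> cycle_radio(None)
remap("nod_ring_left", "twist", "lean_bike")   # tweak while playing
</code>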

Mixed Reality Control Schemes
