AI for Toggling the Linearity of Interactions in AR

Abstract

Interaction in Augmented Reality (AR) and Mixed Reality (MR)
environments is generally classified into two modalities: linear
(relative to the object) and non-linear (relative to the camera).
Switching between these modes can be arduous when a user's
interaction with the device is limited or restricted, as is often
the case in medical or industrial settings where the hands may be
sterile or soiled. To address this, we present Sound-to-Experience,
in which the modality is toggled by a noise or sound detected by a
modern deep neural network classifier.
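
The core mechanism can be summarized in a few lines: an audio
window is classified, and a positive detection flips the
interaction mode. The Python sketch below illustrates only the
idea; the classify_trigger energy threshold is a hypothetical
stand-in for the paper's deep-network classifier, and all names
are illustrative rather than the authors' implementation.

    # Minimal sketch (not the authors' implementation): a sound
    # detection gates a toggle between object-relative ("linear")
    # and camera-relative ("non-linear") interaction modes.
    import numpy as np

    SAMPLE_RATE = 16_000        # assumed microphone sample rate
    WINDOW = SAMPLE_RATE // 2   # 0.5 s analysis window

    def classify_trigger(window: np.ndarray) -> bool:
        """Hypothetical stand-in for the deep-network classifier.

        A simple energy threshold marks a window as a trigger
        sound; the real system would run a trained network here.
        """
        return float(np.mean(window ** 2)) > 0.01

    class InteractionMode:
        """Tracks the current modality; flips it on a trigger."""

        def __init__(self) -> None:
            self.object_relative = True  # start in "linear" mode

        def on_audio(self, window: np.ndarray) -> None:
            if classify_trigger(window):
                self.object_relative = not self.object_relative

    if __name__ == "__main__":
        mode = InteractionMode()
        rng = np.random.default_rng(0)
        quiet = rng.normal(0.0, 0.01, WINDOW)  # ambient: no toggle
        loud = rng.normal(0.0, 0.5, WINDOW)    # clap-like: toggles
        for window in (quiet, loud, quiet):
            mode.on_audio(window)
            print("object-relative" if mode.object_relative
                  else "camera-relative")

In the real system the threshold test would be replaced by a
forward pass of the trained classifier over the audio window, but
the surrounding toggle logic would be the same.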