Smart Spaces

FXPAL research in Smart Spaces explores human interaction within smart environments. Using state-of-the-art sensors such as depth cameras, acoustic arrays, and tracking devices, we create sensitive-space applications for interacting with large data sets, with complex systems, and with other people. Our “Smart Space” multiple-display environments include media walls that support interaction with users’ wearable and mobile devices (e.g., Google Glass) as well as direct-input sensing. Our goal is to provide augmented spaces where local and remote collaboration is informed by real-time, ambient, and archival data streams in combination with live input.

Personal smart spaces

  • Personal Lite Displays
    This research in projector-camera applications focuses on gestural interaction, particularly a novel technique that we invented: gesture-based GUI widgets. The GUI widgets have salient hotspots that provide visual cues to the user and enable better gesture detection by the system. This approach has advantages over existing methods: it can support both touch and touchless interfaces, it provides better perceived affordance, and it can support complex tasks with repeated actions.
  • GIST: Glass Intelligent Speech to Text
    "OK Glass, gimme the gist!" As part of the Google Glass Explorer program, we are developing a real-world captioning system for the Glass wearable augmented reality computer.

Spatial interfaces

  • VPoint
    Existing user interfaces for the configuration of large shared displays with multiple inputs and outputs usually do not allow users easy and direct configuration of the display's properties such as window arrangement or scaling. To address this problem, we are exploring a gesture-based technique for manipulating display windows on shared display systems. To aid target selection under noisy tracking conditions, we propose VPoint, a modified Voronoi tessellation approach that increases the selectable target area of the display windows. By maximizing the available target area, users can select and interact with display windows with greater ease and precision.
  • PointPose
    The expressiveness of touch input can be increased by detecting additional finger pose information at the point of touch such as finger rotation and tilt. Our PointPose prototype performs finger pose estimation at the location of touch using a short-range depth sensor viewing the touch screen of a mobile device. Our approach does not require complex external tracking hardware, and external computation is unnecessary as the finger pose extraction algorithm runs directly on the mobile device. This makes PointPose ideal for prototyping and developing novel mobile user interfaces that use finger pose estimation.

Mixed realities and mirror worlds

  • The Virtual Factory: Industrial Collaboration Environments
The Virtual Factory project investigates applications of mixed reality, mobile devices, and virtual worlds in industrial settings. In collaboration with TCHO, a chocolate-maker start-up in San Francisco, we created virtual "mirror world" representations of a real-world chocolate factory, and then imported real-time sensor data from the factory floor into the resulting virtual factory. This 3D environment is designed for simulation, visualization, and collaboration, using a set of interlinked, real-time 3D, 2D, and mobile layers of information about the TCHO chocolate factory and its processes.
  • Magic Mirror
Our Magic Mirror work creates dynamic 3D virtual models of physical spaces that reflect the structure and activities of those spaces, supporting navigation, context awareness, and tasks such as planning and recollection of events. A rich sensor network dynamically updates the models, determining the positions of people and the status of rooms, and updating textures to reflect displays or bulletin boards. Through views on web pages, portable devices, or ‘magic window’ displays located in the physical space, remote people may ‘look in’ to the space, while people within the space are provided with augmented views showing information not physically apparent.
  • Seamless Design Visions
Developing a shared vision through speculative design and scenario building is an important part of FXPAL's research. We are interested in a quality we call "seamlessness," where display and interaction options move between spaces, surfaces, and media with ease.



One of our visions of seamless spaces.


The Virtual Factory monitors and controls real-world factories.

VPoint is a gesture-based tool for manipulating windows over large display areas.


For our Magic Mirror work we created data-driven 3D virtual worlds embedded in Google Earth.

Copyright ©1999-2014 FX Palo Alto Laboratory | Send feedback to the webmaster