- Publication Details
- The International Symposium on Pervasive Displays
- June 4, 2013
VPoint explores the use of large displays for collaborative content presentation and manipulation.
Mouse-based interaction on large, high-resolution displays can be problematic. An unscaled mouse cursor becomes so small that it can hardly be located on the screen when viewed from a comfortable distance, and the default tracking speed of regular mice makes it tedious to move content across such large screens. At FXPAL, we are exploring full-body gestural interfaces as an alternative to mouse-based interaction on large displays. One advantage of gesture-based interaction is that gestures can be simple to perform and cover larger spatial distances, so smaller control-display gains can be used. Gestures can also be intuitive, for instance when the UI follows Natural User Interface (NUI) principles, where interactive objects expose their functionality during interaction. Finally, we feel that gestural interfaces will promote movement and activity at otherwise sedentary workplaces, improving users’ health and well-being.
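To make the control-display gain argument concrete, here is a minimal sketch of how a tracked hand movement might be mapped to cursor movement. The function name and the pixels-per-meter figure are illustrative assumptions, not values from the VPoint system:

```python
def cursor_delta(hand_delta_m, gain, px_per_m=3750.0):
    """Map a tracked hand movement (in meters) to a cursor movement (in pixels).

    Because an arm sweep covers a much larger physical distance than a
    mouse movement, a comparatively small control-display gain can still
    traverse an entire wall-sized display.

    px_per_m is an illustrative display density, not a measured value.
    """
    return hand_delta_m * gain * px_per_m

# An 0.4 m arm sweep at a modest gain of 2.5 crosses 3750 px of screen space.
print(cursor_delta(0.4, 2.5))  # 3750.0
```

With mouse input, reaching the same distance at its default tracking speed would require repeated clutching, which is exactly what makes large displays tedious to operate with a mouse.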
The VPoint prototype aims to explore the use of a large display for collaborative content presentation and manipulation. It uses gesture-based input tracked by a Kinect sensor, and is directly integrated with the Windows 7 desktop.
Even though sensors for gestural interfaces, such as the Kinect, have become widely adopted, they still have limitations. High tracking noise makes input imprecise and fine manipulation tasks difficult. To allow precise manipulation of content, i.e., positioning and scaling of windows, we have developed a target expansion scheme based on an adapted Voronoi tessellation. By subdividing the screen contents in this way, the effective target area of each on-screen object is greatly expanded, which facilitates pointing and manipulation. The VPoint prototype implements this target expansion scheme using a semi-transparent visualization of the target areas, overlaid directly on the Windows desktop.
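The core idea of Voronoi-style target expansion can be sketched in a few lines: every point on the screen is assigned to the nearest target, so a noisy pointer still selects a window even when it misses the window's actual bounds. This is a simplified illustration using window centers as Voronoi sites; the `Window` class and function names are hypothetical, and VPoint's adapted tessellation may differ in how it derives the cells:

```python
from dataclasses import dataclass

@dataclass
class Window:
    """A desktop window, described by its title and bounding rectangle."""
    title: str
    x: float
    y: float
    w: float
    h: float

    @property
    def center(self):
        return (self.x + self.w / 2, self.y + self.h / 2)

def expanded_target(pointer, windows):
    """Return the window whose Voronoi cell contains the pointer.

    With center points as sites, the cell of a window is the set of screen
    points closer to its center than to any other window's center, so the
    whole screen becomes clickable target area.
    """
    px, py = pointer
    return min(
        windows,
        key=lambda w: (w.center[0] - px) ** 2 + (w.center[1] - py) ** 2,
    )

windows = [
    Window("Editor", 0, 0, 800, 600),     # center at (400, 300)
    Window("Browser", 900, 100, 600, 400) # center at (1200, 300)
]

# The pointer at (850, 300) lies in the gap between the two windows,
# but the expansion scheme still resolves it to the nearer one.
target = expanded_target((850, 300), windows)
print(target.title)  # Browser
```

Because every screen point resolves to some window, jitter in the tracked hand position no longer causes selections to fall on empty desktop space, which is what makes fine positioning and scaling feasible despite sensor noise.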
We are currently examining data from user studies of the VPoint prototype, in order to answer the following usability questions:
- Does the tessellation scheme improve precision and lower task completion times?
- Is it necessary to show a visual overlay of the expanded target areas, or will an invisible tessellation also be effective?
Although we have addressed the immediate problem of improving pointing and manipulation with gestures on large displays, we would like to extend the prototype to allow manipulating the actual window content of applications. We therefore aim to explore gestural interaction techniques that allow intuitive and seamless switching between interaction at the desktop “meta” level and within individual application windows. Here, it may be necessary to evaluate multi-modal techniques and compare them to purely gestural approaches.