Qiong Liu, Ph.D.

Principal Research Scientist

Qiong Liu joined FXPAL in 2001. His research interests include paper user interfaces, thermal video processing, immersive conferencing, image/video/audio processing, multimedia, computer vision, machine learning, human-computer interaction, and robotics. Qiong earned his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2001, his M.S. in Precision Instruments from Tsinghua University in 1992, and his B.S. in Biomedical Engineering and Instrument Science from Zhejiang University in 1989.

Publications

2005
Publication Details
  • Short presentation at the UbiComp 2005 workshop in Tokyo, Japan.
  • Sep 11, 2005

Abstract

As the use of rich media in mobile devices and smart environments becomes more sophisticated, so must the design of the everyday objects used as containers or controllers. Rather than simply tacking electronics onto existing furniture or other objects, the design of a smart object can enhance existing applications in unexpected ways. The Convertible Podium is an experiment in the design of a smart object with complex integrated systems, combining the highly designed look and feel of a modern lectern with systems that allow it to serve as a central control station for rich media manipulation in next-generation conference rooms. It enables easy control of multiple independent screens, multiple media sources (including mobile devices) and multiple distribution channels. The Podium is designed to ease the tasks involved in authoring and presenting in a rich media meeting room, as well as supporting remote telepresence and integration with mobile devices.
Publication Details
  • ICME 2005
  • Jul 20, 2005

Abstract

A common problem with teleconferences is awkward turn-taking - particularly 'collisions,' whereby multiple parties inadvertently speak over each other due to communication delays. We propose a model for teleconference discussions that includes the effects of delays, and describe tools that can improve the quality of those interactions. We describe an interface that gently provides latency awareness and gives advance notice of 'incoming speech' to help participants avoid collisions. This is possible when codec latencies are significant, or when a low-bandwidth side channel or out-of-band signaling is available with lower latency than the primary video channel. We report results of simulations, and of experiments carried out with transpacific meetings, demonstrating that these tools can improve the quality of teleconference discussions.
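
To make the collision mechanism concrete, here is a minimal Monte Carlo sketch, not the paper's model: two parties each start speaking at a random moment within a silence gap, and a collision occurs when neither party's speech, or advance notice of it, arrives in time. The 600 ms and 100 ms delays and the 2 s gap are illustrative assumptions.

```python
import random

def collision_rate(notice_delay, gap=2.0, trials=100_000):
    """Both parties start speaking at a random moment within a silence
    gap of `gap` seconds; a collision occurs when each starts before
    notice of the other's speech could arrive (after notice_delay)."""
    hits = 0
    for _ in range(trials):
        a = random.uniform(0.0, gap)   # when party A starts speaking
        b = random.uniform(0.0, gap)   # when party B starts speaking
        if abs(a - b) < notice_delay:  # neither was warned in time
            hits += 1
    return hits / trials

# Advance notice over the full media path vs. a faster side channel.
print(f"media channel (600 ms): {collision_rate(0.6):.3f}")  # ~0.51
print(f"side channel  (100 ms): {collision_rate(0.1):.3f}")  # ~0.10
```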

An Online Video Composition System

Publication Details
  • IEEE International Conference on Multimedia & Expo, July 6-8, 2005, Amsterdam, The Netherlands
  • Jul 6, 2005

Abstract

This paper presents an information-driven online video composition system. The composition work handled by the system includes dynamically setting multiple pan/tilt/zoom (PTZ) cameras to proper poses and selecting the best close-up view for passive viewers. The main idea of the composition system is to maximize captured video information with a limited number of cameras. Unlike video composition based on heuristic rules, our video composition is formulated as a process of minimizing distortions between ideal signals (i.e. signals with infinite spatial-temporal resolution) and displayed signals. The formulation is consistent with many well-known empirical approaches widely used in previous systems and may provide analytical explanations for those approaches. Moreover, it provides a novel approach for studying video composition tasks systematically. The composition system allows each user to select a personal close-up view. It manages PTZ cameras and a video switcher based on both signal characteristics and users' view selections. Additionally, it can automate the video composition process based on users' past view selections when immediate selections are not available. We demonstrate the performance of this system with real meetings.
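
To illustrate the distortion-minimization idea, the sketch below reduces the problem to one dimension; it is a toy model rather than the paper's formulation. View requests and candidate PTZ poses are intervals, and each pose is scored by the weighted area it misses plus a resolution-loss term for what it covers. The pixel count and all intervals are assumptions.

```python
CAMERA_PIXELS = 1000  # assumed horizontal resolution of the PTZ camera

def distortion(request, pose):
    """Distortion proxy for one weighted view request (lo, hi, weight):
    parts of the request outside the pose are lost entirely; parts
    inside are degraded in proportion to the pose's width per pixel."""
    lo, hi, weight = request
    p_lo, p_hi = pose
    covered = max(0.0, min(hi, p_hi) - max(lo, p_lo))
    missed = (hi - lo) - covered
    units_per_pixel = (p_hi - p_lo) / CAMERA_PIXELS  # wider pose -> coarser view
    return weight * (missed + covered * units_per_pixel)

def best_pose(requests, candidate_poses):
    """Choose the pose minimizing total distortion over all requests."""
    return min(candidate_poses,
               key=lambda pose: sum(distortion(r, pose) for r in requests))

requests = [(0.2, 0.4, 1.0), (0.5, 0.6, 2.0)]     # (lo, hi, weight)
poses = [(0.0, 1.0), (0.15, 0.65), (0.45, 0.65)]  # candidate fields of view
print(best_pose(requests, poses))  # -> (0.15, 0.65): tightest pose covering both
```
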
2004
Publication Details
  • Springer Lecture Notes in Computer Science - Advances in Multimedia Information Processing, Proc. PCM 2004 5th Pacific Rim Conference on Multimedia, Tokyo, Japan
  • Dec 1, 2004

Abstract

For some years, our group at FX Palo Alto Laboratory has been developing technologies to support meeting recording, collaboration, and videoconferencing. This paper presents several systems that use video as an active interface, allowing remote devices and information to be accessed "through the screen." For example, SPEC enables collaborative and automatic camera control through an active video window. The NoteLook system allows a user to grab an image from a computer display, annotate it with digital ink, then drag it to that or a different display. The ePIC system facilitates natural control of multi-display and multi-device presentation spaces, while the iLight system allows remote users to "draw" with light on a local object. All our systems serve as platforms for researching more sophisticated algorithms to support additional functionality and ease of use.
Publication Details
  • Proceedings of 2004 IEEE International Conference on Multimedia and Expo (ICME 2004)
  • Jun 27, 2004

Abstract

This paper presents a method for creating highly condensed video summaries called Stained-Glass visualizations. These are especially suitable for small displays on mobile devices. A morphological grouping technique is described for finding 3D regions of high activity or motion from a video embedded in x-y-t space. These regions determine areas in the keyframes, which can be subsumed in a more general geometric framework of germs and supports: germs are the areas of interest, and supports give the context. Algorithms for packing and laying out the germs are provided. Gaps between the germs are filled using a Voronoi-based method. Irregular shapes emerge, and the result looks like stained glass.
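
The first stage can be sketched with standard tools: threshold frame differences and group the surviving voxels morphologically into connected x-y-t regions. This is an approximation under stated assumptions; the paper's germ packing, supports, and Voronoi-based gap filling are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def activity_regions(video, thresh=25, dilate=3):
    """video: (T, H, W) uint8 grayscale frames. Returns one
    (t0, t1, y0, y1, x0, x1) bounding box per high-activity region."""
    diff = np.abs(np.diff(video.astype(np.int16), axis=0))   # temporal gradient
    mask = ndimage.binary_dilation(diff > thresh, iterations=dilate)  # merge nearby voxels
    labels, _ = ndimage.label(mask)                          # connected x-y-t blobs
    return [tuple(v for s in slices for v in (s.start, s.stop))
            for slices in ndimage.find_objects(labels)]

# Synthetic check: a bright block sweeping across the frame yields one region.
video = np.zeros((10, 64, 64), dtype=np.uint8)
for t in range(10):
    video[t, 20:30, 5 * t:5 * t + 10] = 200
print(activity_regions(video))
```
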
Publication Details
  • Proceedings of 2004 IEEE International Conference on Multimedia and Expo (ICME 2004)
  • Jun 27, 2004

Abstract

Using a machine to assist remote environment management can save people time, effort, and travel costs. This paper proposes a trainable mobile robot system that allows people to watch a remote site through a set of cameras installed on the robot, drive the platform around, and control remote devices using mouse- or pen-based gestures performed in video windows. Furthermore, the robot can learn device operations while it is being used by humans. After being used for a while, the robot can automatically select device control interfaces, or launch a pre-defined operation sequence based on its sensory inputs.
Publication Details
  • Proceedings of 2004 IEEE International Conference on Multimedia and Expo (ICME 2004)
  • Jun 27, 2004

Abstract

Many conference rooms are now equipped with multiple multimedia devices, such as plasma displays and surround speakers, to enhance presentation quality. However, most existing presentation authoring tools are based on the one-display-and-one-speaker assumption, which makes it difficult to organize and play back a presentation dispatched to multiple devices, and thus hinders users from taking full advantage of the additional multimedia devices. In this paper, we propose and implement a tool to facilitate authoring and playback of a multi-channel presentation in an environment with distributed media devices. The tool, named PreAuthor, provides an intuitive and visual way to author a multi-channel presentation by dragging and dropping "hyper-slides" onto visual representations of the various devices. PreAuthor supports "hyper-slide" synchronization among the output devices during preview and playback. It also offers the presenter multiple ways to view the presentation: as a rendered image sequence, live video, a 3D VRML model, or the real environment.
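
A hypothetical data-model sketch of the hyper-slide idea follows; the class names and fields are invented for illustration and are not PreAuthor's actual schema. Each presentation step dispatches media to named devices, and all devices within a step update together, which is the synchronization contract.

```python
from dataclasses import dataclass, field

@dataclass
class HyperSlide:
    device: str   # e.g. "front-display", "left-plasma", "speakers" (assumed names)
    media: str    # slide image, video clip, or audio file for that channel

@dataclass
class Step:
    slides: list = field(default_factory=list)  # HyperSlides shown together

def play(steps, renderers):
    """Advance one step at a time; within a step, every target device
    is updated before the presentation moves on."""
    for i, step in enumerate(steps, start=1):
        print(f"-- step {i} --")
        for s in step.slides:
            renderers[s.device](s.media)  # dispatch to that channel

steps = [
    Step([HyperSlide("front-display", "intro.png"),
          HyperSlide("speakers", "welcome.wav")]),
    Step([HyperSlide("front-display", "agenda.png"),
          HyperSlide("left-plasma", "timeline.png")]),
]
renderers = {d: (lambda m, d=d: print(f"[{d}] show {m}"))
             for d in ("front-display", "left-plasma", "speakers")}
play(steps, renderers)
```
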
Publication Details
  • JOINT AMI/PASCAL/IM2/M4 Workshop on Multimodal Interaction and Related Machine Learning Algorithms
  • Jun 22, 2004

Abstract

For some years, our group at FX Palo Alto Laboratory has been developing technologies to support meeting recording, collaboration, and videoconferencing. This paper presents a few of our more interesting research directions. Many of our systems use a video image as an interface, allowing devices and information to be accessed "through the screen." For example, SPEC enables hybrid collaborative and automatic camera control through an active video window. The NoteLook system allows a user to grab an image from a computer display, annotate it with digital ink, then drag it to that or a different display, while automatically generating timestamps for later video review. The ePIC system allows natural use and control of multi-display and multi-device presentation spaces, and the iLight system allows remote users to "draw" with light on a local object. All our systems serve as platforms for researching more sophisticated algorithms that will hopefully support additional advanced functions and ease of use.
2003
Publication Details
  • Proc. ACM Multimedia 2003, pp. 546-554
  • Nov 1, 2003

Abstract

We present a system that allows remote and local participants to control devices in a meeting environment using mouse- or pen-based gestures "through" video windows. Unlike state-of-the-art device control interfaces that require interaction with text commands, buttons, or other artificial symbols, our approach allows users to interact with devices through live video of the environment. This naturally extends our video-supported pan/tilt/zoom (PTZ) camera control system by allowing gestures in video windows to control not only PTZ cameras, but also other devices visible in video images. For example, an authorized meeting participant can show a presentation on a screen by dragging a file from a personal laptop and dropping it onto the video image of the presentation screen. This paper presents the system architecture, implementation tradeoffs, and various meeting control scenarios.
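
The routing step can be sketched as plain hit-testing: the live video frame is annotated with screen regions for controllable devices, and a gesture or drop at pixel (x, y) is dispatched to the device whose region contains it. The regions and handler below are invented for illustration.

```python
# (x0, y0, x1, y1) regions of devices as they appear in the video frame (assumed)
DEVICE_REGIONS = {
    "presentation-screen": (100, 40, 420, 260),
    "room-light":          (500, 10, 560, 90),
}

def device_at(x, y):
    """Return the device whose on-screen region contains the point."""
    for name, (x0, y0, x1, y1) in DEVICE_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def on_drop(x, y, payload):
    """Route a drag-and-drop gesture performed in the video window."""
    target = device_at(x, y)
    if target == "presentation-screen":
        print(f"show {payload} on the presentation screen")
    elif target is None:
        print("drop ignored: no device under the cursor")

on_drop(250, 150, "slides.ppt")  # lands on the screen's region in the video
```
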
Publication Details
  • Proc. IEEE Intl. Conf. on Image Processing
  • Sep 14, 2003

Abstract

This paper presents a video acquisition system that can learn automatic video capture from a human operator's camera operations. Unlike a predefined camera control system, this system can easily adapt to changes in its environment with users' help. By collecting users' camera-control operations under various conditions, the control system can learn video capture from humans, and use these learned skills to operate its cameras when remote viewers don't, won't, or can't operate the system. Moreover, this system allows remote viewers to control their own virtual cameras instead of watching the same video produced by a human operator or a fully automatic system. The online learning algorithm and the camera management algorithm are demonstrated using field data.
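
One way to sketch the learn-from-operator idea is nearest-neighbor recall: log situation/pose pairs while a human drives the cameras, then replay the pose whose logged situation best matches the current one. This stands in for, and does not reproduce, the paper's online learning algorithm; the two-feature "situation" vector is an assumption.

```python
import math

log = []  # (situation features, (pan, tilt, zoom)) gathered during manual use

def record(features, pose):
    log.append((features, pose))

def suggest(features):
    """Pose recorded under the most similar situation (Euclidean distance)."""
    return min(log, key=lambda entry: math.dist(entry[0], features))[1]

record((0.9, 0.1), (30.0, 5.0, 2.0))  # loud speaker, little motion: zoomed close-up
record((0.2, 0.8), (0.0, 0.0, 1.0))   # quiet room, much motion: wide view
print(suggest((0.8, 0.2)))            # -> (30.0, 5.0, 2.0)
```
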
Publication Details
  • Proceedings of INTERACT '03, pp. 583-590.
  • Sep 1, 2003

Abstract

In a meeting room environment with multiple public wall displays and personal notebook computers, it is possible to design a highly interactive experience for manipulating and annotating slides. For the public displays, we present the ModSlideShow system with a discrete modular model for linking the displays into groups, along with a gestural interface for manipulating the flow of slides within a display group. For the applications on personal devices, an augmented reality widget with panoramic video supports interaction among the various displays. This widget is integrated into our NoteLook 3.0 application for annotating, capturing and beaming slides on pen-based notebook computers.
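
The discrete modular model can be sketched as a chain of displays within a group, where a gesture pushes the slide shown on one display to the next; the class and method names here are invented, not ModSlideShow's API.

```python
class DisplayGroup:
    """Displays linked into an ordered chain; slides flow along it."""
    def __init__(self, names):
        self.names = list(names)
        self.slides = {n: None for n in names}  # what each display shows

    def show(self, name, slide):
        self.slides[name] = slide

    def push_right(self, name):
        """Gesture handler: move this display's slide to its right neighbor."""
        i = self.names.index(name)
        if i + 1 < len(self.names) and self.slides[name] is not None:
            self.slides[self.names[i + 1]] = self.slides[name]
            self.slides[name] = None

group = DisplayGroup(["left", "center", "right"])
group.show("left", "slide-1.png")
group.push_right("left")
print(group.slides)  # slide-1.png has moved to "center"
```
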
Publication Details
  • 2003 International Conference on Multimedia and Expo
  • Jul 6, 2003

Abstract

This paper presents an information-driven audiovisual signal acquisition approach. This approach has several advantages: users are encouraged to assist in signal acquisition, and available sensors are managed based on both signal characteristics and users' suggestions. The problem formulation is consistent with many well-known empirical approaches widely used in previous systems and may provide analytical explanations for these approaches. We demonstrate the use of this approach for pan/tilt/zoom (PTZ) camera management with field data.
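
A tiny sketch of blending signal characteristics with users' suggestions when deciding what to capture; the linear weighting, the regions, and all scores are illustrative assumptions rather than the paper's formulation.

```python
def score(region, saliency, votes, alpha=0.5):
    """Higher score -> more worth capturing at high resolution."""
    return alpha * saliency[region] + (1 - alpha) * votes.get(region, 0.0)

saliency = {"podium": 0.9, "whiteboard": 0.4, "audience": 0.2}  # from signal analysis
votes = {"whiteboard": 1.0}                                     # a user asked for it
target = max(saliency, key=lambda r: score(r, saliency, votes))
print(target)  # "whiteboard": user interest lifts it above the salient podium
```
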
2002
Publication Details
  • ACM Multimedia 2002
  • Dec 1, 2002

Abstract

FlySPEC is a video camera system designed for real-time remote operation. A hybrid design combines the high resolution possible with an optomechanical video camera and the wide field of view always available from a panoramic camera. The control system integrates requests from multiple users so that each effectively controls a virtual camera. It seamlessly blends manual and fully automatic control, supporting a range of options from untended automatic operation to full manual control, and the system can learn control strategies from user requests. Additionally, the panoramic view is always available for an intuitive interface, and objects are never out of view regardless of the zoom factor. We present the system architecture, an information-theoretic approach to combining panoramic and zoomed images to optimally satisfy user requests, and experimental results showing that the FlySPEC system significantly assists users in remote inspection tasks.
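
One simple policy in this spirit, standing in for FlySPEC's actual information-theoretic method: point the single PTZ camera at the region requested by the most users, and serve everyone else with crops of the panorama. Names and regions are invented.

```python
from collections import Counter

def assign_views(requests):
    """requests: (user, region) pairs, region being a hashable box.
    Returns {user: ("ptz" | "panorama-crop", region)}."""
    if not requests:
        return {}
    most_wanted, _ = Counter(r for _, r in requests).most_common(1)[0]
    return {user: ("ptz" if region == most_wanted else "panorama-crop", region)
            for user, region in requests}

views = assign_views([("alice", (0, 0, 10, 10)),
                      ("bob",   (0, 0, 10, 10)),
                      ("carol", (20, 20, 30, 30))])
print(views)  # alice and bob share the sharp PTZ view; carol gets a crop
```
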
Publication Details
  • IEEE International Conference on Multimedia and Expo 2002
  • Aug 26, 2002

Abstract

This paper presents a camera system called FlySPEC. In contrast to a traditional camera system that provides the same video stream to every user, FlySPEC can simultaneously serve different video-viewing requests. This flexibility allows users to conveniently participate in a seminar or meeting at their own pace. Meanwhile, the FlySPEC system provides a seamless blend of manual control and automation. With this control mix, users can easily make tradeoffs between video capture effort and video quality. The FlySPEC camera is constructed by installing a set of pan/tilt/zoom (PTZ) cameras near a high-resolution panoramic camera. While the panoramic camera provides the basic functionality of serving different viewing requests, the PTZ cameras are managed by our algorithm to improve the overall video quality for users watching details. The video resolution improvements obtained with different camera management strategies are compared in the experimental section.
Publication Details
  • SPIE ITCOM 2002
  • Jul 31, 2002

Abstract

We present a framework, motivated by rate-distortion theory and the human visual system, for optimally representing the real world given limited video resolution. To provide users with high fidelity views, we built a hybrid video camera system that combines a fixed wide-field panoramic camera with a controllable pan/tilt/zoom (PTZ) camera. In our framework, a video frame is viewed as a limited-frequency representation of some "true" image function. Our system combines outputs from both cameras to construct the highest fidelity views possible, and controls the PTZ camera to maximize information gain available from higher spatial frequencies. In operation, each remote viewer is presented with a small panoramic view of the entire scene, and a larger close-up view of a selected region. Users may select a region by marking the panoramic view. The system operates the PTZ camera to best satisfy requests from multiple users. When no regions are selected, the system automatically operates the PTZ camera to minimize predicted video distortion. High-resolution images are cached and sent if a previously recorded region has not changed and the PTZ camera is pointed elsewhere. We present experiments demonstrating that the panoramic image can effectively predict where to gain the most information, and also that the system provides better images to multiple users than conventional camera systems.
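
As a toy stand-in for "maximize information gain" (the paper's distortion predictor is not reproduced), the sketch below uses local variance in the panorama as a proxy for high-frequency detail the PTZ camera could recover, and points the camera at the richest block. The block size and the synthetic scene are assumptions.

```python
import numpy as np

def best_ptz_target(panorama, block=32):
    """panorama: (H, W) grayscale array. Return the (row, col) of the
    block whose contents suggest the most unrecovered spatial detail."""
    h, w = panorama.shape
    best, target = -1.0, (0, 0)
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            gain = panorama[r:r + block, c:c + block].var()  # detail proxy
            if gain > best:
                best, target = gain, (r, c)
    return target

rng = np.random.default_rng(0)
panorama = np.zeros((128, 256))
panorama[64:96, 160:192] = rng.normal(128.0, 40.0, (32, 32))  # one busy patch
print(best_ptz_target(panorama))  # -> (64, 160)
```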