Publications

FXPAL publishes in top scientific conferences and journals.

2001
Publication Details
  • Proceedings of ACM CHI 2001, vol. 3, pp. 442-449, Seattle, Washington, USA, March 31-April 5, 2001.
  • Apr 5, 2001

Abstract

Given rapid improvements in network infrastructure and streaming-media technologies, a large number of corporations and universities are recording lectures and making them available online for anytime, anywhere access. However, producing high-quality lecture videos is still labor intensive and expensive. Fortunately, recent technology advances are making it feasible to build automated camera management systems to capture lectures. In this paper we report on our design, implementation, and study of such a system. Compared to previous work, which has tended to be technology centric, we started with interviews with professional video producers and used their knowledge and expertise to create video production rules. We then targeted technology components that allowed us to implement a substantial portion of these rules, including the design of a virtual video director. The system's performance was compared to that of a human operator via a user study. Results suggest that our system's quality is close to that of a human-controlled system. In fact, most remote audience members could not tell whether the video was produced by a computer or a person.
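
As a rough illustration of how production rules of this kind can drive a virtual director, here is a minimal sketch in Python; the specific rules, event names, and the five-second minimum shot length are illustrative assumptions, not the rules actually derived from the producer interviews.

```python
# Illustrative sketch of a rule-based virtual video director.
# The rules and event names below are hypothetical examples, not the
# production rules derived in the paper.

MIN_SHOT_SECONDS = 5  # professional rule of thumb: avoid rapid cutting

def choose_camera(state):
    """Pick a camera given the current room state (a dict of observations)."""
    # Rule 1: if an audience member is speaking, show the audience camera.
    if state.get("audience_speaking"):
        return "audience"
    # Rule 2: if the slide just changed, cut to the screen camera.
    if state.get("slide_changed"):
        return "screen"
    # Rule 3: otherwise track the lecturer, falling back to an overview
    # shot when the tracker loses the speaker.
    if state.get("speaker_tracked"):
        return "speaker"
    return "overview"

def direct(events, current="overview", elapsed=0.0):
    """Only switch shots if the current shot has run long enough."""
    candidate = choose_camera(events)
    if candidate != current and elapsed < MIN_SHOT_SECONDS:
        return current          # keep the shot; too soon to cut
    return candidate
```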

Quiet Calls: Talking Silently on Mobile Phones

Publication Details
  • In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 174-181, ACM Press, March 31-April 5, 2001, Seattle, WA.
  • Mar 30, 2001
Publication Details
  • In Proceedings of the Thirty-fourth Annual Hawaii International Conference on System Sciences (HICSS), Big Island, Hawaii. January 7-12, 2001.
  • Feb 7, 2001

Abstract

This paper describes a new system for panoramic two-way video communication. Digitally combining images from an array of video cameras results in a wide-field panoramic camera built from inexpensive off-the-shelf hardware. This system can aid distance learning in several ways, both by presenting a better view of the instructor and teaching materials to the students and by enabling better audience feedback to the instructor. Because the camera is fixed with respect to the background, simple motion analysis can be used to track objects and people of interest. Electronically selecting a region of the panorama results in a rapidly steerable "virtual camera." We present system details and a prototype distance-learning scenario using multiple panoramic cameras.
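
A minimal sketch of the "virtual camera" idea: because the panoramic camera is fixed, frame differencing can locate motion and a view window can be cropped around it. The array shapes, noise threshold, and centroid heuristic below are assumptions for illustration, not the system's actual tracker.

```python
import numpy as np

def virtual_camera(prev_frame, frame, view_w=320, view_h=240):
    """Crop a steerable 'virtual camera' window from a fixed panoramic frame.

    Frames are grayscale numpy arrays (H x W), assumed larger than the view
    window. Because the panoramic camera does not move, simple frame
    differencing highlights moving objects; the window is centred on the
    centroid of that motion.
    """
    motion = np.abs(frame.astype(float) - prev_frame.astype(float))
    motion[motion < 20] = 0                      # ignore sensor noise
    total = motion.sum()
    h, w = frame.shape
    if total == 0:                               # nothing moved: centre the view
        cy, cx = h // 2, w // 2
    else:
        ys, xs = np.indices(motion.shape)
        cy = int((ys * motion).sum() / total)
        cx = int((xs * motion).sum() / total)
    top = min(max(cy - view_h // 2, 0), h - view_h)
    left = min(max(cx - view_w // 2, 0), w - view_w)
    return frame[top:top + view_h, left:left + view_w]
```
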
Publication Details
  • WebNet 2001 World Conference on the WWW and Internet, Orlando, FL
  • Jan 17, 2001

Abstract

As more information is made available online, users collect information in personal information spaces like bookmarks and emails. While most users feel that organizing these collections is crucial to improve access, studies have shown that this activity is time consuming and cognitively demanding. Automatic classification has been used, but because it relies on the full text of the documents, it does not generate personalized classifications. Our approach is to give users the ability to annotate their documents as they first access them. This annotation tool is unobtrusive and welcomed by most users, who generally miss this facility when dealing with digital documents. Our experiments show that these annotations can be used to generate personalized classifications of annotated Web pages.
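
A toy sketch of classifying pages by their annotations rather than their full text; the term-frequency grouping below is a stand-in assumption, not the classification method evaluated in the paper.

```python
from collections import defaultdict

def classify_by_annotation(pages):
    """Group annotated pages by their annotation terms rather than full text.

    `pages` maps a URL to the free-text annotation the user wrote when first
    accessing it. This toy version files each page under its most widely
    shared annotation term; the paper's classifier is more sophisticated.
    """
    # Count how often each term appears across all annotations.
    term_freq = defaultdict(int)
    for note in pages.values():
        for term in set(note.lower().split()):
            term_freq[term] += 1

    categories = defaultdict(list)
    for url, note in pages.items():
        terms = note.lower().split()
        if not terms:
            categories["unlabelled"].append(url)
            continue
        label = max(set(terms), key=lambda t: term_freq[t])
        categories[label].append(url)
    return dict(categories)
```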

Description and Narrative in Hypervideo

Publication Details
  • Proceedings of the Thirty-Fourth Annual Hawaii International Conference on System Sciences
  • Jan 3, 2001

Abstract

While hypertext was originally conceived for the management of scientific and technical information, it has been embraced with great enthusiasm by several members of the literary community for the promises it offers towards new approaches to narrative. Experiments with hypertext-based interactive narrative were originally based solely on verbal text but have more recently extended to include digital video artifacts. The most accomplished of these experiments, HyperCafe, provided new insights into the nature of narrative and how it may be presented; but it also offered an opportunity to reconsider other text types. This paper is an investigation of the application of an approach similar to HyperCafe to a descriptive text. We discuss how the approach serves the needs of description and illustrate the discussion with a concrete example. We then conclude by considering the extent to which our experiences with description may be applied to our continuing interest in narrative.
2000
Publication Details
  • ACM Computing Surveys, Vol. 32 No. 4, December 2000.
  • Dec 1, 2000

Abstract

Modern window-based user interface systems generate user interface events as natural products of their normal operation. Because such events can be automatically captured and because they indicate user behavior with respect to an application's user interface, they have long been regarded as a potentially fruitful source of information regarding application usage and usability. However, because user interface events are typically voluminous and rich in detail, automated support is generally required to extract information at a level of abstraction that is useful to investigators interested in analyzing application usage or evaluating usability. This survey examines computer-aided techniques used by HCI practitioners and researchers to extract usability-related information from user interface events. A framework is presented to help HCI practitioners and researchers categorize and compare the approaches that have been, or might fruitfully be, applied to this problem. Because many of the techniques in the research literature have not been evaluated in practice, this survey provides a conceptual evaluation to help identify some of the relative merits and drawbacks of the various classes of approaches. Ideas for future research in this area are also presented. This survey addresses the following questions: How might user interface events be used in evaluating usability? How are user interface events related to other forms of usability data? What are the key challenges faced by investigators wishing to exploit this data? What approaches have been brought to bear on this problem and how do they compare to one another? What are some of the important open research questions in this area?
Publication Details
  • Multimedia Modeling: Modeling Multimedia Information and Systems, Nagano, Japan
  • Nov 12, 2000

Abstract

While hypermedia is usually presented as a way to offer content in a nonlinear manner, hypermedia structure tends to reinforce the assumption that reading is basically a linear process. Link structures provide a means by which the reader may choose different paths to traverse; but each of these paths is fundamentally linear, revealed through either a block of text or a well-defined chain of links. While there are experiences that get beyond such linear constraints, such as driving a car, it is very hard to capture this kind of non-linearity, characterized by multiple sources of stimuli competing for attention, in a hypermedia document. This paper presents a multi-channel document infrastructure that provides a means by which all such sources of attention are presented on a single "page" (i.e., a display with which the reader interacts) and move between background and foreground in response to the activities of the reader. The infrastructure thus controls the presentation of content with respect to four dimensions: visual, audio, interaction support, and rhythm.
Publication Details
  • In Proceedings of UIST '00, ACM Press, pp. 81-89, 2000.
  • Nov 4, 2000

Abstract

Hitchcock is a system that allows users to easily create custom videos from raw video shot with a standard video camera. In contrast to other video editing systems, Hitchcock uses automatic analysis to determine the suitability of portions of the raw video. Unsuitable video typically has fast or erratic camera motion. Hitchcock first analyzes video to identify the type and amount of camera motion: fast pan, slow zoom, etc. Based on this analysis, a numerical "unsuitability" score is computed for each frame of the video. Combined with standard editing rules, this score is used to identify clips for inclusion in the final video and to select their start and end points. To create a custom video, the user drags keyframes corresponding to the desired clips into a storyboard. Users can lengthen or shorten the clip without specifying the start and end frames explicitly. Clip lengths are balanced automatically using a spring-based algorithm.
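
One way to read the spring-based balancing step: treat each clip's automatically chosen duration as a spring's rest length and stretch or compress all clips so they sum to the target duration. The closed form and uniform stiffness below are assumptions for illustration, not Hitchcock's exact algorithm.

```python
def balance_clip_lengths(rest_lengths, target_total, stiffness=None):
    """Spring-style balancing of clip lengths (illustrative, not Hitchcock's exact rule).

    Each clip i gets a spring with rest length rest_lengths[i] and stiffness
    stiffness[i]; minimising the total spring energy subject to the lengths
    summing to target_total gives the closed form below (stiffer clips move less).
    """
    n = len(rest_lengths)
    if stiffness is None:
        stiffness = [1.0] * n
    slack = target_total - sum(rest_lengths)
    compliance = sum(1.0 / k for k in stiffness)
    return [r + slack * (1.0 / k) / compliance
            for r, k in zip(rest_lengths, stiffness)]

# Example: three clips of 4, 6 and 10 seconds squeezed into a 15-second video.
print(balance_clip_lengths([4, 6, 10], 15))   # [2.33..., 4.33..., 8.33...]
```
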
Publication Details
  • In Proceedings of the International Symposium on Music Information Retrieval, in press.
  • Oct 23, 2000

Abstract

We introduce ARTHUR, an audio retrieval-by-example system for orchestral music. Unlike many other approaches, this system is based on analysis of the audio waveform and does not rely on symbolic or MIDI representations. ARTHUR retrieves audio on the basis of long-term structure, specifically the variation of softer and louder passages. The long-term structure is determined from the envelope of audio energy versus time in one or more frequency bands. Similarity between energy profiles is calculated using dynamic programming. Given an example audio document, other documents in a collection can be ranked by similarity of their energy profiles. Experiments are presented for a modest corpus that demonstrate excellent results in retrieving different performances of the same orchestral work, given an example performance or short excerpt as a query.
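
A sketch of the dynamic-programming comparison of energy envelopes (essentially dynamic time warping); the local cost and normalization are assumptions, not necessarily ARTHUR's exact formulation.

```python
import numpy as np

def envelope_distance(a, b):
    """Dynamic-programming (DTW-style) distance between two energy envelopes.

    a and b are 1-D arrays of audio energy versus time (e.g. RMS per window).
    Smaller values mean the long-term loud/soft structure is more similar.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # deletion
                                 D[i, j - 1],      # insertion
                                 D[i - 1, j - 1])  # match
    return D[n, m] / (n + m)   # normalise by a bound on the path length

def rank_by_similarity(query_env, collection):
    """Rank documents in `collection` (name -> envelope) by similarity to the query."""
    return sorted(collection, key=lambda name: envelope_distance(query_env, collection[name]))
```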

An Introduction to Quantum Computing for Non-Physicists.

Publication Details
  • ACM Computing Surveys, Vol. 32(3), pp. 300-335
  • Sep 1, 2000

Abstract

Richard Feynman's observation that quantum mechanical effects could not be simulated efficiently on a computer led to speculation that computation in general could be done more efficiently if it used quantum effects. This speculation appeared justified when Peter Shor described a polynomial time quantum algorithm for factoring integers. In quantum systems, the computational space increases exponentially with the size of the system which enables exponential parallelism. This parallelism could lead to exponentially faster quantum algorithms than possible classically. The catch is that accessing the results, which requires measurement, proves tricky and requires new non-traditional programming techniques. The aim of this paper is to guide computer scientists and other non-physicists through the conceptual and notational barriers that separate quantum computing from conventional computing. We introduce basic principles of quantum mechanics to explain where the power of quantum computers comes from and why it is difficult to harness. We describe quantum cryptography, teleportation, and dense coding. Various approaches to harnessing the power of quantum parallelism are explained, including Shor's algorithm, Grover's algorithm, and Hogg's algorithms. We conclude with a discussion of quantum error correction.
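
The exponential parallelism mentioned above is often summarized in the standard textbook form below (not taken from the paper): n Hadamard gates create a superposition over all 2^n inputs, and a single application of U_f evaluates f on every input at once, although a measurement reveals only one outcome.

```latex
% n Hadamard gates put n qubits into an equal superposition of all 2^n
% basis states; one application of U_f then evaluates f on every input
% "in parallel", but measurement yields only a single result.
H^{\otimes n}\,|0\rangle^{\otimes n}
  = \frac{1}{\sqrt{2^{\,n}}}\sum_{x=0}^{2^{n}-1}|x\rangle,
\qquad
U_f\!\left(\frac{1}{\sqrt{2^{\,n}}}\sum_{x}|x\rangle|0\rangle\right)
  = \frac{1}{\sqrt{2^{\,n}}}\sum_{x}|x\rangle|f(x)\rangle .
```
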
Publication Details
  • In Multimedia Tools and Applications, 11(3), pp. 347-358, 2000.
  • Aug 1, 2000

Abstract

In accessing large collections of digitized videos, it is often difficult to find both the appropriate video file and the portion of the video that is of interest. This paper describes a novel technique for determining keyframes that are different from each other and provide a good representation of the whole video. We use keyframes to distinguish videos from each other, to summarize videos, and to provide access points into them. The technique can determine any number of keyframes by clustering the frames in a video and by selecting a representative frame from each cluster. Temporal constraints are used to filter out some clusters and to determine the representative frame for a cluster. Desirable visual features can be emphasized in the set of keyframes. An application for browsing a collection of videos makes use of the keyframes to support skimming and to provide visual summaries.
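
A minimal sketch of the clustering approach: cluster per-frame feature vectors (e.g. color histograms), drop clusters that fail a simple temporal constraint, and keep the frame nearest each remaining centroid. The k-means variant, run-length constraint, and thresholds are assumptions, not the paper's exact method.

```python
import numpy as np

def select_keyframes(features, num_keyframes=5, min_run=10, iters=20, seed=0):
    """Pick keyframes by clustering per-frame feature vectors.

    `features` is a (num_frames x d) array, e.g. one color histogram per frame.
    Frames are clustered with a small k-means; clusters whose longest contiguous
    run of frames is shorter than `min_run` are dropped (a simple temporal
    constraint), and the frame closest to each remaining centroid is kept.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), num_keyframes, replace=False)].astype(float)
    for _ in range(iters):                                    # plain k-means
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(num_keyframes):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)

    keyframes = []
    for k in range(num_keyframes):
        idx = np.flatnonzero(labels == k)
        if len(idx) == 0:
            continue
        runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        if max(len(r) for r in runs) < min_run:               # temporal filter
            continue
        keyframes.append(int(idx[dists[idx, k].argmin()]))    # nearest to centroid
    return sorted(keyframes)
```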

Expanding a Tangible User Interface

Publication Details
  • In proceedings of DIS'2000, ACM Press, August 2000.
  • Aug 1, 2000
Publication Details
  • In Proceedings of IEEE International Conference on Multimedia and Expo, vol. III, pp. 1329-1332, 2000.
  • Jul 30, 2000

Abstract

We describe a genetic segmentation algorithm for video. This algorithm operates on segments of a string representation. It is similar to both classical genetic algorithms that operate on bits of a string and genetic grouping algorithms that operate on subsets of a set. For evaluating segmentations, we define similarity adjacency functions, which are extremely expensive to optimize with traditional methods. The evolutionary nature of genetic algorithms offers a further advantage by enabling incremental segmentation. Applications include video summarization and indexing for browsing, plus adapting to user access patterns.
Publication Details
  • In Proceedings of the Genetic and Evolutionary Computation Conference, Morgan Kaufmann Publishers, pp. 666-673, 2000.
  • Jul 8, 2000

Abstract

We describe a genetic segmentation algorithm for image data streams and video. This algorithm operates on segments of a string representation. It is similar to both classical genetic algorithms that operate on bits of a string and genetic grouping algorithms that operate on subsets of a set. It employs a segment fair crossover operation. For evaluating segmentations, we define similarity adjacency functions, which are extremely expensive to optimize with traditional methods. The evolutionary nature of genetic algorithms offers a further advantage by enabling incremental segmentation. Applications include browsing and summarizing video and collections of visually rich documents, plus a way of adapting to user access patterns.
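
A compact sketch of a genetic segmentation algorithm of this general kind: individuals are sets of segment boundaries, fitness is a simple similarity-adjacency score, and crossover mixes the parents' boundaries. The operators and fitness below are illustrative assumptions (a real similarity adjacency function would also penalize over-segmentation), and the crossover is only a rough stand-in for the segment fair crossover described in the paper.

```python
import random

def fitness(boundaries, sim):
    """Average within-segment pairwise similarity (a simple similarity-adjacency score)."""
    edges = boundaries + [len(sim)]
    total, count = 0.0, 0
    for start, end in zip(edges, edges[1:]):
        for i in range(start, end):
            for j in range(i + 1, end):
                total += sim[i][j]
                count += 1
    return total / count if count else 0.0

def crossover(a, b):
    """Mix two parents' boundary sets (a rough stand-in for segment fair crossover)."""
    return sorted({0} | {x for x in a + b if random.random() < 0.5})

def mutate(boundaries, n, rate=0.1):
    """Occasionally nudge a boundary one frame left or right."""
    out = {0}
    for x in boundaries[1:]:
        if random.random() < rate:
            x = min(max(1, x + random.choice([-1, 1])), n - 1)
        out.add(x)
    return sorted(out)

def genetic_segment(sim, pop_size=20, generations=50, num_segments=5):
    """Evolve a segmentation (list of segment start indices) of an n-frame stream."""
    n = len(sim)
    pop = [sorted({0} | set(random.sample(range(1, n), num_segments - 1)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda b: fitness(b, sim), reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)), n)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda b: fitness(b, sim))
```
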
Publication Details
  • In Japan Hardcopy 2000, the Annual Conference of the Imaging Society of Japan, June 12-14, 2000.
  • Jun 12, 2000
Publication Details
  • In Proceedings of Hypertext '00, ACM Press, pp. 244-245, 2000.
  • May 30, 2000

Abstract

We describe a way to make a hypermedia meeting record from multimedia meeting documents by automatically generating links through image matching. In particular, we look at video recordings and scanned paper handouts of presentation slides with ink annotations. The algorithm that we employ is the Discrete Cosine Transform (DCT). Interactions with multipath links and paper interfaces are discussed.
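
A sketch of matching a scanned slide to video frames by comparing low-frequency DCT coefficients; the signature size and the assumption that images are pre-scaled to a common resolution are illustrative, not the paper's exact matching procedure.

```python
import numpy as np
from scipy.fftpack import dct

def dct_signature(image, keep=8):
    """Low-frequency 2-D DCT coefficients of a grayscale image, as a match signature.

    Images are assumed to be grayscale numpy arrays already scaled to a common size.
    """
    coeffs = dct(dct(image.astype(float), axis=0, norm='ortho'), axis=1, norm='ortho')
    sig = coeffs[:keep, :keep].ravel()
    return sig / (np.linalg.norm(sig) + 1e-9)

def best_matching_frame(slide_image, video_frames, keep=8):
    """Index of the video frame whose DCT signature is closest to the scanned slide's."""
    target = dct_signature(slide_image, keep)
    distances = [np.linalg.norm(dct_signature(frame, keep) - target) for frame in video_frames]
    return int(np.argmin(distances))
```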

Hypertext Interaction Revisited

Publication Details
  • In Proceedings of Hypertext '00, ACM Press, pp. 171-179, 2000
  • May 30, 2000

Abstract

Much of hypertext narrative relies on links to shape a reader's interaction with the text. But links may be too limited to express ambiguity, imprecision, and entropy, or to admit new modes of participation short of full collaboration. We use an e-book form to explore the implications of freeform annotation-based interaction with hypertext narrative. Readers' marks on the text can be used to guide navigation, to create a persistent record of a reading, or to recombine textual elements as a means of creating a new narrative. In this paper, we describe how such an experimental capability was created on top of XLibris, a next-generation e-book, using Forward Anywhere as the hypernarrative. We work through a scenario of interaction and discuss the issues the work raises.
Publication Details
  • In RIAO'2000 Conference Proceedings, Content-Based Multimedia Information Access, C.I.D., pp. 637-648, 2000.
  • Apr 12, 2000

Abstract

We present an interactive system that allows a user to locate regions of video that are similar to a video query. Segments of video can thus be found simply by providing an example of the video of interest. The user selects a video segment for the query from either a static frame-based interface or a video player. A statistical model of the query is calculated on the fly and is used to find similar regions of video. The similarity measure is based on a Gaussian model of reduced frame image transform coefficients. Similarity in a single video is displayed in the Metadata Media Player. The player can be used to navigate through the video by jumping between regions of similarity. Similarity can be rapidly calculated for multiple video files as well. These results are displayed in MBase, a Web-based video browser that allows similarity in multiple video files to be visualized simultaneously.
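
A sketch of the on-the-fly statistical model: fit a diagonal Gaussian to the query segment's per-frame feature vectors and score every frame of the target video by log-likelihood. The feature extraction is assumed to have happened already, and the diagonal covariance and threshold are illustrative assumptions.

```python
import numpy as np

def fit_diagonal_gaussian(query_features):
    """Fit a diagonal Gaussian to the query segment's per-frame feature vectors.

    `query_features` is an (n_frames x d) array of reduced transform
    coefficients (the feature extraction itself is assumed here).
    """
    mean = query_features.mean(axis=0)
    var = query_features.var(axis=0) + 1e-6      # floor to avoid divide-by-zero
    return mean, var

def log_likelihood(features, mean, var):
    """Per-frame log-likelihood of `features` under the diagonal Gaussian."""
    diff = features - mean
    return -0.5 * ((diff ** 2 / var).sum(axis=1)
                   + np.log(2 * np.pi * var).sum())

def similar_regions(video_features, mean, var, threshold=-50.0):
    """Frames whose likelihood exceeds an (arbitrary, illustrative) threshold."""
    scores = log_likelihood(video_features, mean, var)
    return np.flatnonzero(scores > threshold), scores
```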

Anchored Conversations: Chatting in the Context of a Document

Publication Details
  • In CHI 2000 Conference Proceedings, ACM Press, pp. 454-461, 2000.
  • Mar 31, 2000

Abstract

This paper describes an application-independent tool called Anchored Conversations that brings together text-based conversations and documents. The design of Anchored Conversations is based on our observations of the use of documents and text chats in collaborative settings. We observed that chat spaces support work conversations, but they do not allow the close integration of conversations with work documents that can be seen when people are working together face-to-face. Anchored Conversations directly addresses this problem by allowing text chats to be anchored into documents. Anchored Conversations also facilitates document sharing; accepting an invitation to an anchored conversation results in the document being automatically uploaded. In addition, Anchored Conversations provides support for review, catch-up and asynchronous communications through a database. In this paper we describe motivating fieldwork, the design of Anchored Conversations, a scenario of use, and some preliminary results from a user study.
Publication Details
  • In CHI 2000 Conference Proceedings, ACM Press, pp. 185-192, 2000.
  • Mar 31, 2000

Abstract

This paper presents a method for generating compact pictorial summarizations of video. We developed a novel approach for selecting still images from a video suitable for summarizing the video and for providing entry points into it. Images are laid out in a compact, visually pleasing display reminiscent of a comic book or Japanese manga. Users can explore the video by interacting with the presented summary. Links from each keyframe start video playback and/or present additional detail. Captions can be added to presentation frames to include commentary or descriptions such as the minutes of a recorded meeting. We conducted a study to compare variants of our summarization technique. The study participants judged the manga summary to be significantly better than the other two variants with respect to suitability for summarization and navigation, as well as visual appeal.

Beyond Bits: The Future of Quantum Information Processing.

Publication Details
  • IEEE Computer, pp. 38-45, January 2000.
  • Feb 1, 2000

Abstract

Recently, physicists and computer scientists have realized that not only do our ideas about computing rest on only partly accurate principles, but they miss out on a whole class of computation. Quantum physics offers powerful methods of encoding and manipulating information that are not possible within a classical framework. The potential applications of these quantum information processing methods include provably secure key distribution for cryptography, rapid integer factoring, and quantum simulation.
1999
Publication Details
  • In Proceedings of GROUP '99 (Phoenix, AZ), ACM Press, 1999.
  • Nov 14, 1999

Abstract

The development of tools to support synchronous communications between non-collocated colleagues has received considerable attention in recent years. Much of the work has focused on increasing a sense of co-presence between interlocutors by supporting aspects of face-to-face conversations that go beyond mere words (e.g. gaze, postural shifts). In this regard, a design goal for many environments is the provision of as much media-richness as possible to support non-collocated communication. In this paper we present results from our most recent interviews studying the use of a text-based virtual environment to support work collaborations. We describe how such an environment, though lacking almost all the visual and auditory cues known to be important in face-to-face conversation, has played an important role in day-to-day communication. We offer a set of characteristics we feel are important to the success of this text-only tool and discuss issues emerging from its long-term use.
Publication Details
  • In Proceedings of ACM Multimedia '99, Orlando, Florida, November 1999.
  • Oct 30, 1999

Abstract

NoteLook is a client-server system designed and built to support multimedia note taking in meetings with digital video and ink. It is integrated into a conference room equipped with computer controllable video cameras, a video conference camera, and a large display rear video projector. The NoteLook client application runs on wireless pen-based notebook computers. Video channels containing images of the room activity and presentation material are transmitted by the NoteLook servers to the clients, and the images can be interactively and automatically incorporated into the note pages. Users can select channels, snap in large background images and sequences of thumbnails, and write freeform ink notes. A smart video source management component enables the capture of high quality images of the presentation material from a variety of sources. For accessing and browsing the notes and recorded video, NoteLook generates Web pages with links from the images and ink strokes correlated to the video.
Publication Details
  • In Proceedings ACM Multimedia, (Orlando, FL) ACM Press, pp. 383-392, 1999.
  • Oct 30, 1999

Abstract

This paper presents methods for automatically creating pictorial video summaries that resemble comic books. The relative importance of video segments is computed from their length and novelty. Image and audio analysis is used to automatically detect and emphasize meaningful events. Based on this importance measure, we choose relevant keyframes. Selected keyframes are sized by importance, and then efficiently packed into a pictorial summary. We present a quantitative measure of how well a summary captures the salient events in a video, and show how it can be used to improve our summaries. The result is a compact and visually pleasing summary that captures semantically important events, and is suitable for printing or Web access. Such a summary can be further enhanced by including text captions derived from OCR or other methods. We describe how the automatically generated summaries are used to simplify access to a large collection of videos.
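
A sketch of an importance measure computed from length and novelty, and of mapping scores to discrete keyframe sizes; the log weighting and three-size mapping are assumptions for illustration, not the exact formula in the paper.

```python
import math

def importance(segment_length, cluster_total_length, video_length):
    """Importance of a video segment from its length and novelty.

    A segment is important if it is long and if shots like it (its cluster)
    are rare in the video; this log-weighted form is illustrative and not
    necessarily the exact formula used in the paper.
    """
    novelty = math.log(video_length / cluster_total_length)
    return segment_length * novelty

def keyframe_size(score, scores, sizes=(1, 2, 3)):
    """Map an importance score (one of `scores`) to a discrete keyframe size in layout cells."""
    ranked = sorted(scores)
    rank = ranked.index(score) / max(len(ranked) - 1, 1)   # 0 = least important
    return sizes[min(int(rank * len(sizes)), len(sizes) - 1)]
```
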
Publication Details
  • In Proceedings of ACM Multimedia '99, pp. 77-80, Orlando, Florida, November 1999
  • Oct 30, 1999

Abstract

This paper presents a novel approach to visualizing the time structure of music and audio. The acoustic similarity between any two instants of an audio recording is calculated and displayed as a two-dimensional representation. Similar or repeating elements are visually distinct, allowing identification of structural and rhythmic characteristics. Visualization examples are presented for orchestral, jazz, and popular music. Applications include content-based analysis and segmentation, as well as tempo and structure extraction.
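
A minimal sketch of the underlying computation: reduce each audio window to a spectral feature vector and fill a matrix with the cosine similarity between every pair of windows. The window size and feature choice are assumptions; displaying the matrix as an image gives the visualization described.

```python
import numpy as np

def self_similarity_matrix(audio, window=1024):
    """Cosine self-similarity between spectral feature vectors of audio windows.

    `audio` is a 1-D array of samples. Each non-overlapping window is reduced
    to a log-magnitude spectrum; S[i, j] is the cosine similarity between
    windows i and j, so repeated or similar passages show up as bright
    off-diagonal stripes when S is displayed as an image.
    """
    n_windows = len(audio) // window
    frames = audio[: n_windows * window].reshape(n_windows, window)
    spectra = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))
    norms = np.linalg.norm(spectra, axis=1, keepdims=True) + 1e-9
    unit = spectra / norms
    return unit @ unit.T          # (n_windows x n_windows) similarity matrix
```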

Tools for Quantum Algorithms

Publication Details
  • Int. J. Mod. Phys. C, 10 (1999), pp. 1347-1362
  • Oct 29, 1999

Abstract

We present efficient implementations of a number of operations for quantum computers. These include controlled phase adjustments of the amplitudes in a superposition, permutations, approximations of transformations and generalizations of the phase adjustments to block matrix transformations. These operations generalize those used in proposed quantum search algorithms.
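
For reference, a controlled phase adjustment of the kind mentioned is commonly written in the standard form below (textbook notation, not reproduced from the paper).

```latex
% A single-qubit phase adjustment and its controlled version: the phase
% e^{i\phi} is applied only to the amplitude of the |11> component.
R(\phi) = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\phi} \end{pmatrix},
\qquad
\text{C-}R(\phi) = \begin{pmatrix}
  1 & 0 & 0 & 0 \\
  0 & 1 & 0 & 0 \\
  0 & 0 & 1 & 0 \\
  0 & 0 & 0 & e^{i\phi}
\end{pmatrix}.
```
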
Publication Details
  • In Proceedings of the Second International Workshop on Cooperative Buildings (CoBuild'99). Lecture Notes in Computer Science, Vol. 1670 Springer-Verlag, pp. 79-88, 1999.
  • Oct 1, 1999

Abstract

We describe a media enriched conference room designed for capturing meetings. Our goal is to do this in a flexible, seamless, and unobtrusive manner in a public conference room that is used for everyday work. Room activity is captured by computer controllable video cameras, video conference cameras, and ceiling microphones. Presentation material displayed on a large screen rear video projector is captured by a smart video source management component that automatically locates the highest fidelity image source. Wireless pen-based notebook computers are used to take notes, which provide indexes to the captured meeting. Images can be interactively and automatically incorporated into the notes. Captured meetings may be browsed on the Web with links to recorded video.
Publication Details
  • In Human-Computer Interaction INTERACT '99, IOS Press, pp. 458-465, 1999.
  • Aug 30, 1999

Abstract

In our Portholes research, we found that users needed to have a sense of being in public and to know who can see them (audience) and who is currently looking at them (lookback). Two redesigns of the Portholes display present a 3D theater view of the audience. Different sections display core team members, non-core team members, and lookback. An experiment determined that people have strong preferences about audience information and how it should be displayed. Layout preferences vary, but unfolding techniques and cluster analysis reveal that these perspectives fall into four groups of similar preferences.
Publication Details
  • In Human-Computer Interaction INTERACT '99, IOS Press, pp. 205-212, 1999.
  • Aug 30, 1999

Abstract

When reviewing collections of video such as recorded meetings or presentations, users are often interested only in an overview or short segments of these documents. We present techniques that use automatic feature analysis, such as slide detection and applause detection, to help locate the desired video and to navigate to regions of interest within it. We built a web-based interface that graphically presents information about the contents of each video in a collection, such as its keyframes and the distribution of a particular feature over time. A media player is tightly integrated with the web interface. It supports navigation within a selected file by visualizing confidence scores for the presence of features and by using them as index points. We conducted a user study to refine the usability of these tools.
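
A sketch of turning a per-second confidence curve for a detected feature into navigation index points; the threshold and minimum run length are arbitrary assumptions.

```python
def index_points(confidence, threshold=0.5, min_run=3):
    """Turn a per-second confidence curve for a feature (e.g. slide or applause
    detection) into navigation index points: the start time of every run that
    stays above `threshold` for at least `min_run` seconds."""
    points, run_start = [], None
    for t, score in enumerate(confidence):
        if score >= threshold:
            if run_start is None:
                run_start = t
        else:
            if run_start is not None and t - run_start >= min_run:
                points.append(run_start)
            run_start = None
    if run_start is not None and len(confidence) - run_start >= min_run:
        points.append(run_start)
    return points

# Example: jump points where the 'slide visible' detector is confident.
print(index_points([0.1, 0.2, 0.8, 0.9, 0.9, 0.3, 0.7, 0.8, 0.9, 0.9]))  # [2, 6]
```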

From Reading to Retrieval: Freeform Ink Annotations as Queries

Publication Details
  • In Proceedings of ACM SIGIR 99, ACM Press, pp. 19-25, 1999.
  • Aug 15, 1999

Abstract

User interfaces for digital libraries tend to focus on retrieval: users retrieve documents online, but then print them out and work with them on paper. One reason for printing documents is to annotate them with freeform ink while reading. Annotation can help readers to understand documents and to make them their own. In addition, annotation can reveal readers' interests with respect to a particular document. In particular, it is possible to construct full-text queries based on annotated passages of documents. We describe an experiment that tested the effectiveness of such queries, as compared to relevance feedback query techniques. For a set of TREC topics and documents, queries derived from annotated passages produced significantly better results than queries derived from subjects' judgments of relevance.
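
A toy sketch of constructing a full-text query from annotated passages and ranking documents against it; the whitespace tokenization and TF weighting are assumptions, not the retrieval engine used in the experiment.

```python
from collections import Counter
import math

def query_from_annotations(annotated_passages):
    """Build a term-weight query from the passages a reader marked with ink."""
    terms = Counter()
    for passage in annotated_passages:
        terms.update(passage.lower().split())
    return terms

def score(document_text, query):
    """Simple TF-based cosine score of a document against the query."""
    doc = Counter(document_text.lower().split())
    dot = sum(query[t] * doc[t] for t in query)
    norm = math.sqrt(sum(v * v for v in query.values())) * \
           math.sqrt(sum(v * v for v in doc.values())) or 1.0
    return dot / norm

def rank(documents, annotated_passages):
    """Rank a dict of {doc_id: text} by similarity to the annotation-derived query."""
    q = query_from_annotations(annotated_passages)
    return sorted(documents, key=lambda d: score(documents[d], q), reverse=True)
```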

Introducing a Digital Library Reading Appliance into a Reading Group.

Publication Details
  • In Proceedings of ACM Digital Libraries 99, ACM Press, pp. 77-84, 1999.
  • Aug 11, 1999

Abstract

How will we read digital library materials? This paper describes the reading practices of an on-going reading group, and how these practices changed when we introduced XLibris, a digital library reading appliance that uses a pen tablet computer to provide a paper-like interface. We interviewed group members about their reading practices, observed their meetings, and analyzed their annotations, both when they read a paper document and when they read using XLibris. We use these data to characterize their analytic reading, reference use, and annotation practices. We also describe the use of the Reader's Notebook, a list of clippings that XLibris computes from a reader's annotations. Implications for digital libraries stem from our findings on reading and mobility, the complexity of analytic reading, the social nature of reference following, and the unselfconscious nature of readers' annotations.