Publications

From before 2005

2001

m-Links: An Infrastructure for Very Small Internet Devices

Publication Details
  • The 7th Annual International Conference on Mobile Computing and Networking (MOBICOM 2001), Rome, Italy, July 16-21, 2001, ACM Press, 2001, pp. 122-131.
  • Jul 16, 2001

Abstract

In this paper we describe the Mobile Link (m-Links) infrastructure for utilizing existing World Wide Web content and services on wireless phones and other very small Internet terminals. Very small devices, typically with 3-20 lines of text, provide portability and other functionality while sacrificing usability as Internet terminals. In order to provide access on such limited hardware we propose a small-device web navigation model that is more appropriate than the desktop computer's web browsing model. We introduce a middleware proxy, the Navigation Engine, to facilitate the navigation model by concisely displaying the Web's link (i.e., URL) structure. Because not all Web information is appropriately "linked," the Navigation Engine incorporates data detectors to extract bits of useful information such as phone numbers and addresses. In order to maximize program-data composability, multiple network-based services (similar to browser plug-ins) are keyed to a link's attributes such as its MIME type. We have built this system with an emphasis on user extensibility, and we describe the design and implementation as well as a basic set of middleware services that we have found to be particularly important.
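The data-detector idea described above can be illustrated with a minimal sketch: a pattern-based extractor that pulls phone-number-like strings out of otherwise unlinked page text. The pattern and function names here are hypothetical; the paper does not describe the m-Links detectors' actual implementation.

```python
import re

# Hypothetical sketch of a phone-number data detector, in the spirit of the
# m-Links Navigation Engine; the real system's patterns and API may differ.
US_PHONE = re.compile(r"\(?\b\d{3}\)?[-. ]\d{3}[-.]\d{4}\b")

def detect_phone_numbers(text):
    """Return phone-number-like substrings found in free text."""
    return US_PHONE.findall(text)

page = "Call our Rome office at (555) 123-4567 or fax 555-987-6543."
matches = detect_phone_numbers(page)
```

Each match could then be offered to the user as a synthetic link, keyed (as the abstract notes) to services appropriate for its detected type.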
Publication Details
  • Proceedings of the INNS-IEEE International Joint Conference on Neural Networks, vol. 3, pp. 2176-2181, Washington, DC, July 14-19, 2001.
  • Jul 14, 2001

Abstract

The goal of this project is to teach a computer-robot system to understand human speech through natural human-computer interaction. To achieve this goal, we develop an interactive and incremental learning algorithm based on entropy-guided learning vector quantisation (LVQ) and memory association. Supported by this algorithm, the robot has the potential to learn unlimited sounds progressively. Experimental results of a multilingual short-speech learning task are given after the presentation of the learning system. Further investigation of this learning system will include human-computer interactions that involve more modalities, and applications that use the proposed idea to train home appliances.
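The LVQ component mentioned above can be sketched with the classic LVQ1 update rule: move the nearest prototype toward a labeled sample when their labels agree, and away when they disagree. This is a generic illustration only; the paper's entropy-guided variant adds selection criteria not modeled here, and all names below are hypothetical.

```python
import math

def lvq1_step(prototypes, labels, x, y, alpha=0.1):
    """One LVQ1 update: find the prototype nearest to sample x and move it
    toward x if its label matches y, away from x otherwise.
    (Illustrative sketch; the entropy-guided LVQ in the paper differs.)"""
    # winning prototype = nearest by Euclidean distance
    win = min(range(len(prototypes)),
              key=lambda i: math.dist(prototypes[i], x))
    sign = 1.0 if labels[win] == y else -1.0
    prototypes[win] = [w + sign * alpha * (xi - w)
                       for w, xi in zip(prototypes[win], x)]
    return win

# toy usage: two prototypes, one per class
protos = [[0.0, 0.0], [1.0, 1.0]]
labels = ["a", "b"]
win = lvq1_step(protos, labels, [0.9, 1.0], "b")
```

Incremental learning fits naturally here: each new sound sample triggers one such update, so prototypes can be added and refined progressively rather than retrained in batch.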
Publication Details
  • The Eighth IFIP TC.13 Conference On Human-Computer Interaction (INTERACT 2001). Tokyo, Japan, July 9-13, 2001.
  • Jul 9, 2001

Abstract

The two most commonly used techniques for evaluating the fit between application design and use - namely, usability testing and beta testing with user feedback - suffer from a number of limitations that restrict evaluation scale (in the case of usability tests) and data quality (in the case of beta tests). They also fail to provide developers with an adequate basis for: (1) assessing the impact of suspected problems on users at large, and (2) deciding where to focus development and evaluation resources to maximize the benefit for users at large. This paper describes an agent-based approach for collecting usage data and user feedback over the Internet that addresses these limitations to provide developers with a complementary source of usage- and usability-related information. Contributions include: a theory to motivate and guide data collection, an architecture capable of supporting very large scale data collection, and real-world experience suggesting the proposed approach is complementary to existing practice.
Publication Details
  • In Proceedings of Human-Computer Interaction (INTERACT '01), IOS Press, Tokyo, Japan, pp. 464-471
  • Jul 9, 2001

Abstract

Hitchcock is a system to simplify the process of editing video. Its key features are the use of automatic analysis to find the best quality video clips, an algorithm to cluster those clips into meaningful piles, and an intuitive user interface for combining the desired clips into a final video. We conducted a user study to determine how the automatic clip creation and pile navigation support users in the editing process. The study showed that users liked the ease-of-use afforded by automation, but occasionally had problems navigating and overriding the automated editing decisions. These findings demonstrate the need for a proper balance between automation and user control. Thus, we built a new version of Hitchcock that retains the automatic editing features, but provides additional controls for navigation and for allowing users to modify the system decisions.

Designing e-Books for Legal Research

Publication Details
  • In Proceedings of JCDL 2001 (Roanoke, VA, June 23-27). ACM Press. pp. 41-48.
  • Jun 23, 2001

Abstract

In this paper we report the findings from a field study of legal research in a first-tier law school and on the resulting redesign of XLibris, a next-generation e-book. We first characterize a work setting in which we expected an e-book to be a useful interface for reading and otherwise using a mix of physical and digital library materials, and explore what kinds of reading-related functionality would bring value to this setting. We do this by describing important aspects of legal research in a heterogeneous information environment, including mobility, reading, annotation, link following and writing practices, and their general implications for design. We then discuss how our work with a user community and an evolving e-book prototype allowed us to examine tandem issues of usability and utility, and to redesign an existing e-book user interface to suit the needs of law students. The study caused us to move away from the notion of a stand-alone reading device and toward the concept of a document laptop, a platform that would provide wireless access to information resources, as well as support a fuller spectrum of reading-related activities.
Publication Details
  • Proceedings of ACM CHI2001, vol. 3, pp. 442-449, Seattle, Washington, USA, March 31 - April 5, 2001.
  • Apr 5, 2001

Abstract

Given rapid improvements in network infrastructure and streaming-media technologies, a large number of corporations and universities are recording lectures and making them available online for anytime, anywhere access. However, producing high-quality lecture videos is still labor intensive and expensive. Fortunately, recent technology advances are making it feasible to build automated camera management systems to capture lectures. In this paper we report on our design, implementation and study of such a system. Compared to previous work, which has tended to be technology centric, we started with interviews with professional video producers and used their knowledge and expertise to create video production rules. We then targeted technology components that allowed us to implement a substantial portion of these rules, including the design of a virtual video director. The system's performance was compared to that of a human operator via a user study. Results suggest that our system's quality is close to that of a human-controlled system. In fact, most remote audience members could not tell if the video was produced by a computer or a person.

Quiet Calls: Talking Silently on Mobile Phones

Publication Details
  • In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 174-181, ACM Press, March 31-April 5, 2001, Seattle, WA.
  • Mar 30, 2001
Publication Details
  • In Proceedings of the Thirty-fourth Annual Hawaii International Conference on System Sciences (HICSS), Big Island, Hawaii. January 7-12, 2001.
  • Feb 7, 2001

Abstract

This paper describes a new system for panoramic two-way video communication. Digitally combining images from an array of inexpensive, off-the-shelf video cameras results in a wide-field panoramic camera. This system can aid distance learning in several ways, both by presenting a better view of the instructor and teaching materials to the students and by enabling better audience feedback to the instructor. Because the camera is fixed with respect to the background, simple motion analysis can be used to track objects and people of interest. Electronically selecting a region of the panorama results in a rapidly steerable "virtual camera." We present system details and a prototype distance-learning scenario using multiple panoramic cameras.
Publication Details
  • WebNet 2001 World Conference on the WWW and Internet, Orlando, FL
  • Jan 17, 2001

Abstract

As more information is made available online, users collect information in personal information spaces such as bookmarks and emails. While most users feel that organizing these collections is crucial to improving access, studies have shown that this activity is time consuming and cognitively demanding. Automatic classification has been applied to the problem, but because it relies on the full text of the documents, it does not generate personalized classifications. Our approach is to give users the ability to annotate their documents as they first access them. This annotation tool is unobtrusive and welcomed by most users, who generally miss this facility when dealing with digital documents. Our experiments show that these annotations can be used to generate personalized classifications of annotated Web pages.

Description and Narrative in Hypervideo

Publication Details
  • Proceedings of the Thirty-Fourth Annual Hawaii International Conference on System Sciences
  • Jan 3, 2001

Abstract

While hypertext was originally conceived for the management of scientific and technical information, it has been embraced with great enthusiasm by several members of the literary community for the promises it offers towards new approaches to narrative. Experiments with hypertext-based interactive narrative were originally based solely on verbal text but have more recently extended to include digital video artifacts. The most accomplished of these experiments, HyperCafe, provided new insights into the nature of narrative and how it may be presented; but it also offered an opportunity to reconsider other text types. This paper is an investigation of the application of an approach similar to HyperCafe to a descriptive text. We discuss how the approach serves the needs of description and illustrate the discussion with a concrete example. We then conclude by considering the extent to which our experiences with description may be applied to our continuing interest in narrative.
2000
Publication Details
  • ACM Computing Surveys, Vol. 32 No. 4, December 2000.
  • Dec 1, 2000

Abstract

Modern window-based user interface systems generate user interface events as natural products of their normal operation. Because such events can be automatically captured and because they indicate user behavior with respect to an application's user interface, they have long been regarded as a potentially fruitful source of information regarding application usage and usability. However, because user interface events are typically voluminous and rich in detail, automated support is generally required to extract information at a level of abstraction that is useful to investigators interested in analyzing application usage or evaluating usability. This survey examines computer-aided techniques used by HCI practitioners and researchers to extract usability-related information from user interface events. A framework is presented to help HCI practitioners and researchers categorize and compare the approaches that have been, or might fruitfully be, applied to this problem. Because many of the techniques in the research literature have not been evaluated in practice, this survey provides a conceptual evaluation to help identify some of the relative merits and drawbacks of the various classes of approaches. Ideas for future research in this area are also presented. This survey addresses the following questions: How might user interface events be used in evaluating usability? How are user interface events related to other forms of usability data? What are the key challenges faced by investigators wishing to exploit this data? What approaches have been brought to bear on this problem and how do they compare to one another? What are some of the important open research questions in this area?
Publication Details
  • Multimedia Modeling: Modeling Multimedia Information and Systems, Nagano, Japan
  • Nov 12, 2000

Abstract

While hypermedia is usually presented as a way to offer content in a nonlinear manner, hypermedia structure tends to reinforce the assumption that reading is basically a linear process. Link structures provide a means by which the reader may choose different paths to traverse; but each of these paths is fundamentally linear, revealed through either a block of text or a well-defined chain of links. While there are experiences that get beyond such linear constraints, such as driving a car, it is very hard to capture this kind of non-linearity, characterized by multiple sources of stimuli competing for attention, in a hypermedia document. This paper presents a multi-channel document infrastructure that provides a means by which all such sources of attention are presented on a single "page" (i.e., a display with which the reader interacts) and move between background and foreground in response to the activities of the reader. The infrastructure thus controls the presentation of content with respect to four dimensions: visual, audio, interaction support, and rhythm.
Publication Details
  • In Proceedings of UIST '00, ACM Press, pp. 81-89, 2000.
  • Nov 4, 2000

Abstract

Hitchcock is a system that allows users to easily create custom videos from raw video shot with a standard video camera. In contrast to other video editing systems, Hitchcock uses automatic analysis to determine the suitability of portions of the raw video. Unsuitable video typically has fast or erratic camera motion. Hitchcock first analyzes video to identify the type and amount of camera motion: fast pan, slow zoom, etc. Based on this analysis, a numerical "unsuitability" score is computed for each frame of the video. Combined with standard editing rules, this score is used to identify clips for inclusion in the final video and to select their start and end points. To create a custom video, the user drags keyframes corresponding to the desired clips into a storyboard. Users can lengthen or shorten the clip without specifying the start and end frames explicitly. Clip lengths are balanced automatically using a spring-based algorithm.
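The spring-based balancing mentioned at the end of the abstract can be sketched as a constrained optimization: treat each clip as a spring with a rest length (its preferred duration) and a stiffness, and solve for lengths that sum to the storyboard's total. The formulation and names below are an assumption for illustration; the abstract does not give Hitchcock's exact algorithm, which also enforces per-clip bounds omitted here.

```python
def balance_clip_lengths(rest, stiffness, total):
    """Distribute `total` seconds across clips modeled as springs.
    Minimizing sum_i k_i * (l_i - r_i)**2 subject to sum(l_i) == total
    gives l_i = r_i + lam / (2 * k_i) for a shared Lagrange multiplier lam.
    (A sketch of the general spring idea, not Hitchcock's actual code.)"""
    slack = total - sum(rest)            # seconds to absorb or release
    inv = [1.0 / (2.0 * k) for k in stiffness]
    lam = slack / sum(inv)               # shared multiplier
    return [r + lam * v for r, v in zip(rest, inv)]

# two clips preferring 4s and 6s, equal stiffness, 12s storyboard
lengths = balance_clip_lengths([4.0, 6.0], [1.0, 1.0], 12.0)
```

With equal stiffness the slack is split evenly; a stiffer clip (higher k) would deviate less from its preferred length, which is the intuitive behavior of coupled springs.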
Publication Details
  • In Proceedings of the International Symposium on Music Information Retrieval, in press.
  • Oct 23, 2000

Abstract

We introduce an audio retrieval-by-example system for orchestral music. Unlike many other approaches, this system is based on analysis of the audio waveform and does not rely on symbolic or MIDI representations. ARTHUR retrieves audio on the basis of long-term structure, specifically the variation of soft and louder passages. The long-term structure is determined from the envelope of audio energy versus time in one or more frequency bands. Similarity between energy profiles is calculated using dynamic programming. Given an example audio document, other documents in a collection can be ranked by similarity of their energy profiles. Experiments are presented for a modest corpus that demonstrate excellent results in retrieving different performances of the same orchestral work, given an example performance or short excerpt as a query.

An Introduction to Quantum Computing for Non-Physicists

Publication Details
  • ACM Computing Surveys, Vol. 32(3), pp. 300-335
  • Sep 1, 2000

Abstract

Richard Feynman's observation that quantum mechanical effects could not be simulated efficiently on a computer led to speculation that computation in general could be done more efficiently if it used quantum effects. This speculation appeared justified when Peter Shor described a polynomial time quantum algorithm for factoring integers. In quantum systems, the computational space increases exponentially with the size of the system, which enables exponential parallelism. This parallelism could lead to exponentially faster quantum algorithms than are possible classically. The catch is that accessing the results, which requires measurement, proves tricky and requires new non-traditional programming techniques. The aim of this paper is to guide computer scientists and other non-physicists through the conceptual and notational barriers that separate quantum computing from conventional computing. We introduce basic principles of quantum mechanics to explain where the power of quantum computers comes from and why it is difficult to harness. We describe quantum cryptography, teleportation, and dense coding. Various approaches to harnessing the power of quantum parallelism are explained, including Shor's algorithm, Grover's algorithm, and Hogg's algorithms. We conclude with a discussion of quantum error correction.
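The exponential state space the abstract refers to is easy to make concrete: an n-qubit system is described by 2**n complex amplitudes, and a single gate transforms that whole vector. A toy classical simulation (which is exactly what Feynman observed becomes inefficient as n grows) of a Hadamard gate on qubit 0 looks like this; the function names are illustrative, not from the survey.

```python
import math

def hadamard_on_qubit0(state):
    """Apply a Hadamard gate to qubit 0 (least significant bit) of an
    n-qubit state vector of 2**n amplitudes. A toy classical simulator,
    illustrating the exponential state space, not an efficient computation."""
    h = 1.0 / math.sqrt(2.0)
    out = list(state)
    # amplitudes at indices 2k and 2k+1 differ only in qubit 0
    for i in range(0, len(state), 2):
        a, b = state[i], state[i + 1]
        out[i] = h * (a + b)
        out[i + 1] = h * (a - b)
    return out

# |00> on 2 qubits: 4 amplitudes, index = bit pattern
state = [1.0, 0.0, 0.0, 0.0]
state = hadamard_on_qubit0(state)  # (|00> + |01>) / sqrt(2)
```

The catch the abstract mentions shows up here too: measuring this state collapses it to a single basis state, so the exponentially many amplitudes are never directly readable, which is why algorithms like Shor's and Grover's must arrange for interference before measurement.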