Publications

FXPAL publishes in top scientific conferences and journals.

2009
Publication Details
  • IJCSI International Journal of Computer Science Issues. Vol. 1.
  • Oct 15, 2009

Abstract

Reading documents on mobile devices is challenging. Not only are screens small and difficult to read, but also navigating an environment using limited visual attention can be difficult and potentially dangerous. Reading content aloud using text-to-speech (TTS) processing can mitigate these problems, but only for content that does not include rich visual information. In this paper, we introduce a new technique, SeeReader, that combines TTS with automatic content recognition and document presentation control that allows users to listen to documents while also being notified of important visual content. Together, these services allow users to read rich documents on mobile devices while maintaining awareness of their visual environment.
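The core interaction described above can be sketched in a few lines. This is an illustrative toy, not SeeReader's actual implementation: it walks a document's reading order, hands text spans to a TTS callback, and fires a notification callback when a visual element is reached. All names are assumptions for illustration.

```python
# Toy sketch of the SeeReader idea: speak text in reading order, and
# notify the user when important visual content (a figure) comes up.
def read_aloud(elements, speak, notify):
    """elements: list of (kind, payload) tuples in reading order.
    kind is 'text' or 'figure'; speak/notify are device callbacks."""
    for kind, payload in elements:
        if kind == "text":
            speak(payload)                # hand the text to the TTS engine
        else:
            notify(f"Figure: {payload}")  # alert the user to visual content

# Usage with stub callbacks that simply record what happened:
log = []
read_aloud(
    [("text", "Results are shown below."), ("figure", "bar chart of results")],
    speak=lambda t: log.append(("spoke", t)),
    notify=lambda m: log.append(("notified", m)),
)
```

In a real system the `notify` callback would pause speech and show the figure on screen; here it only records the event.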
Publication Details
  • Book chapter in "Designing User Friendly Augmented Work Environments", Series: Computer Supported Cooperative Work, Lahlou, Saadi (Ed.), 2009, approx. 340 p., 117 illus., hardcover
  • Sep 30, 2009

Abstract

The Usable Smart Environment project (USE) aims at designing easy-to-use, highly functional next-generation conference rooms. Our first design prototype focuses on creating a "no wizards" room for an American executive; that is, a room the executive could walk into and use by himself, without help from a technologist. A key idea in the USE framework is that customization is one of the best ways to create a smooth user experience. Since the system needs to fit both the personal leadership style of the executive and the corporation's meeting culture, we began the design process by exploring the workflow in and around meetings attended by the executive. Based on our workflow analysis and the scenarios we developed from it, USE developed a flexible, extensible architecture specifically designed to enhance ease of use in smart environment technologies. The architecture allows customization and personalization of smart environments for particular people and groups, types of work, and specific physical spaces. The first USE room was designed for FXPAL's executive "Ian" and installed in Niji, a small executive conference room at FXPAL. Niji currently contains two large interactive whiteboards for projection of presentation material, for annotations using a digital whiteboard, or for teleconferencing; a Tandberg teleconferencing system; an RFID authentication plus biometric identification system; printing via network; a PDA-based simple controller; and a tabletop touch-screen console. The console hosts the USE room control interface, which controls and switches between all of the equipment mentioned above.
Publication Details
  • ACM Mindtrek 2009
  • Sep 30, 2009

Abstract


Most mobile navigation systems focus on answering the question, "I know where I want to go, now can you show me exactly how to get there?" While this approach works well for many tasks, it is not as useful for unconstrained situations in which user goals and spatial landscapes are more fluid, such as festivals or conferences. In this paper we describe the design and iteration of the Kartta system, which we developed to answer a slightly different question: "What are the most interesting areas here and how do I find them?"

Publication Details
  • Mobile HCI 2009 (poster)
  • Sep 15, 2009

Abstract

Most mobile navigation systems focus on answering the question, "I know where I want to go, now can you show me exactly how to get there?" While this approach works well for many tasks, it is not as useful for unconstrained situations in which user goals and spatial landscapes are more fluid, such as festivals or conferences. In this paper we describe the design and iteration of the Kartta system, which we developed to answer a slightly different question: "What are the most interesting areas here and how do I find them?"
Publication Details
  • Book chapter in "Understanding the New Generation Office: Collective Intelligence of 100 Specialists" (book project in Japan, by New Era Office Research Center, Tokyo)
  • Aug 18, 2009

Abstract


A personal interface for information mash-up: exploring worlds both physical and virtual


This is a Big Idea piece for a collective intelligence book project by the New Era Office Research Center, Tokyo. It was written at the invitation of FX colleague Koushi Kawamoto. The project asks the same four questions of 100 specialists about an idea for a next-generation workplace:
  1. Want: what do I want to be able to do?
  2. Should: what should a system to support this "want" be able to do?
  3. Create: imagine what an instance of this idea might be.
  4. Can: how could this instance be realized in reality?

WANT: In my ideal work environment, the data I need on everything and everyone should be available at my fingertips, all the time, in many configurations that I can mix and match to suit the needs of any task. This includes things like:
  • documents of all types
  • people's status, tasks, and availability
  • audio, video, mobile, and virtual world communication channels
  • links to the physical world as appropriate, for example sensors delivering factory data, or the state of the machines I use daily in the workplace (printers, my PC, conference room systems), or awareness data about my colleagues.

CAN: How can we approach this problem? Let's consider the creation of a personal interface or instrument for information mashup, capable of interacting with complex data structures, for tuning smart environments, and for exploring worlds both physical and virtual, in business, social, and personal realms. Like any interactive system, this idea has two parts: human-facing and system-facing. These can be called Interstitia I (extending human interactivity) and Interstitia II (enabling smart environments).
Publication Details
  • Presentation at SIGGRAPH 2009, New Orleans, LA. ACM.
  • Aug 3, 2009

Abstract

FXPAL, a research lab in Silicon Valley, and TCHO, a chocolate manufacturer in San Francisco, have been collaborating on exploring emerging technologies for industry. The two companies seek ways to bring people closer to the products they consume, clarifying end-to-end production processes with technologies like sensor networks for fine-grained monitoring and control, mobile process control, and real/virtual mashups using virtual and augmented realities. This work lies within and extends the area of research called mixed- or cross-reality.
Publication Details
  • IEEE Pervasive Computing July-August 2009 (Journal, Works in Progress section)
  • Jul 18, 2009

Abstract

FXPAL, a research lab in Silicon Valley, and TCHO, a chocolate manufacturer in San Francisco, have been collaborating on exploring emerging technologies for industry. The two companies seek ways to bring people closer to the products they consume, clarifying end-to-end production processes with technologies like sensor networks for fine-grained monitoring and control, mobile process control, and real/virtual mashups using virtual and augmented realities.

Interactive Models from Images of a Static Scene

Publication Details
  • Computer Graphics and Virtual Reality (CGVR '09)
  • Jul 13, 2009

Abstract

FXPAL's Pantheia system enables users to create virtual models by 'marking up' a physical space with pre-printed visual markers. The meanings associated with the markers come from a markup language that enables the system to create models from a relatively sparse set of markers. This paper describes extensions to our markup language and system that support the creation of interactive virtual objects. Users place markers to define components such as doors and drawers with which an end user of the model can interact. Other interactive elements, such as controls for color changes or lighting choices, are also supported. Pantheia produced a model of a room with hinged doors, a cabinet with drawers, doors, and color options, and a railroad track.
Publication Details
  • 2009 IEEE International Conference on Multimedia and Expo (ICME)
  • Jun 30, 2009

Abstract


This paper presents a tool and a novel Fast Invariant Transform (FIT) algorithm for language-independent e-document access. The tool enables a person to access an e-document through an informal camera capture of a document hardcopy. It can save people from remembering and exploring numerous directories and file names, or even going through many pages and paragraphs in one document. It can also facilitate people's manipulation of a document or people's interactions through documents. Additionally, the algorithm is useful for binding multimedia data to language-independent paper documents. Our document recognition algorithm is inspired by the widely known SIFT descriptor [4] but can be computed much more efficiently for both descriptor construction and search. It also uses much less storage space than the SIFT approach. By testing our algorithm with randomly scaled and rotated document pages, we achieve a 99.73% page recognition rate on the 2188-page ICME06 proceedings and a 99.9% page recognition rate on a 504-page Japanese math book.

Image-based Lighting Adjustment Method for Browsing Object Images

Publication Details
  • 2009 IEEE International Conference on Multimedia and Expo (ICME)
  • Jun 30, 2009

Abstract

In this paper, we describe an automatic lighting adjustment method for browsing object images. From a set of images of an object, taken under different lighting conditions, we generate two types of illuminated images: a textural image which eliminates unwanted specular reflections of the object, and a highlight image in which specularities of the object are highly preserved. Our user interface allows viewers to digitally zoom into any region of the image, and the lighting adjusted images are automatically generated for the selected region and displayed. Switching between the textural and the highlight images helps viewers to understand characteristics of the object surface.
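A crude stand-in for the two renderings described above can be built with per-pixel reductions over an aligned image stack. This sketch is an assumption for illustration, not the paper's method: since specular highlights appear in only a few lighting conditions, a per-pixel minimum suppresses them (textural image), while a per-pixel maximum preserves them (highlight image).

```python
# Illustrative sketch: derive specular-free and specular-preserving views
# from a stack of aligned grayscale images under varying lighting.
import numpy as np

def textural_and_highlight(stack):
    """stack: array of shape (n_images, H, W)."""
    textural = stack.min(axis=0)   # specularities appear in few images -> removed
    highlight = stack.max(axis=0)  # keep the brightest response per pixel
    return textural, highlight

stack = np.array([
    [[10, 200], [10, 10]],   # image with a specular spot at pixel (0, 1)
    [[10, 10], [10, 10]],    # same scene lit without that specularity
])
tex, hi = textural_and_highlight(stack)
```

The actual paper likely uses a more principled reflection model; the min/max reductions only convey the intuition of separating diffuse texture from specular highlights.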

WebNC: efficient sharing of web applications

Publication Details
  • Hypertext 2009
  • Jun 29, 2009

Abstract

WebNC is a system for efficiently sharing, retrieving and viewing web applications. Unlike existing screencasting and screensharing tools, WebNC is optimized to work with web pages where a lot of scrolling happens. WebNC uses a tile-based encoding to capture, transmit and deliver web applications, and relies only on dynamic HTML and JavaScript. The resulting webcasts require very little bandwidth and are viewable on any modern web browser including Firefox and Internet Explorer as well as browsers on the iPhone and Android platforms.
Publication Details
  • Journal article in Artificial Intelligence for Engineering Design, Analysis and Manufacturing (2009), 23, 263-274. Printed in the USA. 2009 Cambridge University Press.
  • Jun 17, 2009

Abstract

Modern design embraces digital augmentation, especially in the interplay of digital media content and the physical dispersion and handling of information. Based on the observation that small paper memos with sticky backs (such as Post-Its ™) are a powerful and frequently used design tool, we have created Post-Bits, a new interface device with a physical embodiment that can be handled as naturally as paper sticky notes by designers, yet add digital information affordances as well. A Post-Bit is a design prototype of a small electronic paper device for handling multimedia content, with interaction control and display in one thin flexible sheet. Tangible properties of paper such as flipping, flexing, scattering, and rubbing are mapped to controlling aspects of the multimedia content such as scrubbing, sorting, or up- or downloading dynamic media (images, video, text). In this paper we discuss both the design process involved in building a prototype of a tangible interface using new technologies, and how the use of Post-Bits as a tangible design tool can impact two common design tasks: design ideation or brainstorming, and storyboarding for interactive systems or devices.
Publication Details
  • Immerscom 2009
  • May 27, 2009

Abstract

We describe Pantheia, a system that constructs virtual models of real spaces from collections of images, through the use of visual markers that guide and constrain model construction. To create a model, users simply "mark up" the real world scene by placing pre-printed markers that describe scene elements or impose semantic constraints. Users then collect still images or video of the scene. From this input, Pantheia automatically and quickly produces a model. The Pantheia system was used to produce models of two rooms that demonstrate the effectiveness of the approach.
Publication Details
  • Pervasive 2009
  • May 11, 2009

Abstract

Recorded presentations are difficult to watch on a mobile phone because of the small screen, and even more challenging when the user is traveling or commuting. This demo shows an application designed for viewing presentations in a mobile situation, and describes the design process that involved on-site observation and informal user testing at our lab. The system generates a user-controllable movie by capturing a slide presentation, extracting active regions of interest using cues from the presenter, and creating pan-and-zoom effects to direct the active regions within a small screen. During playback, the user can simply watch the movie in automatic mode using a minimal amount of effort to operate the application. When more flexible control is needed, the user can switch into manual mode to temporarily focus on specific regions of interest.
Publication Details
  • ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 5, Issue 2
  • May 1, 2009

Abstract

Hyper-Hitchcock consists of three components for creating and viewing a form of interactive video called detail-on-demand video: a hypervideo editor, a hypervideo player, and algorithms for automatically generating hypervideo summaries. Detail-on-demand video is a form of hypervideo that supports one hyperlink at a time for navigating between video sequences. The Hyper-Hitchcock editor enables authoring of detail-on-demand video without programming and uses video processing to aid in the authoring process. The Hyper-Hitchcock player uses labels and keyframes to support navigation through and back hyperlinks. Hyper-Hitchcock includes techniques for automatically generating hypervideo summaries of one or more videos that take the form of multiple linear summaries of different lengths with links from the shorter to the longer summaries. User studies on authoring and viewing provided insight into the various roles of links in hypervideo and found that player interface design greatly affects people's understanding of hypervideo structure and the video they access.
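The "one hyperlink at a time" navigation model lends itself to a simple data structure: each segment carries at most one outgoing link, and the player keeps a return stack so viewers can follow a link and come back. The sketch below is an illustration of that model with assumed names, not Hyper-Hitchcock's code.

```python
# Toy model of detail-on-demand navigation: at most one link per segment,
# plus a return stack for navigating back through followed links.
class Player:
    def __init__(self, links):
        self.links = links   # segment name -> linked segment (at most one)
        self.stack = []      # where to return after following a link
        self.current = None

    def play(self, segment):
        self.current = segment

    def follow_link(self):
        if self.current in self.links:
            self.stack.append(self.current)       # remember return point
            self.current = self.links[self.current]

    def go_back(self):
        if self.stack:
            self.current = self.stack.pop()       # resume earlier segment

p = Player({"summary": "full-clip"})
p.play("summary")
p.follow_link()   # viewer asks for detail: now viewing "full-clip"
p.go_back()       # return link: back to "summary"
```

This mirrors how the shorter summaries described above can link into progressively longer ones while always offering a way back.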

WebNC: efficient sharing of web applications

Publication Details
  • WWW 2009
  • Apr 22, 2009

Abstract

WebNC is a browser plugin that leverages the Document Object Model for efficiently sharing web browser windows or recording web browsing sessions to be replayed later. Unlike existing screen-sharing or screencasting tools, WebNC is optimized to work with web pages where a lot of scrolling happens. Rendered pages are captured as image tiles and transmitted to a central server through HTTP POST. Viewers can watch the webcasts in real time or asynchronously using a standard web browser: WebNC relies only on HTML and JavaScript to reproduce the captured web content. Along with the visual content of web pages, WebNC also captures their layout and textual content for later retrieval. The resulting webcasts require very little bandwidth, are viewable on any modern web browser including the iPhone and Android phones, and are searchable by keyword.
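The bandwidth savings of tile-based capture come from resending only tiles that changed between frames. The sketch below illustrates that idea in miniature (it is not WebNC's code): split a frame into fixed-size tiles, hash each one, and diff the hashes against the previous frame. When a page merely scrolls, most tiles reappear unchanged and need not be retransmitted.

```python
# Toy tile-based frame differencing: hash fixed-size tiles and report
# only the tiles whose content changed since the previous frame.
import hashlib

TILE = 2  # toy tile edge length; real systems use e.g. 64 or 128 px

def tiles(frame):
    """frame: list of equal-length strings standing in for pixel rows."""
    out = {}
    for y in range(0, len(frame), TILE):
        for x in range(0, len(frame[0]), TILE):
            block = "".join(row[x:x + TILE] for row in frame[y:y + TILE])
            out[(x, y)] = hashlib.sha1(block.encode()).hexdigest()
    return out

def changed_tiles(prev, cur):
    """Positions of tiles that differ from the previous frame."""
    return [pos for pos, h in cur.items() if prev.get(pos) != h]

f1 = ["abcd", "efgh", "ijkl", "mnop"]
f2 = ["abcd", "efgh", "ijkl", "mnoX"]  # one "pixel" changed, bottom-right tile
delta = changed_tiles(tiles(f1), tiles(f2))
```

Only the single changed tile would be re-encoded and POSTed to the server; the other three are reused from the viewer's cache.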
Publication Details
  • CHI2009
  • Apr 4, 2009

Abstract

Zooming user interfaces are increasingly popular on mobile devices with touch screens. Swiping and pinching finger gestures anywhere on the screen manipulate the displayed portion of a page, and taps open objects within the page. This makes navigation easy but limits other manipulations of objects that would be supported naturally by the same gestures, notably cut and paste, multiple selection, and drag and drop. A popular device that suffers from this limitation is Apple's iPhone. In this paper, we present Bezel Swipe, an interaction technique that supports multiple selection, cut, copy, paste and other operations without interfering with zooming, panning, tapping and other pre-defined gestures. Participants of our user study found Bezel Swipe to be a viable alternative to direct touch selection.
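The disambiguation Bezel Swipe relies on can be summarized in one predicate: a drag that begins on the bezel (at the very edge of the display) enters selection mode, while a drag that begins inside the page keeps its usual pan/zoom meaning. The threshold and names below are assumptions for illustration, not the paper's parameters.

```python
# Toy classifier for the bezel-vs-page drag distinction described above.
EDGE = 4  # px: how close to the screen edge a touch must start

def classify_drag(start_x, screen_width):
    """Classify a horizontal drag by where the finger first lands."""
    if start_x <= EDGE or start_x >= screen_width - EDGE:
        return "select"  # began on the bezel -> selection gesture
    return "pan"         # began inside the page -> normal navigation
```

Because the two gesture classes are separated purely by starting position, selection can coexist with the full set of pre-defined zooming and panning gestures.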
Publication Details
  • In Proceedings of CHI 2009
  • Apr 4, 2009

Abstract

One of the core challenges now facing smart rooms is supporting realistic, everyday activities. While much research has been done to push forward the frontiers of novel interaction techniques, we argue that technology geared toward widespread adoption requires a design approach that emphasizes straightforward configuration and control, as well as flexibility. We examined the work practices of users of a large, multi-purpose conference room, and designed DICE, a system to help them use the room's capabilities. We describe the design process, and report findings about the system's usability and about people's use of a multi-purpose conference room.
Publication Details
  • Book chapter in Handbook of Research on Socio-Technical Design and Social Networking Systems, eds. Whitworth B., and de Moor, A. Information Science Reference, pp. 529-543.
  • Mar 2, 2009

Abstract

Eye-gaze plays an important role in face-to-face communication. This chapter presents research on exploiting the rich information contained in human eye-gaze for two types of applications. The first is to enhance computer mediated human-human communication by overlaying eye-gaze movement onto the shared visual spatial discussion material such as a map. The second is to manage multimodal human-computer dialogue by tracking the user's eye-gaze pattern as an indicator of user's interest. We briefly review related literature and summarize results from two research projects on human-human and human-computer communication.
Publication Details
  • Proceedings of TRECVID 2008 Workshop
  • Mar 1, 2009

Abstract

In 2008 FXPAL submitted results for two tasks: rushes summarization and interactive search. The rushes summarization task has been described at the ACM Multimedia workshop [1]. Interested readers are referred to that publication for details. We describe our interactive search experiments in this notebook paper.
Publication Details
  • IUI '09
  • Feb 8, 2009

Abstract

We designed an interactive visual workspace, MediaGLOW, that supports users in organizing personal and shared photo collections. The system interactively places photos with a spring layout algorithm using similarity measures based on visual, temporal, and geographic features. These similarity measures are also used for the retrieval of additional photos. Unlike traditional spring-based algorithms, our approach provides users with several means to adapt the layout to their tasks. Users can group photos in stacks that in turn attract neighborhoods of similar photos. Neighborhoods partition the workspace by severing connections outside the neighborhood. By placing photos into the same stack, users can express a desired organization that the system can use to learn a neighborhood-specific combination of distances.
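A spring layout of the kind described above can be illustrated with one Hooke's-law iteration: each pair of photos is joined by a spring whose rest length is their dissimilarity, so similar photos settle near each other. This is a generic force-directed sketch under assumed details, not MediaGLOW's algorithm.

```python
# One step of a toy force-directed (spring) layout over 2-D positions.
import numpy as np

def spring_step(pos, dist, step=0.05):
    """pos: (n, 2) positions; dist: (n, n) target distances (dissimilarity)."""
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            length = np.linalg.norm(d) + 1e-9
            # Hooke's law: push/pull i toward its target distance from j
            force[i] += (length - dist[i, j]) * d / length
    return pos + step * force

# Two photos placed 3 units apart whose target (dissimilarity) distance is 1:
pos = np.array([[0.0, 0.0], [3.0, 0.0]])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
new = spring_step(pos, dist, step=0.5)
```

Iterating this step converges toward a layout whose inter-photo distances approximate the similarity-derived targets; user-defined stacks and neighborhoods would add extra attractive forces and severed connections on top of this base.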
2008
Publication Details
  • Fuji Xerox Technical Report
  • Dec 15, 2008

Abstract

We have developed an interactive video search system that allows the searcher to rapidly assess query results and easily pivot off those results to form new queries. The system is intended to maximize the use of the discriminative power of the human searcher. The typical video search scenario we consider has a single searcher with the ability to search with text and content-based queries. In this paper, we evaluate a new collaborative modification of our search system. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to focus, enhance and augment the team's search and communication activities. In our evaluations, algorithmic mediation improved the collaborative performance of both retrieval (allowing a team of searchers to find relevant information more efficiently and effectively), and exploration (allowing the searchers to find relevant information that cannot be found while working individually). We present analysis and conclusions from comparative evaluations of the search system.

Rethinking the Podium

Publication Details
  • Chapter in "Interactive Artifacts and Furniture Supporting Collaborative Work and Learning", ed. P. Dillenbourg, J. Huang, and M. Cherubini. Published Nov. 28, 2008, Springer. Computer Supported Collaborative learning Series Vol 10.
  • Nov 28, 2008

Abstract

As the use of rich media in mobile devices and smart environments becomes more sophisticated, so must the design of the everyday objects used as controllers and interfaces. Many new interfaces simply tack electronic systems onto existing forms. However, an original physical design for a smart artefact, that integrates new systems as part of the form of the device, can enhance the end-use experience. The Convertible Podium is an experiment in the design of a smart artefact with complex integrated systems for the use of rich media in meeting rooms. It combines the highly designed look and feel of a modern lectern with systems that allow it to serve as a central control station for rich media manipulation. The interface emphasizes tangibility and ease of use in controlling multiple screens, multiple media sources (including mobile devices) and multiple distribution channels, and managing both data and personal representation in remote telepresence.

Cerchiamo: a collaborative exploratory search tool

Publication Details
  • CSCW 2008 (Demo), San Diego, CA, ACM Press.
  • Nov 10, 2008

Abstract

We describe Cerchiamo, a collaborative exploratory search system that allows teams of searchers to explore document collections synchronously. Working with Cerchiamo, team members use independent interfaces to run queries, browse results, and make relevance judgments. The system mediates the team members' search activity by passing and reordering search results and suggested query terms based on the team's actions. The combination of synchronous influence with independent interaction allows team members to be more effective and efficient in performing search tasks.