Publications

FXPAL publishes in top scientific conferences and journals.

2012
Publication Details
  • IIiX 2012
  • Aug 21, 2012

Abstract

Exploratory search activities tend to span multiple sessions and involve finding, analyzing and evaluating information and collaborating with others. Typical search systems, on the other hand, are designed to support a single searcher performing precision-oriented search tasks. We describe the search interface and system design of a multi-session exploratory search system, discuss design challenges encountered, and chronicle the evolution of our design. Our design includes novel displays for visualizing retrieval history information, and introduces ambient displays and persuasive elements to interactive information retrieval.
Publication Details
  • DIS (Designing Interactive Systems) 2012 Demos track
  • Jun 11, 2012

Abstract

We will demonstrate successive and final stages in the iterative design of a complex mixed reality system in a real-world factory setting. In collaboration with TCHO, a chocolate maker in San Francisco, we built a virtual “mirror” world of a real-world chocolate factory and its processes. Sensor data is imported into the multi-user 3D environment from hundreds of sensors and a number of cameras on the factory floor. The resulting virtual factory is used for simulation, visualization, and collaboration, using a set of interlinked, real-time layers of information. It can be a stand-alone or a web-based application, and also works on iOS and Android cell phones and tablet computers. A unique aspect of our system is that it is designed to enable the incorporation of lightweight social media-style interactions with co-workers along with factory data. Through this mixture of mobile, social, mixed and virtual technologies, we hope to create systems for enhanced collaboration in industrial settings between physically remote people and places, such as factories in China with managers in the US.
Publication Details
  • CHI 2012
  • May 7, 2012

Abstract

Affect influences workplace collaboration and thereby impacts a workplace's productivity. Participants in face-to-face interactions have many cues to each other's affect, but work is increasingly carried out via computer-mediated channels that lack many of these cues. Current presence systems enable users to estimate the availability of other users, but not their affect states or communication preferences. This work investigates relationships between affect state and communication preferences and demonstrates the feasibility of estimating affect state and communication preferences from a presence state stream.
Publication Details
  • CHI 2012
  • May 5, 2012

Abstract

Pico projectors have lately been investigated as mobile display and interaction devices. We propose to use them as ‘light beams’: everyday objects sojourning in a beam are turned into dedicated projection surfaces and tangible interaction devices. While this has been explored for large projectors, the affordances of pico projectors are fundamentally different: they have a very small and strictly limited projection ray and can be carried around in a nomadic way during the day. Thus it is unclear how this could actually be leveraged for tangible interaction with physical, real-world objects. We have investigated this in an exploratory field study and contribute the results. Based upon these, we present exemplary interaction techniques and early user feedback.

Designing a tool for exploratory information seeking

Publication Details
  • CHI 2012
  • May 5, 2012

Abstract

In this paper we describe our on-going design process in building a search system designed to support people's multi-session exploratory search tasks. The system, called Querium, allows people to run queries and to examine results as do conventional search engines, but it also integrates a sophisticated search history that helps people make sense of their search activity over time. Information seeking is a cognitively demanding process that can benefit from many kinds of information, if that information is presented appropriately. Our design process has been focusing on creating displays that facilitate on-going sense-making while keeping the interaction efficient, fluid, and enjoyable.

Querium: A Session-Based Collaborative Search System

Publication Details
  • European Conference on Information Retrieval 2012
  • Apr 1, 2012

Abstract

People's information-seeking can span multiple sessions, and can be collaborative in nature. Existing commercial offerings do not effectively support searchers in sharing, saving, collaborating on, or revisiting their information. In this demo paper we present Querium: a novel session-based collaborative search system that lets users search, share, resume and collaborate with other users. Querium provides a number of novel search features in a collaborative setting, including relevance feedback, query fusion, faceted search, and search histories.
Publication Details
  • DAS 2012
  • Mar 27, 2012

Abstract

This paper describes a system for capturing images of a book with a 3D stereo camera which performs dewarping to produce output images that are flattened. A Fujifilm consumer grade 3D camera (FinePix W3) provides a highly mobile and low cost 3D capture device. Applying standard computer vision algorithms, the camera is calibrated and the captured images are stereo rectified. Due to technical limitations, the resulting point cloud has defects such as splotches and noise, which make it hard to recover the precise 3D locations of the points on the book pages. We address this problem by computing curve profiles of the depth map and using them to build a cylinder model of the pages. We then generate a mesh M1 on the source image and project this into a mesh M2 on the cylinder model in virtual space. Finally, the mesh M2 is flattened and the pixels in M1 are interpolated and rendered via M2 onto the output image. We have implemented a prototype of the system and report on some preliminary evaluation results.
Publication Details
  • ACM Transactions on Computer Human Interaction
  • Mar 1, 2012

Abstract

To combine the affordances of paper and computers, prior research has proposed numerous interactive paper systems that link specific paper document content to digital operations such as multimedia playback and proofreading. Yet, it remains unclear to what degree these systems bridge the inherent gap between paper and computers when compared to existing paper-only and computer-only interfaces. In particular, given the special properties of paper, such as limited dynamic feedback, how well does an average new user learn to master the interactive paper system? What factors affect the user performance? And how does the paper interface work in a typical use scenario? To answer these questions, we conducted two empirical experiments on a generic pen-gesture-based command system, called PapierCraft [Liao, et al., 2008], for paper-based interfaces. With it, people can select sections of a printed document and issue commands such as copy and paste, linking, and in-text search. The first experiment focused on the user performance of drawing pen gestures on paper. It shows that users can learn the command system in about 30 minutes and achieve performance comparable to a Tablet PC-based interface supporting the same gestures. The second experiment examined the application of the command system in Active Reading tasks. The results show promise for seamless integration of paper and computers in Active Reading for their combined affordances. In addition, our study identifies some key design issues, such as the pen form factor and feedback of gestures. This paper contributes to a better understanding of the pros and cons of paper and computers, and sheds light on the design of future interfaces for document interaction.

TalkMiner: A Lecture Video Search Engine

Publication Details
  • Fuji Xerox Technical Report, No. 21, 2012, pp. 118-128
  • Feb 3, 2012

Abstract

The design and implementation of a search engine for lecture webcasts is described. A searchable text index is created allowing users to locate material within lecture videos found on a variety of websites such as YouTube and Berkeley webcasts. The searchable index is built from the text of presentation slides appearing in the video along with other associated metadata such as the title and abstract when available. The automatic identification of distinct slides within the video stream presents several challenges. For example, picture-in-picture compositing of a speaker and a presentation slide, switching cameras, and slide builds confuse basic algorithms for extracting keyframe slide images. Enhanced algorithms are described that improve slide identification. A public system was deployed to test the algorithms and the utility of the search engine at www.talkminer.com. To date, over 17,000 lecture videos have been indexed from a variety of public sources.
Publication Details
  • Fuji Xerox Technical Report No.21 2012
  • Feb 2, 2012

Abstract

Modern office work practices increasingly breach traditional boundaries of time and place, making it difficult to interact with colleagues. To address these problems, we developed myUnity, a software and sensor platform that enables rich workplace awareness and coordination. myUnity is an integrated platform that collects information from a set of independent sensors and external data aggregators to report user location, availability, tasks, and communication channels. myUnity's sensing architecture is component-based, allowing channels of awareness information to be added, updated, or removed at any time. Multiple channels of input are combined and composited into a single, high-level presence state. Early studies of a myUnity deployment have demonstrated that the platform allows quick access to core awareness information and show that it has become a useful tool for supporting communication and collaboration in the modern workplace.
Publication Details
  • Personal and Ubiquitous Computing (PUC)
  • Feb 1, 2012

Abstract

Presence systems are valuable in supporting workplace communication and collaboration. These systems are only effective if widely adopted and used. User perceptions of the utility of the information being shared and their comfort sharing such information strongly impact adoption and use. This paper describes the results of a survey of user preferences regarding comfort with and utility of workplace presence systems; the effects of sampling frequency, fidelity, and aggregation; and design implications of these results. We present new results that extend some past findings while challenging others. We contribute new design insights that inform the design of presence technologies to increase both utility and adoption.
2011
Publication Details
  • The 10th International Conference on Virtual Reality Continuum and Its Applications in Industry
  • Dec 11, 2011

Abstract

Augmented Paper (AP) is an important area of Augmented Reality (AR). Many AP systems rely on visual features for paper document identification. Although promising, these systems can hardly support large sets of documents (i.e. one million documents) because of the high memory and time cost of handling high-dimensional features. On the other hand, general large-scale image identification techniques are not well customized to AP, costing unnecessarily more resources to achieve the identification accuracy required by AP. To address this mismatch between AP and image identification techniques, we propose a novel large-scale image identification technique well geared to AP. At its core is a geometric verification scheme based on Minimum visual-word Correspondence Sets (MICSs). A MICS is a set of visual word (i.e. quantized visual feature) correspondences containing the minimum number sufficient for deriving a transformation hypothesis between a captured document image and an indexed image. Our method selects appropriate MICSs to vote in a Hough space of transformation parameters, and uses a robust dense region detection algorithm to locate the possible transformation models in the space. The models are then utilized to verify all the visual word correspondences to precisely identify the matching indexed image. By taking advantage of unique geometric constraints in AP, our method can significantly reduce the time and memory cost while achieving high accuracy. As shown in evaluations with two AP systems called FACT and EMM, over a dataset with 1M+ images, our method achieves 100% identification accuracy and 0.67% registration error for FACT. For EMM, our method outperforms the state-of-the-art image identification approach by achieving a 4% improvement in detection rate and almost perfect precision, while saving 40% and 70% of memory and time cost, respectively.

PaperUI

Publication Details
  • Springer LNCS
  • Dec 1, 2011

Abstract

PaperUI is a human-information interface concept that advocates using paper as displays and using mobile devices, such as camera phones or camera pens, as traditional computer mice. When emphasizing technical efforts, some researchers refer to the underlying work related to PaperUI as interactive paper systems. We prefer the term PaperUI to emphasize the final goal, narrow the discussion focus, and avoid terminology confusion between interactive paper systems and interactive paper computers [40]. PaperUI combines the merits of paper and mobile devices, in that users can comfortably read and flexibly arrange document content on paper, and access digital functions related to the document via mobile computing devices. This concept aims at novel interface technology that seamlessly bridges the gap between paper and computers for a better user experience in handling documents. Compared with traditional laptops and tablet PCs, the devices involved in the PaperUI concept are more lightweight, compact, energy-efficient, and widely adopted. Therefore, we believe this interface vision can make computation more convenient to access for the general public.
Publication Details
  • ACM Multimedia 2011
  • Nov 28, 2011

Abstract

This paper describes methods for clustering photos that include both time stamps and location coordinates. We present versions of a two part method that first detects clusters using time and location information independently. These candidate clusters partition the set of time-ordered photos. A subset of the candidate clusters is selected by an efficient dynamic programming procedure to optimize a cost function. We propose several cost functions to design clusterings that are coherent in space, time, or both. One set of cost functions minimizes inter-photo distances directly. A second set maximizes an information measure to select clusterings for consistency in both time and space across scale.
Publication Details
  • ACM Multimedia 2011
  • Nov 28, 2011

Abstract

Embedded Media Markers (EMMs) are nearly transparent icons printed on paper documents that link to associated digital media. By using the document content for retrieval, EMMs are less visually intrusive than barcodes and other glyphs while still providing an indication for the presence of links. An initial implementation demonstrated good overall performance but exposed difficulties in guaranteeing the creation of unambiguous EMMs. We developed an EMM authoring tool that supports the interactive authoring of EMMs via visualizations that show the user which areas on a page may cause recognition errors and automatic feedback that moves the authored EMM away from those areas. The authoring tool and the techniques it relies on have been applied to corpora with different visual characteristics to explore the generality of our approach.
Publication Details
  • ACM Multimedia Industrial Exhibit
  • Nov 28, 2011

Abstract

The Active Reading Application (ARA) brings the familiar experience of writing on paper to the tablet. The application augments paper-based practices with audio, the ability to review annotations, and sharing. It is designed to make it easier to review, annotate, and comment on documents by individuals and groups. ARA incorporates several patented technologies and draws on several years of research and experimentation.
Publication Details
  • ACM Multimedia Industrial Exhibits
  • Nov 28, 2011

Abstract

Modern office work practices increasingly breach traditional boundaries of time and place, making it difficult to interact with colleagues. To address these problems, we developed myUnity, a software and sensor platform that enables rich workplace awareness and coordination. myUnity is an integrated platform that collects information from a set of independent sensors and external data aggregators to report user location, availability, tasks, and communication channels. myUnity's sensing architecture is component-based, allowing channels of awareness information to be added, updated, or removed at any time. Our current system includes a variety of sensor and data inputs, including camera-based activity classification, wireless location trilateration, and network activity monitoring. These and other input channels are combined and composited into a single, high-level presence state. Early studies of a myUnity deployment have demonstrated that use of the platform allows quick access to core awareness information and show that it has become a useful tool for supporting communication and collaboration in the modern workplace.

Session-based search with Querium

Publication Details
  • HCIR 2011
  • Oct 20, 2011

Abstract

We illustrate the use of Querium, a novel search system designed to support people's collaborative and multi-session search tasks, in the context of the HCIR 2011 Search Challenge. This report demonstrates how Querium's interface and search engine can be used to search for documents in an open-ended, exploratory task. We illustrate the use of relevance feedback, faceted search, query fusion, and the search history, as well as commenting and overview functions.

Designing for Collaboration in Information Seeking

Publication Details
  • HCIR 2011
  • Oct 19, 2011

Abstract

Information seeking is often a collaborative activity that can take many forms; in this paper we focus on explicit, intentional collaboration in small groups and explore a range of design decisions that should be considered when building Human-Computer Information Retrieval (HCIR) tools that support collaboration. In particular, we are interested in exploring the interplay between algorithmic mediation of collaboration and the mediated communication among team members. We argue that certain characteristics of the group's information need call for different design decisions.
Publication Details
  • Oct 3, 2011

Abstract

Documents created, stored, and retrieved digitally are often printed on paper to be read for the purposes of producing new documents. The cycle of electronic document "consumption" and production is often broken in the middle by printing. Our research in XLibris has examined these transitions between the digital and paper worlds. Starting with interfaces for analytic reading, we have focused on annotation, on retrieval and re-retrieval, and on shared annotation. In this talk, I will describe the interfaces and the empirical evaluations we have conducted, and will discuss the potential of this technology in digital--and in physical--libraries.

PaperUI

Publication Details
  • CBDAR 2011
  • Sep 18, 2011

Abstract

PaperUI is a human-computer interface concept that treats paper as a display that users can interact with via mobile devices such as mobile phones and projectors. It combines the merits of paper and mobile devices. Compared with traditional laptops and tablet PCs, the devices involved in this concept are more lightweight, compact, energy-efficient, and widely adopted. Therefore, we believe this interface vision can make computation more convenient to access for the general public. With our implemented prototype, pilot users can read documents easily and comfortably on paper, and access many digital functions related to the document via a camera phone or a mobile projector.

Invited Talk: http://imlab.jp/cbdar2011/#keynote

Abstract

This demo shows an interactive paper system called MixPad, which features using mice and keyboards to enhance the conventional pen-finger-gesture based interaction with paper documents. Similar to many interactive paper systems, MixPad adopts a mobile camera-projector unit to recognize paper documents, detect pen and finger gestures and provide visual feedback. Unlike these systems, MixPad allows using mice and keyboards to help users interact with fine-grained document content on paper (e.g. individual words and user-defined arbitrary regions), and to facilitate cross-media operations. For instance, to copy a document segment from paper to a laptop, one first points a finger of her non-dominant hand to the segment roughly, and then uses a mouse in her dominant hand to refine the selection and drag it to the laptop; she can also type text as a detailed comment on a paper document. This novel interaction paradigm combines the advantages of mice, keyboards, pens and fingers, and therefore enables rich digital functions on paper.
Publication Details
  • MobileHCI
  • Aug 30, 2011

Abstract

Modern office work practices increasingly breach traditional boundaries of time and place, increasing the breakdowns workers encounter when coordinating interaction with colleagues. We conducted interviews with 12 workers and identified key problems introduced by these practices. To address these problems we developed myUnity, a fully functional platform enabling rich workplace awareness and coordination. myUnity is one of the first integrated platforms to span mobile and desktop environments, both in terms of access and sensing. It uses multiple sources to report user location, availability, tasks, and communication channels. A pilot field study of myUnity demonstrated the significant value of pervasive access to workplace awareness and communication facilities, as well as positive behavioral change in day-to-day communication practices for most users. We present resulting insights about the utility of awareness technology in flexible work environments.
Publication Details
  • International Journal of Arts and Technology
  • Jul 25, 2011

Abstract

Mobile media applications need to balance user and group goals, attentional constraints, and limited screen real estate. In this paper, we describe the iterative development and testing of an application that explores these tradeoffs. We developed early prototypes of a retrospective, time-based system as well as a prospective, space-based system. Our experiences with the prototypes led us to focus on the prospective system. We argue that attentional demands dominate, and that mobile media applications should be lightweight and hands-free as much as possible.