Publications

FXPAL publishes in top scientific conferences and journals.

2009

WebNC: efficient sharing of web applications

Publication Details
  • Hypertext 2009
  • Jun 29, 2009

Abstract

WebNC is a system for efficiently sharing, retrieving and viewing web applications. Unlike existing screencasting and screensharing tools, WebNC is optimized to work with web pages where a lot of scrolling happens. WebNC uses a tile-based encoding to capture, transmit and deliver web applications, and relies only on dynamic HTML and JavaScript. The resulting webcasts require very little bandwidth and are viewable on any modern web browser including Firefox and Internet Explorer as well as browsers on the iPhone and Android platforms.
Publication Details
  • Journal article in Artificial Intelligence for Engineering Design, Analysis and Manufacturing (2009), 23, 263-274. © 2009 Cambridge University Press.
  • Jun 17, 2009

Abstract

Modern design embraces digital augmentation, especially in the interplay of digital media content and the physical dispersion and handling of information. Based on the observation that small paper memos with sticky backs (such as Post-its™) are a powerful and frequently used design tool, we have created Post-Bits, a new interface device with a physical embodiment that can be handled as naturally as paper sticky notes by designers, yet also adds digital information affordances. A Post-Bit is a design prototype of a small electronic paper device for handling multimedia content, with interaction control and display in one thin flexible sheet. Tangible properties of paper such as flipping, flexing, scattering, and rubbing are mapped to controlling aspects of the multimedia content such as scrubbing, sorting, or up- or downloading dynamic media (images, video, text). In this paper we discuss both the design process involved in building a prototype of a tangible interface using new technologies, and how the use of Post-Bits as a tangible design tool can impact two common design tasks: design ideation or brainstorming, and storyboarding for interactive systems or devices.
Publication Details
  • Immerscom 2009
  • May 27, 2009

Abstract

We describe Pantheia, a system that constructs virtual models of real spaces from collections of images, through the use of visual markers that guide and constrain model construction. To create a model, users simply "mark up" the real world scene by placing pre-printed markers that describe scene elements or impose semantic constraints. Users then collect still images or video of the scene. From this input, Pantheia automatically and quickly produces a model. The Pantheia system was used to produce models of two rooms that demonstrate the effectiveness of the approach.
Publication Details
  • Pervasive 2009
  • May 11, 2009

Abstract

Recorded presentations are difficult to watch on a mobile phone because of the small screen, and even more challenging when the user is traveling or commuting. This demo shows an application designed for viewing presentations in a mobile situation, and describes the design process that involved on-site observation and informal user testing at our lab. The system generates a user-controllable movie by capturing a slide presentation, extracting active regions of interest using cues from the presenter, and creating pan-and-zoom effects to direct the active regions within a small screen. During playback, the user can simply watch the movie in automatic mode using a minimal amount of effort to operate the application. When more flexible control is needed, the user can switch into manual mode to temporarily focus on specific regions of interest.
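The paper specifies the camera-path generation only at a high level; the sketch below is an illustrative reconstruction, assuming the active regions of interest have already been extracted as timed rectangles in slide coordinates (the `Region`, `viewport_for`, and `pan_and_zoom_track` names are hypothetical, not the system's actual API).

```python
from dataclasses import dataclass

@dataclass
class Region:
    t: float                                 # time (s) at which the presenter activates this region
    x: float; y: float; w: float; h: float   # region of interest in slide coordinates

def viewport_for(region, screen_w, screen_h, margin=1.1):
    """Zoom factor and center that fit one region of interest on the small screen."""
    zoom = min(screen_w / (region.w * margin), screen_h / (region.h * margin))
    return zoom, region.x + region.w / 2, region.y + region.h / 2

def pan_and_zoom_track(regions, screen_w, screen_h, fps=15, transition=1.0):
    """Per-frame (zoom, cx, cy) samples: hold on each region, then ease linearly to the next."""
    frames = []
    for cur, nxt in zip(regions, regions[1:]):
        z0, x0, y0 = viewport_for(cur, screen_w, screen_h)
        z1, x1, y1 = viewport_for(nxt, screen_w, screen_h)
        hold = max(0.0, nxt.t - cur.t - transition)
        frames += [(z0, x0, y0)] * int(hold * fps)
        steps = max(1, int(transition * fps))
        for i in range(steps):                      # linear pan-and-zoom transition
            a = i / steps
            frames.append((z0 + a * (z1 - z0), x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    frames.append(viewport_for(regions[-1], screen_w, screen_h))
    return frames
```

Automatic mode would simply play these samples back; manual mode would override them while the user focuses on a specific region.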
Publication Details
  • ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 5, Issue 2
  • May 1, 2009

Abstract

Hyper-Hitchcock consists of three components for creating and viewing a form of interactive video called detail-on-demand video: a hypervideo editor, a hypervideo player, and algorithms for automatically generating hypervideo summaries. Detail-on-demand video is a form of hypervideo that supports one hyperlink at a time for navigating between video sequences. The Hyper-Hitchcock editor enables authoring of detail-on-demand video without programming and uses video processing to aid in the authoring process. The Hyper-Hitchcock player uses labels and keyframes to support navigation through and back hyperlinks. Hyper-Hitchcock includes techniques for automatically generating hypervideo summaries of one or more videos that take the form of multiple linear summaries of different lengths with links from the shorter to the longer summaries. User studies on authoring and viewing provided insight into the various roles of links in hypervideo and found that player interface design greatly affects people's understanding of hypervideo structure and the video they access.
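The abstract describes detail-on-demand video structurally: at most one link is active at a time, and the player supports navigating through links and back again. A minimal data-structure sketch under that reading (all class names are illustrative, not Hyper-Hitchcock's actual API):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Link:
    label: str                  # shown to the viewer while the link is active
    target: "Composite"         # destination video sequence
    return_offset: float = 0.0  # where playback resumes after navigating back

@dataclass
class Clip:
    src: str                    # media file
    start: float
    end: float
    link: Optional[Link] = None # at most one link can be active at a time

@dataclass
class Composite:
    name: str
    clips: List[Clip] = field(default_factory=list)

class Player:
    """Follows the single active link and returns via a stack, like a 'back' button."""
    def __init__(self, root: Composite):
        self.current, self.stack = root, []

    def follow(self, clip: Clip):
        if clip.link:
            self.stack.append((self.current, clip.link.return_offset))
            self.current = clip.link.target

    def back(self):
        if self.stack:
            self.current, resume_at = self.stack.pop()
            return resume_at
```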

WebNC: efficient sharing of web applications

Publication Details
  • WWW 2009
  • Apr 22, 2009

Abstract

WebNC is a browser plugin that leverages the Document Object Model for efficiently sharing web browser windows or recording web browsing sessions to be replayed later. Unlike existing screen-sharing or screencasting tools, WebNC is optimized to work with web pages where a lot of scrolling happens. Rendered pages are captured as image tiles and transmitted to a central server through HTTP POST. Viewers can watch the webcasts in real time or asynchronously using a standard web browser: WebNC relies only on HTML and JavaScript to reproduce the captured web content. Along with the visual content of web pages, WebNC also captures their layout and textual content for later retrieval. The resulting webcasts require very little bandwidth, are viewable on any modern web browser including the iPhone and Android phones, and are searchable by keyword.
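As a rough illustration of the tile-based capture pipeline described above, the sketch below cuts a captured page rendering into fixed-size tiles and POSTs only tiles whose content changed. The endpoint URL, tile size, and change-detection-by-hash strategy are assumptions; WebNC's actual DOM capture, scroll handling, and encoding are not reproduced here.

```python
import hashlib
import io

import requests
from PIL import Image

TILE = 256
SERVER = "https://example.com/webnc/upload"   # hypothetical upload endpoint

def tiles(screenshot: Image.Image):
    """Cut a captured page rendering into fixed-size tiles keyed by grid position."""
    w, h = screenshot.size
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            yield (tx, ty), screenshot.crop((tx, ty, min(tx + TILE, w), min(ty + TILE, h)))

def post_changed_tiles(screenshot, session_id, sent_hashes):
    """POST only tiles whose pixels changed since the previous capture."""
    for (tx, ty), tile in tiles(screenshot):
        buf = io.BytesIO()
        tile.save(buf, format="PNG")
        digest = hashlib.sha1(buf.getvalue()).hexdigest()
        if sent_hashes.get((tx, ty)) != digest:       # unchanged tiles are skipped,
            sent_hashes[(tx, ty)] = digest            # which keeps bandwidth low
            requests.post(SERVER,
                          data={"session": session_id, "x": tx, "y": ty},
                          files={"tile": (f"{tx}_{ty}.png", buf.getvalue(), "image/png")})
```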
Publication Details
  • CHI 2009
  • Apr 4, 2009

Abstract

Zooming user interfaces are increasingly popular on mobile devices with touch screens. Swiping and pinching finger gestures anywhere on the screen manipulate the displayed portion of a page, and taps open objects within the page. This makes navigation easy but limits other manipulations of objects that would be supported naturally by the same gestures, notably cut and paste, multiple selection, and drag and drop. A popular device that suffers from this limitation is Apple's iPhone. In this paper, we present Bezel Swipe, an interaction technique that supports multiple selection, cut, copy, paste and other operations without interfering with zooming, panning, tapping and other pre-defined gestures. Participants of our user study found Bezel Swipe to be a viable alternative to direct touch selection.
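The key idea is that a gesture starting on the screen bezel is routed to selection rather than to panning or zooming. A minimal recognizer sketch, with the bezel band width and event names chosen for illustration only:

```python
BEZEL_BAND = 10  # px: touches starting this close to the screen edge enter selection mode

class BezelSwipeRecognizer:
    """Route a touch to selection if it starts on the bezel band, otherwise to panning."""
    def __init__(self, screen_w, screen_h):
        self.screen_w, self.screen_h = screen_w, screen_h
        self.selecting = False
        self.anchor = None

    def touch_down(self, x, y):
        on_bezel = (x < BEZEL_BAND or x > self.screen_w - BEZEL_BAND or
                    y < BEZEL_BAND or y > self.screen_h - BEZEL_BAND)
        self.selecting, self.anchor = on_bezel, (x, y)
        return "select-start" if on_bezel else "pan-start"

    def touch_move(self, x, y):
        return ("extend-selection", self.anchor, (x, y)) if self.selecting else ("pan", (x, y))

    def touch_up(self, x, y):
        mode = "commit-selection" if self.selecting else "pan-end"
        self.selecting, self.anchor = False, None
        return mode
```

Because ordinary swipes, pinches, and taps never begin on the bezel band, the pre-defined zooming and panning gestures pass through untouched.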

DICE: Designing Conference Rooms for Usability

Publication Details
  • In Proceedings of CHI 2009
  • Apr 4, 2009

Abstract

One of the core challenges now facing smart rooms is supporting realistic, everyday activities. While much research has been done to push forward the frontiers of novel interaction techniques, we argue that technology geared toward widespread adoption requires a design approach that emphasizes straightforward configuration and control, as well as flexibility. We examined the work practices of users of a large, multi-purpose conference room, and designed DICE, a system to help them use the room's capabilities. We describe the design process, and report findings about the system's usability and about people's use of a multi-purpose conference room.

Gaze-aided human-computer and human-human dialogue

Publication Details
  • Book chapter in Handbook of Research on Socio-Technical Design and Social Networking Systems, eds. Whitworth, B. and de Moor, A. Information Science Reference, pp. 529-543.
  • Mar 2, 2009

Abstract

Eye-gaze plays an important role in face-to-face communication. This chapter presents research on exploiting the rich information contained in human eye-gaze for two types of applications. The first is to enhance computer mediated human-human communication by overlaying eye-gaze movement onto the shared visual spatial discussion material such as a map. The second is to manage multimodal human-computer dialogue by tracking the user's eye-gaze pattern as an indicator of the user's interest. We briefly review related literature and summarize results from two research projects on human-human and human-computer communication.
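The chapter does not spell out how raw gaze samples are turned into an interest indicator; one common building block is dispersion-threshold (I-DT) fixation detection, sketched below purely as an illustration (the thresholds and the (t, x, y) sample format are assumptions).

```python
def fixations(samples, max_dispersion=30, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection over (t, x, y) gaze samples.
    Returns (t_start, t_end, cx, cy) tuples that could drive a gaze overlay or
    an interest estimate for the region being looked at."""
    out, i = [], 0
    while i < len(samples):
        j = i
        while j + 1 < len(samples):
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if j > i and samples[j][0] - samples[i][0] >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            out.append((samples[i][0], samples[j][0], sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return out
```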
Publication Details
  • Proceedings of TRECVID 2008 Workshop
  • Mar 1, 2009

Abstract

In 2008 FXPAL submitted results for two tasks: rushes summarization and interactive search. The rushes summarization task has been described at the ACM Multimedia workshop [1]. Interested readers are referred to that publication for details. We describe our interactive search experiments in this notebook paper.
Publication Details
  • IUI '09
  • Feb 8, 2009

Abstract

We designed an interactive visual workspace, MediaGLOW, that supports users in organizing personal and shared photo collections. The system interactively places photos with a spring layout algorithm using similarity measures based on visual, temporal, and geographic features. These similarity measures are also used for the retrieval of additional photos. Unlike traditional spring-based algorithms, our approach provides users with several means to adapt the layout to their tasks. Users can group photos in stacks that in turn attract neighborhoods of similar photos. Neighborhoods partition the workspace by severing connections outside the neighborhood. By placing photos into the same stack, users can express a desired organization that the system can use to learn a neighborhood-specific combination of distances.
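As a rough sketch of the kind of spring layout described above, the following force-directed loop uses a combined dissimilarity as each pair's rest length, so similar photos settle near each other. The constants, the `dist` callback, and the omission of stacks and neighborhoods are simplifications, not MediaGLOW's actual algorithm.

```python
import math
import random

def spring_layout(photos, dist, iterations=200, k=0.05, rest_scale=100.0):
    """Minimal force-directed layout: each pair of photos is joined by a spring
    whose rest length grows with dissimilarity. `dist(a, b)` is assumed to be a
    combined visual/temporal/geographic distance in [0, 1]; `photos` are ids."""
    pos = {p: [random.uniform(0, 500), random.uniform(0, 500)] for p in photos}
    for _ in range(iterations):
        for a in photos:
            fx = fy = 0.0
            for b in photos:
                if a is b:
                    continue
                dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
                d = math.hypot(dx, dy) or 1e-6
                rest = rest_scale * dist(a, b)
                f = k * (d - rest)              # Hooke's law toward the rest length
                fx += f * dx / d
                fy += f * dy / d
            pos[a][0] += fx
            pos[a][1] += fy
    return pos
```

In the full system, stacks and neighborhoods would add further attraction terms and sever springs that cross a neighborhood boundary.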
2008

Interactive Multimedia Search: Systems for Exploration and Collaboration

Publication Details
  • Fuji Xerox Technical Report
  • Dec 15, 2008

Abstract

We have developed an interactive video search system that allows the searcher to rapidly assess query results and easily pivot off those results to form new queries. The system is intended to maximize the use of the discriminative power of the human searcher. The typical video search scenario we consider has a single searcher with the ability to search with text and content-based queries. In this paper, we evaluate a new collaborative modification of our search system. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to focus, enhance and augment the team's search and communication activities. In our evaluations, algorithmic mediation improved the collaborative performance of both retrieval (allowing a team of searchers to find relevant information more efficiently and effectively), and exploration (allowing the searchers to find relevant information that cannot be found while working individually). We present analysis and conclusions from comparative evaluations of the search system.

Rethinking the Podium

Publication Details
  • Chapter in "Interactive Artifacts and Furniture Supporting Collaborative Work and Learning", eds. P. Dillenbourg, J. Huang, and M. Cherubini. Springer, Nov. 28, 2008. Computer Supported Collaborative Learning Series, Vol. 10.
  • Nov 28, 2008

Abstract

As the use of rich media in mobile devices and smart environments becomes more sophisticated, so must the design of the everyday objects used as controllers and interfaces. Many new interfaces simply tack electronic systems onto existing forms. However, an original physical design for a smart artefact, that integrates new systems as part of the form of the device, can enhance the end-use experience. The Convertible Podium is an experiment in the design of a smart artefact with complex integrated systems for the use of rich media in meeting rooms. It combines the highly designed look and feel of a modern lectern with systems that allow it to serve as a central control station for rich media manipulation. The interface emphasizes tangibility and ease of use in controlling multiple screens, multiple media sources (including mobile devices) and multiple distribution channels, and managing both data and personal representation in remote telepresence.

Cerchiamo: a collaborative exploratory search tool

Publication Details
  • CSCW 2008 (Demo), San Diego, CA, ACM Press.
  • Nov 10, 2008

Abstract

We describe Cerchiamo, a collaborative exploratory search system that allows teams of searchers to explore document collections synchronously. Working with Cerchiamo, team members use independent interfaces to run queries, browse results, and make relevance judgments. The system mediates the team members' search activity by passing and reordering search results and suggested query terms based on the team's actions. The combination of synchronous influence with independent interaction allows team members to be more effective and efficient in performing search tasks.
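A minimal sketch of what such algorithmic mediation could look like, assuming documents are plain-text strings keyed by id and relevance judgments are shared across the team; the term-overlap scoring below is an illustrative stand-in for Cerchiamo's actual mediation, not its published algorithm.

```python
from collections import Counter

def terms(text):
    """Trivial tokenizer used for illustration."""
    return text.lower().split()

def mediate(results, docs, judgments, seen):
    """Reorder one searcher's results using the whole team's activity: drop documents
    anyone has judged or viewed, and promote the rest by term overlap with documents
    the team marked relevant.
    results: ranked doc ids; docs: id -> text; judgments: id -> bool; seen: set of ids."""
    relevant = Counter(t for d, rel in judgments.items() if rel for t in terms(docs[d]))
    unseen = [d for d in results if d not in judgments and d not in seen]
    return sorted(unseen, key=lambda d: -sum(relevant[t] for t in set(terms(docs[d]))))

def suggest_terms(docs, judgments, top_n=5):
    """Suggested query terms drawn from the team's relevant documents."""
    counts = Counter(t for d, rel in judgments.items() if rel for t in terms(docs[d]))
    return [t for t, _ in counts.most_common(top_n)]
```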
Publication Details
  • Workshop held in conjunction with CSCW 2008
  • Nov 8, 2008

Abstract

It is increasingly common to find Multiple Display Environments (MDEs) in a variety of settings, including the workplace, the classroom, and perhaps soon, the home. While some technical challenges exist even in single-user MDEs, collaborative use of MDEs offers a rich set of opportunities for research and development. In this workshop, we will bring together experts in designing, developing, building and evaluating MDEs to improve our collective understanding of design guidelines, relevant real-world activities, evaluation methods and metrics, and opportunities for remote as well as collocated collaboration. We intend not only to create a broader understanding of this growing field, but also to foster a community of researchers interested in bringing these environments from the laboratory to the real world. Specifically, we intend to explore the following research themes:
  • Elicitation and process of distilling design guidelines for MDE systems and interfaces.
  • Investigation and classification of activities suited for MDEs.
  • Exploration and assessment of how existing groupware theories apply to collaboration in MDEs.
  • Evaluation techniques and metrics for assessing effectiveness of prototype MDE systems and interfaces.
  • Exploration of MDE use beyond strictly collocated collaboration.

Remix rooms: Redefining the smart conference room

Publication Details
  • CSCW 2008 (Workshop)
  • Nov 8, 2008

Abstract

In this workshop we will explore how the experience of smart conference rooms can be broadened to include different contexts and media such as context-aware mobile systems, personal and professional videoconferencing, virtual worlds, and social software. How should the technologies behind conference room systems reflect the rapidly changing expectations around personal devices and social online spaces like Facebook, Twitter, and Second Life? What kinds of systems are needed to support meetings in technologically complex environments? How can a mashup of conference room spaces and technologies account for differing social and cultural practices around meetings? What requirements are imposed by security and privacy issues in public and semi-public spaces?

Reading in the Office

Publication Details
  • BooksOnline'08, October 30, 2008
  • Oct 30, 2008

Abstract

Reading online poses a number of technological challenges. Advances in technology such as touch screens, light-weight high-power computers, and bi-stable displays have periodically renewed interest in online reading over the last twenty years, only to see that interest decline to a small early-adopter community. The recent release of the Kindle by Amazon is another attempt to create an online reading device. Has publicity surrounding the Kindle and other such devices reached the critical mass needed for them to penetrate the consumer market successfully, or will we see a decline in interest over the next couple of years, echoing the lifecycle of the Softbook™ and Rocket eBook™ devices that preceded them? I argue that the true value of online reading lies in supporting activities beyond reading per se: activities such as annotation, reading and comparing multiple documents, and transitions between reading, writing, and retrieval. Whether the current hardware will be successful in the long term may depend on its ability to address the reading needs of knowledge workers, not just leisure readers.
Publication Details
  • ACM Multimedia 2008
  • Oct 27, 2008

Abstract

Audio monitoring has many applications but also raises privacy concerns. In an attempt to help alleviate these concerns, we have developed a method for reducing the intelligibility of speech while preserving intonation and the ability to recognize most environmental sounds. The method is based on identifying vocalic regions and replacing the vocal tract transfer function of these regions with the transfer function from prerecorded vowels, where the identity of the replacement vowel is independent of the identity of the spoken syllable. The audio signal is then re-synthesized using the original pitch and energy, but with the modified vocal tract transfer function. We performed an intelligibility study which showed that environmental sounds remained recognizable while speech intelligibility was dramatically reduced, to a 7% word recognition rate.
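A simplified, LPC-based reading of this method is sketched below for a single vocalic frame: the frame is inverse-filtered to recover its excitation (which carries pitch and energy) and then re-synthesized through the all-pole filter of a prerecorded vowel. Vocalic detection, framing, and overlap-add are omitted, and the LPC order, normalization, and function names are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(frame, order=12):
    """All-pole (LPC) coefficients via the autocorrelation method."""
    r = np.correlate(frame, frame, "full")[len(frame) - 1:len(frame) + order]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return np.concatenate(([1.0], -a))            # prediction-error filter A(z)

def obscure_vocalic_frame(frame, replacement_vowel_frame, order=12):
    """Replace the vocal tract transfer function of one vocalic frame with that of
    a prerecorded vowel, keeping the original excitation (pitch and energy)."""
    a_orig = lpc(frame, order)
    a_repl = lpc(replacement_vowel_frame, order)
    residual = lfilter(a_orig, [1.0], frame)      # excitation carries pitch and energy
    resynth = lfilter([1.0], a_repl, residual)    # new vocal tract transfer function
    rms = np.sqrt(np.mean(frame ** 2)) or 1.0     # restore the original frame energy
    return resynth * rms / (np.sqrt(np.mean(resynth ** 2)) + 1e-12)
```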
Publication Details
  • Proceedings of ACM Multimedia '08, pp. 817-820 (Short Paper).
  • Oct 27, 2008

Abstract

We present an automatic zooming technique that leverages content analysis for viewing a document page on a small display such as a mobile phone or PDA. The page can come from a scanned document (bitmap image) or an electronic document (text and graphics data plus metadata). The page with text and graphics is segmented into regions. For each region, a scale-distortion function is constructed based on image analysis of the signal distortion that occurs at different scales. During interactive viewing of the document, as the user navigates by moving the viewport around the page, the zoom factor is automatically adjusted by optimizing the scale-distortion functions of the regions visible in the viewport.
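The zoom-selection step can be read as a small optimization over the regions visible in the viewport; a minimal sketch, assuming each region carries a precomputed scale-distortion function and an area weight (both attribute names are illustrative, and the paper's exact formulation may differ):

```python
def choose_zoom(visible_regions, candidate_scales):
    """Pick the zoom factor minimizing total area-weighted distortion of the regions
    currently visible in the viewport. Each region is assumed to expose a precomputed
    scale-distortion function `distortion(scale)` and an `area`."""
    def total_distortion(scale):
        return sum(r.area * r.distortion(scale) for r in visible_regions)
    return min(candidate_scales, key=total_distortion)
```

As the user moves the viewport, the set of visible regions changes and the zoom factor is re-optimized, producing the automatic adjustment described above.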

mTable: Browsing Photos and Videos on a Tabletop System

Publication Details
  • ACM Multimedia 2008 (Video)
  • Oct 27, 2008

Abstract

In this video demo, we present mTable, a multimedia tabletop system for browsing photo and video collections. We have developed a set of applications for visualizing and exploring photos, a board game for labeling photos, and a 3D cityscape metaphor for browsing videos. The system is suitable for use in a living room or office lounge, and can support multiple displays by visualizing the collections on the tabletop and showing full-size images and videos on another flat panel display in the room.
Publication Details
  • ACM Multimedia 2008
  • Oct 27, 2008

Abstract

PicNTell is a new technique for generating compelling screencasts, with which users can quickly record desktop activities and generate videos that are embeddable on popular video sharing sites such as YouTube®. While standard video editing and screen capture tools are useful for some editing tasks, they have two main drawbacks: (1) they require users to import and organize media in a separate interface, and (2) they do not support natural (or camcorder-like) screen recording, and instead usually require the user to define a specific region or window to record. In this paper we review current screen recording use, and present the PicNTell system, pilot studies, and a new six degree-of-freedom tracker we are developing in response to our findings.
Publication Details
  • ACM Multimedia 2008
  • Oct 27, 2008

Abstract

This demo introduces a tool for accessing an e-document by capturing one or more images of a real object or document hardcopy. This tool is useful when a file name or location of the file is unknown or unclear. It can save field workers and office workers from remembering/exploring numerous directories and file names. Frequently, it can convert tedious keyboard typing in a search box to a simple camera click. Additionally, when a remote collaborator cannot clearly see an object or a document hardcopy through remote collaboration cameras, this tool can be used to automatically retrieve and send the original e-document to a remote screen or printer.
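The demo abstract does not disclose its matching method; a generic local-feature approach (ORB descriptors with brute-force Hamming matching via OpenCV) is sketched below only to illustrate how a camera capture might be matched against pre-rendered page images. The index layout and distance threshold are assumptions.

```python
import cv2

def best_matching_document(query_image_path, page_index):
    """Illustrative image-based document lookup: match local features of a camera
    capture against pre-rendered page images and return the best-matching doc id.
    `page_index` maps document ids to grayscale page images (numpy arrays)."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    query = cv2.imread(query_image_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(query, None)
    best_doc, best_score = None, 0
    for doc_id, page in page_index.items():
        _, p_desc = orb.detectAndCompute(page, None)
        if q_desc is None or p_desc is None:
            continue
        good = [m for m in matcher.match(q_desc, p_desc) if m.distance < 40]
        if len(good) > best_score:
            best_doc, best_score = doc_id, len(good)
    return best_doc
```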

Ranked Feature Fusion Models for Ad Hoc Retrieval

Publication Details
  • CIKM (Conference on Information and Knowledge Management) 2008, October, Napa, CA
  • Oct 27, 2008

Abstract

We introduce the Ranked Feature Fusion framework for information retrieval system design. Typical information retrieval formalisms such as the vector space model, the best-match model and the language model first combine features (such as term frequency and document length) into a unified representation, and then use the representation to rank documents. We take the opposite approach: documents are first ranked by the relevance of a single feature value and are assigned scores based on their relative ordering within the collection. A separate ranked list is created for every feature value, and these lists are then fused to produce a final document scoring. This new "rank then combine" approach is extensively evaluated and is shown to be as effective as traditional "combine then rank" approaches. The model is easy to understand and contains fewer parameters than other approaches. Finally, the model is easy to extend (integration of new features is trivial) and to modify; extensions include, but are not limited to, relevance feedback and distribution flattening.
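A minimal sketch of the "rank then combine" idea, assuming higher feature values are better and using normalized rank as the per-feature score; the paper's exact scoring and fusion weights may differ.

```python
def ranked_feature_fusion(feature_values, weights=None):
    """For every feature, rank documents by that feature's value alone and score them
    by their relative position in that ranking; then fuse the per-feature ranked lists
    into a single final ranking.
    feature_values: feature name -> {doc_id: value}; higher values rank better."""
    weights = weights or {f: 1.0 for f in feature_values}
    fused = {}
    for feature, values in feature_values.items():
        ranked = sorted(values, key=values.get, reverse=True)
        n = len(ranked)
        for rank, doc in enumerate(ranked):
            # score by relative ordering within the collection, then accumulate
            fused[doc] = fused.get(doc, 0.0) + weights[feature] * (n - rank) / n
    return sorted(fused, key=fused.get, reverse=True)
```

Adding a new feature only adds one more ranked list to fuse, which is the extensibility the abstract highlights.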
Publication Details
  • ACM Multimedia
  • Oct 27, 2008

Abstract

Retail establishments want to know about traffic flow and patterns of activity in order to better arrange and staff their business. A large number of fixed video cameras are commonly installed at these locations. While they can be used to observe activity in the retail environment, assigning personnel to this is too time consuming to be valuable for retail analysis. We have developed video processing and visualization techniques that generate presentations appropriate for examining traffic flow and changes in activity at different times of the day. Taking the results of video tracking software as input, our system aggregates activity in different regions of the area being analyzed, determines the average speed of moving objects in the region, and segments time based on significant changes in the quantity and/or location of activity. Visualizations present the results as heat maps to show activity and object counts and average velocities overlaid on the map of the space.
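A minimal sketch of the aggregation step, assuming the tracker output is a list of (time, x, y) trajectories in floor coordinates and a fixed grid cell size; the time-segmentation step described above and the heat map rendering itself are not shown.

```python
import numpy as np

def activity_heatmaps(tracks, floor_w, floor_h, cell=50):
    """Aggregate tracker output into per-cell activity counts and average speeds.
    `tracks` is a list of trajectories, each a list of (t, x, y) samples; the real
    system's tracker format is not specified in the abstract."""
    gw, gh = int(np.ceil(floor_w / cell)), int(np.ceil(floor_h / cell))
    counts = np.zeros((gh, gw))
    speed_sum = np.zeros((gh, gw))
    for track in tracks:
        for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
            dt = t1 - t0
            if dt <= 0:
                continue
            speed = np.hypot(x1 - x0, y1 - y0) / dt
            gx, gy = min(int(x1 // cell), gw - 1), min(int(y1 // cell), gh - 1)
            counts[gy, gx] += 1
            speed_sum[gy, gx] += speed
    avg_speed = np.divide(speed_sum, counts, out=np.zeros_like(speed_sum), where=counts > 0)
    return counts, avg_speed   # render as heat maps overlaid on the floor plan
```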