Pernilla Qvarfordt, Ph.D.

Senior Research Scientist

Pernilla’s research focuses on understanding human attention and behavior when interacting with multimedia environments. Most recently she has been working on designing and evaluating exploratory search user interfaces in the Querium project, and on tools for improving interaction with images and video, in particular using eye tracking as an additional source of information about users’ attention (see the Eye Tracking project). She has developed a concept for improving visual search by providing feedback about the user’s attention collected by an eye tracker (see this blog post about one such study). She is also involved in research on multimedia organization, in particular of personal photo and video archives.

Other research interests include the design of environments that promote communication and collaboration. Pernilla has been involved in the MyUnity project, which aimed at designing tools that create awareness of co-workers to improve communication in the workplace. She has also contributed to the Distributed Interactive Conference Environment (DICE) project, where she studied collaboration in conference rooms; the Collaborative Exploratory Search project; and the Mixed and Immersive Realities project, where she studied distributed collaboration.

Pernilla is active in the eye tracking research community; she was conference chair for ETRA 2014 and program chair for ETRA 2012. Prior to joining FXPAL in 2005, Pernilla earned her Ph.D. in Computer Science from the Department of Computer and Information Science, Linköping University, Sweden, in 2004. Her dissertation explored the use of eye-gaze information in multimodal interaction. During her graduate studies she was a visiting researcher at L.R.I. at University Paris-Sud and at the IBM Almaden Research Center. At Linköping University, Pernilla was active in course development and teaching in human-computer interaction and interaction design.

Publications

2014
Publication Details
  • SIGIR 2014
  • Jul 6, 2014
  • pp. 495-504

Abstract

People often use more than one query when searching for information. They revisit search results to re-find information and build an understanding of their search need through iterative explorations of query formulation. These tasks are not well-supported by search interfaces and web browsers. We designed and built SearchPanel, a Chrome browser extension that helps people manage their ongoing information seeking. This extension combines document and process metadata into an interactive representation of the retrieved documents that can be used for sense-making, navigation, and re-finding documents. In a real-world deployment spanning over two months, results show that SearchPanel appears to have been primarily used for complex information needs, in search sessions with long durations and high numbers of queries. The process metadata features in SearchPanel seem to be of particular importance when working on complex information needs.
2013
Publication Details
  • EuroHCIR 2013
  • Aug 1, 2013

Abstract

People often use more than one query when searching for information; they also revisit search results to re-find information. These tasks are not well-supported by search interfaces and web browsers. We designed and built a Chrome browser extension that helps people manage their ongoing information seeking. The extension combines document and process metadata into an interactive representation of the retrieved documents that can be used for sense-making, for navigation, and for re-finding documents.
Publication Details
  • SIGIR 2013
  • Jul 28, 2013

Abstract

Exploratory search is a complex, iterative information seeking activity that involves running multiple queries and finding and examining many documents. We introduced a query preview interface that visualizes the distribution of newly-retrieved and re-retrieved documents prior to showing the detailed query results. When evaluating the preview against a control condition, we found effects on people’s information seeking behavior as well as improved retrieval performance. When using the preview, people spent more time formulating a query, were more likely to explore search results more deeply, retrieved a more diverse set of documents, and found a greater number of different relevant documents. With more time spent on query formulation, higher quality queries were produced, and as a consequence the retrieval results improved; both average residual precision and recall were higher with the query preview present.
Publication Details
  • CHI 2013
  • Apr 27, 2013

Abstract

Although longer queries can produce better results for information seeking tasks, people tend to type short queries. We created an interface designed to encourage people to type longer queries, and evaluated it in two Mechanical Turk experiments. Results suggest that our interface manipulation may be effective for eliciting longer queries.
2010

Camera Pose Navigation using Augmented Reality

Publication Details
  • ISMAR 2010
  • Oct 13, 2010

Abstract

We propose an Augmented Reality (AR) system that helps users take a picture from a designated pose, such as the position and camera angle of an earlier photo. Repeat photography is frequently used to observe and document changes in an object. Our system uses AR technology to estimate camera poses in real time. When a user takes a photo, the camera pose is saved as a 'view bookmark.' To support a user in taking a repeat photo, two simple graphics are rendered in an AR viewer on the camera's screen to guide the user to this bookmarked view. The system then uses image adjustment techniques to create an image based on the user's repeat photo that is even closer to the original.
Publication Details
  • CHI 2010
  • Apr 10, 2010

Abstract

The modern workplace is inherently collaborative, and this collaboration relies on effective communication among coworkers. Many communication tools – email, blogs, wikis, Twitter, etc. – have become increasingly available and accepted in workplace communications. In this paper, we report on a study of communications technologies used over a one-year period in a small US corporation. We found that participants used a large number of communication tools for different purposes, and that the introduction of new tools did not significantly impact the use of previously adopted technologies. Further, we identified distinct classes of users based on patterns of tool use. This work has implications for the design of technology in the evolving ecology of communication tools.
Publication Details
  • ETRA 2010
  • Mar 22, 2010

Abstract

In certain applications such as radiology and imagery analysis, it is important to minimize errors. In this paper we evaluate a structured inspection method that uses eye tracking information as a feedback mechanism to the image inspector. Our two-phase method starts with a free viewing phase during which gaze data is collected. During the next phase, we either segment the image, mask previously seen areas of the image, or combine the two techniques, and repeat the search. We compare the different methods proposed for the second search phase by evaluating the inspection method using true positive and false negative rates, and subjective workload. Results show that gaze-blocked configurations reduced the subjective workload, and that gaze-blocking without segmentation showed the largest increase in true positive identifications and the largest decrease in false negative identifications of previously unseen objects.
2009
Publication Details
  • Book chapter in "Designing User Friendly Augmented Work Environments," Computer Supported Cooperative Work series, Saadi Lahlou (Ed.), 2009
  • Sep 30, 2009

Abstract

The Usable Smart Environment project (USE) aims at designing easy-to-use, highly functional next-generation conference rooms. Our first design prototype focuses on creating a "no wizards" room for an American executive; that is, a room the executive could walk into and use by himself, without help from a technologist. A key idea in the USE framework is that customization is one of the best ways to create a smooth user experience. Since the system needs to fit both the personal leadership style of the executive and the corporation's meeting culture, we began the design process by exploring the work flow in and around meetings attended by the executive. Based on our work flow analysis and the scenarios we developed from it, USE developed a flexible, extensible architecture specifically designed to enhance ease of use in smart environment technologies. The architecture allows customization and personalization of smart environments for particular people and groups, types of work, and specific physical spaces. The first USE room was designed for FXPAL's executive "Ian" and installed in Niji, a small executive conference room at FXPAL. Niji currently contains two large interactive whiteboards for projecting presentation material, annotating on a digital whiteboard, or teleconferencing; a Tandberg teleconferencing system; an RFID authentication and biometric identification system; network printing; a simple PDA-based controller; and a tabletop touch-screen console. The console runs the USE room control interface, which controls and switches between all of the equipment mentioned above.
Publication Details
  • CHI 2009
  • Apr 4, 2009

Abstract

One of the core challenges now facing smart rooms is supporting realistic, everyday activities. While much research has been done to push forward the frontiers of novel interaction techniques, we argue that technology geared toward widespread adoption requires a design approach that emphasizes straightforward configuration and control, as well as flexibility. We examined the work practices of users of a large, multi-purpose conference room, and designed DICE, a system to help them use the room's capabilities. We describe the design process, and report findings about the system's usability and about people's use of a multi-purpose conference room.
Publication Details
  • Book chapter in Handbook of Research on Socio-Technical Design and Social Networking Systems, B. Whitworth and A. de Moor (Eds.), Information Science Reference, pp. 529-543
  • Mar 2, 2009

Abstract

Eye-gaze plays an important role in face-to-face communication. This chapter presents research on exploiting the rich information contained in human eye-gaze for two types of applications. The first is to enhance computer-mediated human-human communication by overlaying eye-gaze movement onto shared visual-spatial discussion material, such as a map. The second is to manage multimodal human-computer dialogue by tracking the user's eye-gaze pattern as an indicator of the user's interest. We briefly review related literature and summarize results from two research projects on human-human and human-computer communication.
2008
Publication Details
  • Fuji Xerox Technical Report
  • Dec 15, 2008

Abstract

We have developed an interactive video search system that allows the searcher to rapidly assess query results and easily pivot off those results to form new queries. The system is intended to maximize the use of the discriminative power of the human searcher. The typical video search scenario we consider has a single searcher with the ability to search with text and content-based queries. In this paper, we evaluate a new collaborative modification of our search system. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to focus, enhance and augment the team's search and communication activities. In our evaluations, algorithmic mediation improved the collaborative performance of both retrieval (allowing a team of searchers to find relevant information more efficiently and effectively), and exploration (allowing the searchers to find relevant information that cannot be found while working individually). We present analysis and conclusions from comparative evaluations of the search system.
Publication Details
  • CSCW 2008 (Demo), San Diego, CA, ACM Press.
  • Nov 10, 2008

Abstract

We describe Cerchiamo, a collaborative exploratory search system that allows teams of searchers to explore document collections synchronously. Working with Cerchiamo, team members use independent interfaces to run queries, browse results, and make relevance judgments. The system mediates the team members' search activity by passing and reordering search results and suggested query terms based on the team's actions. The combination of synchronous influence with independent interaction allows team members to be more effective and efficient in performing search tasks.
Publication Details
  • SIGIR 2008 (Singapore, July 20-24, 2008), ACM, New York, NY, pp. 315-322. Best Paper.
  • Jul 22, 2008

Abstract

We describe a new approach to information retrieval: algorithmic mediation for intentional, synchronous collaborative exploratory search. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to focus, enhance and augment the team's search and communication activities. Collaborative search outperformed post hoc merging of similarly instrumented single-user runs. Algorithmic mediation improved both collaborative search (allowing a team of searchers to find relevant information more efficiently and effectively) and exploratory search (allowing the searchers to find relevant information that cannot be found while working individually).
Publication Details
  • TRECVid 2007
  • Mar 1, 2008

Abstract

In 2007 FXPAL submitted results for two tasks: rushes summarization and interactive search. The rushes summarization task has been described at the ACM Multimedia workshop. Interested readers are referred to that publication for details. We describe our interactive search experiments in this notebook paper.
2006
Publication Details
  • UbiComp 2006 Workshop position paper
  • Sep 20, 2006

Abstract

We describe our work-in-progress: a "wizard-free" conference room designed for ease of use, yet retaining next-generation functionality. Called USE (Usable Smart Environments), our system uses multi-display systems, immersive conferencing, and secure authentication. It is based on cross-cultural ethnographic studies of the way people use conference rooms. The USE project has developed a flexible, extensible architecture specifically designed to enhance ease of use in smart environment technologies. The architecture allows customization and personalization of smart environments for particular people and groups, types of work, and specific physical spaces. The system consists of a database of devices with attributes, rooms, and meetings that implements a prototype-instance inheritance mechanism through which contextual information (e.g., IP addresses, application settings, phone numbers for teleconferencing systems, etc.) can be associated with particular devices, rooms, and meetings.
2005
Publication Details
  • M.F. Costabile and F. Paternò (Eds.): INTERACT 2005, LNCS 3585
  • Sep 12, 2005

Abstract

We developed and studied an experimental system, RealTourist, which lets a user plan a conference trip with the help of a remote tourist consultant who could view the tourist's eye-gaze superimposed onto a shared map. Data collected from the experiment were analyzed in conjunction with a literature review on speech and eye-gaze patterns. This inspective, exploratory research identified various functions of gaze-overlay on shared spatial material, including: accurate and direct display of the partner's eye-gaze, implicit deictic referencing, interest detection, common focus and topic switching, increased redundancy and ambiguity reduction, and an increase of assurance, confidence, and understanding. This study serves two purposes. The first is to identify patterns that can serve as a basis for designing multimodal human-computer dialogue systems with eye-gaze locus as a contributing channel. The second is to investigate how computer-mediated communication can be supported by the display of the partner's eye-gaze.