Publications

FXPAL publishes in top scientific conferences and journals.

2017

Abstract

Work breaks can play an important role in the mental and physical well-being of workers and contribute positively to productivity. In this paper we explore the use of activity-, physiological-, and indoor-location sensing to promote mobility during work breaks. While the popularity of devices and applications to promote physical activity is growing, prior research highlights important constraints when designing for the workplace. With these constraints in mind, we developed BreakSense, a mobile application that uses a Bluetooth beacon infrastructure, a smartphone and a smartwatch to encourage mobility during breaks with a game-like design. We discuss constraints imposed by design for work and the workplace, and highlight challenges associated with the use of noisy sensors and methods to overcome them. We then describe a short deployment of BreakSense within our lab that examined bound vs. unbound augmented breaks and how they affect users' sense of completion and readiness to work.

Abstract

Users often use social media to share their interest in products. We propose to identify purchase stages from Twitter data following the AIDA model (Awareness, Interest, Desire, Action). In particular, we define a task of classifying the purchase stage of each tweet in a user's tweet sequence. We introduce RCRNN, a Ranking Convolutional Recurrent Neural Network which computes tweet representations using convolution over word embeddings and models a tweet sequence with gated recurrent units. Also, we consider various methods to cope with the imbalanced label distribution in our data and show that a ranking layer outperforms class weights.
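
To make the RCRNN idea concrete, here is a minimal PyTorch sketch of the pipeline described above: convolution over word embeddings to form tweet representations, then a GRU over the tweet sequence. Layer sizes, the kernel width, and the plain linear scoring head are illustrative assumptions; the paper's ranking layer and training details are not reproduced here.

```python
import torch
import torch.nn as nn

class RCRNNSketch(nn.Module):
    """Sketch of a convolutional recurrent network that tags each tweet
    in a user's sequence with an AIDA purchase stage."""

    def __init__(self, vocab_size, emb_dim=100, channels=64,
                 hidden=128, num_stages=4):  # AIDA: 4 stages (assumed sizes)
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=3, padding=1)
        self.gru = nn.GRU(channels, hidden, batch_first=True)
        self.score = nn.Linear(hidden, num_stages)

    def forward(self, tweets):
        # tweets: list of 1-D LongTensors of token ids, one per tweet
        reps = []
        for t in tweets:
            e = self.embed(t).transpose(0, 1).unsqueeze(0)  # (1, emb, words)
            c = torch.relu(self.conv(e))
            reps.append(c.max(dim=2).values)                # pool over words
        seq = torch.stack(reps, dim=1)                      # (1, n, channels)
        out, _ = self.gru(seq)
        return self.score(out).squeeze(0)                   # per-tweet scores

model = RCRNNSketch(vocab_size=5000)
tweets = [torch.randint(0, 5000, (12,)), torch.randint(0, 5000, (8,))]
print(model(tweets).shape)  # torch.Size([2, 4])
```
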
Publication Details
  • IEEE PerCom 2017
  • Mar 13, 2017

Abstract

We present Lift, a visible light-enabled finger tracking and object localization technique that allows users to perform freestyle multi-touch gestures on any object’s surface in an everyday environment. By projecting encoded visible patterns onto an object’s surface (e.g. paper, display, or table), and localizing the user’s fingers with light sensors, Lift offers users a richer interactive space than the device’s existing interfaces. Additionally, everyday objects can be augmented by attaching sensor units onto their surface to accept multi-touch gesture input. We also present two applications as a proof of concept. Finally, results from our experiments indicate that Lift can localize ten fingers simultaneously with accuracies of 0.9 mm and 1.8 mm on the two axes, respectively, at an average refresh rate of 84 Hz with 16.7 ms delay over WiFi and 12 ms delay over serial, making gesture recognition on non-instrumented objects possible.
Publication Details
  • TRECVID Workshop
  • Mar 1, 2017

Abstract

This is a summary of our participation in the TRECVID 2016 video hyperlinking task (LNK). We submitted four runs in total. A baseline system combined established vector-space text indexing with cosine similarity. Our other runs explored the use of distributed word representations in combination with fine-grained inter-segment text similarity measures.
2016

Automatic Geographic Metadata Correction for Sensor-Rich Video Sequences.

Publication Details
  • ACM SIGSPATIAL GIS 2016
  • Nov 2, 2016

Abstract

Videos recorded with current mobile devices are increasingly geotagged at fine granularity and used in various location based applications and services. However, raw sensor data collected is often noisy, resulting in subsequent inaccurate geospatial analysis. In this study, we focus on the challenging correction of compass readings and present an automatic approach to reduce these metadata errors. Given the small geo-distance between consecutive video frames, image-based localization does not work due to the high ambiguity in the depth reconstruction of the scene. As an alternative, we collect geographic context from OpenStreetMap and estimate the absolute viewing direction by comparing the image scene to world projections obtained with different external camera parameters. To design a comprehensive model, we further incorporate smooth approximation and feature-based rotation estimation when formulating the error terms. Experimental results show that our proposed pyramid-based method outperforms its competitors and reduces orientation errors by an average of 58.8%. Hence, for downstream applications, improved results can be obtained with these more accurate geo-metadata. To illustrate, we present the performance gain in landmark retrieval and tag suggestion by utilizing the accuracy-enhanced geo-metadata.

A General Feature-based Map Matching Framework with Trajectory Simplification.

Publication Details
  • 7th ACM SIGSPATIAL International Workshop on GeoStreaming (IWGS 2016)
  • Oct 31, 2016

Abstract

Accurate map matching has been a fundamental but challenging problem that has drawn great research attention in recent years. It aims to reduce the uncertainty in a trajectory by matching the GPS points to the road network on a digital map. Most existing work has focused on estimating the likelihood of a candidate path based on the GPS observations, while neglecting to model the probability of a route choice from the perspective of drivers. Here we propose a novel feature-based map matching algorithm that estimates the cost of a candidate path based on both GPS observations and human factors. To take human factors into consideration is very important especially when dealing with low sampling rate data where most of the movement details are lost. Additionally, we simultaneously analyze a subsequence of coherent GPS points by utilizing a new segment-based probabilistic map matching strategy, which is less susceptible to the noisiness of the positioning data. We have evaluated the proposed approach on a public large-scale GPS dataset, which consists of 100 trajectories distributed all over the world. The experimental results show that our method is robust to sparse data with large sampling intervals (e.g., 60 s to 300 s) and challenging track features (e.g., u-turns and loops). Compared with two state-of-the-art map matching algorithms, our method substantially reduces the route mismatch error by 6.4% to 32.3% and obtains the best map matching results in all the different combinations of sampling rates and challenging features.
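
As a rough illustration of combining GPS observation error with simple "human factors" route-choice features, here is a hedged Python sketch. The feature set, the weights, and the candidate-path representation are assumptions for illustration, not the paper's formulation.

```python
import math

def haversine(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def candidate_cost(gps_points, path, weights=(1.0, 0.5, 0.5)):
    """Cost of one candidate path: GPS observation error plus simple
    'human factors' penalties (detour ratio and number of turns)."""
    w_obs, w_detour, w_turns = weights
    obs = sum(haversine(g, m) for g, m in zip(gps_points, path["matched"]))
    detour = path["length"] / max(path["straight_line"], 1.0)
    return w_obs * obs + w_detour * detour + w_turns * path["num_turns"]

def map_match(gps_points, candidates):
    # Pick the candidate path with the lowest combined cost.
    return min(candidates, key=lambda p: candidate_cost(gps_points, p))

# Toy example: two candidate paths for three GPS fixes.
gps = [(37.0, -122.0), (37.001, -122.0), (37.002, -122.0)]
cands = [
    {"matched": gps, "length": 222.0, "straight_line": 222.0, "num_turns": 0},
    {"matched": [(37.0, -122.001)] * 3, "length": 400.0,
     "straight_line": 222.0, "num_turns": 2},
]
print(map_match(gps, cands)["num_turns"])  # 0: the direct path wins
```
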
Publication Details
  • ENCYCLOPEDIA WITH SEMANTIC COMPUTING
  • Oct 31, 2016

Abstract

Improvements in sensor and wireless network technology enable accurate, automated, and instant determination and dissemination of a user's or object's position. Apart from the now-ubiquitous networking infrastructure, the new enabler of location-based services (LBSs) is the enrichment of systems with semantic information, such as time, location, individual capability, preference, and more. Such semantically enriched system modeling aims at developing applications with enhanced functionality and advanced reasoning capabilities. These systems are able to deliver more personalized services to users by combining domain knowledge with advanced reasoning mechanisms, and provide solutions to problems that were otherwise infeasible. This approach also takes users' preferences and place properties into consideration, which can be utilized to achieve a comprehensive range of personalized services, such as advertising, recommendations, or polling. This paper provides an overview of indoor localization technologies, popular models for extracting semantics from location data, approaches for associating semantic information and location data, and applications that may be enabled with location semantics. To make the presentation easy to understand, we will use a museum scenario to explain the pros and cons of different technologies and models. More specifically, we will first explore users' needs in a museum scenario. Based on these needs, we will then discuss advantages and disadvantages of using different localization technologies to meet them. From these discussions, we can highlight gaps between real application requirements and existing technologies, and point out promising localization research directions. By identifying gaps between various models and real application requirements, we can draw a road map for future location semantics research.
Publication Details
  • UIST 2016 (Demo)
  • Oct 16, 2016

Abstract

We propose robust pointing detection with a virtual shadow representation for interacting with a public display. Using a depth camera, we generate the user's shadow with a model using an angled virtual sun light, and detect the shadow's nearest point to the display as the pointer. The shadow's position rises as the user walks closer, which conveys the correct distance for controlling the pointer and offers access to the higher areas of the display.
Publication Details
  • ACM MM
  • Oct 15, 2016

Abstract

The proliferation of workplace multimedia collaboration applications has meant, on one hand, more opportunities for group work, but on the other, more data locked away in proprietary interfaces. We are developing new tools to capture and access multimedia content from any source. In this demo, we focus primarily on new methods that allow users to rapidly reconstitute, enhance, and share document-based information.

Second Screen Hypervideo-Based Physiotherapy Training

Publication Details
  • Multimedia for personal health and health care – MMHealth 2016 @ ACM Multimedia 2016
  • Oct 15, 2016

Abstract

Adapting to personal needs and supporting correct posture are important in physiotherapy training. In this demo, we show a dual screen application (handheld and TV) that allows patients to view hypervideo training programs. Designed to guide their daily exercises, these programs can be adapted to daily needs. The dual screen concept offers the positional flexibility missing in single screen solutions.

A Dual Screen Concept for User-Controlled Hypervideo-Based Physiotherapy Training

Publication Details
  • Multimedia for personal health and health care – MMHealth 2016 @ ACM Multimedia 2016
  • Oct 15, 2016

Abstract

Dual screen concepts for hypervideo-based physiotherapy training are important in healthcare settings, but existing applications often cannot be adapted to personal needs and do not support correct posture. In this paper, we describe the design and implementation of a dual screen application (handheld and TV) that allows patients to view hypervideos designed to help them correctly perform their exercises. This approach lets patients adapt their training to their daily needs and their overall training progress. We evaluated this prototypical implementation in a user test with post-operative care prostate cancer patients. From our results, we derived design recommendations for dual screen physical training hypervideo applications.

Hypervideo Production Using Crowdsourced Youtube Videos

Publication Details
  • ACM Multimedia 2016
  • Oct 15, 2016

Abstract

Several systems exist today for the creation of hypervideos. However, creating the video scenes that are combined into a hypervideo is a tedious and time-consuming job. At the same time, huge video databases like YouTube already provide rich sources of video material, yet legally these videos may not be downloaded and re-purposed. This calls for a solution that links whole videos or parts of videos and plays them from the platform in an embedded player. This work presents the SIVA Web Producer, a Chrome extension for the creation of hypervideos consisting of scenes from YouTube videos. After creating a project, the Chrome extension allows authors to import YouTube videos or parts thereof as video clips. These can then be linked in a scene graph. A preview is provided, and finalized videos can be published on the SIVA Web Portal.
Publication Details
  • Document Engineering DocEng 2016
  • Sep 13, 2016

Abstract

In this paper we describe DocuGram, a novel tool to capture and share documents from any application. As users scroll through pages of their document inside the native application (Word, Google Docs, web browser), the system captures and analyses in real-time the video frames and reconstitutes the original document pages into an easy to view HTML-based representation. In addition to regenerating the document pages, a DocuGram also includes the interactions users had over them, e.g. mouse motions and voice comments. A DocuGram acts as a modern copy machine, allowing users to copy and share any document from any application.
Publication Details
  • Mobile HCI 2016
  • Sep 6, 2016

Abstract

Most teleconferencing tools treat users in distributed meetings monolithically: all participants are meant to be connected to one another in more or less the same manner. In reality, though, people connect to meetings in all manner of different contexts, sometimes sitting in front of a laptop or tablet giving their full attention, but at other times mobile, involved in other tasks, or a liminal participant in a larger group meeting. In this paper we present the design and evaluation of two applications, Penny and MeetingMate, designed to help users in non-standard contexts participate in meetings.
Publication Details
  • CBRecSys: Workshop on New Trends in Content-Based Recommender Systems at ACM Recommender Systems Conference
  • Sep 2, 2016

Abstract

The abundance of data posted to Twitter enables companies to extract useful information, such as Twitter users who are dissatisfied with a product. We endeavor to determine which Twitter users are potential customers for companies and would be receptive to product recommendations through the language they use in tweets after mentioning a product of interest. With Twitter's API, we collected tweets from users who tweeted about mobile devices or cameras. An expert annotator determined whether each tweet was relevant to customer purchase behavior and whether a user, based on their tweets, eventually bought the product. For the relevance task, among four models, a feed-forward neural network yielded the best cross-validation accuracy of over 80% per product. For customer purchase prediction of a product, we observed improved performance with the use of sequential input of tweets to recurrent models, with an LSTM model being best; we also observed the use of relevance predictions in our model to be more effective with less powerful RNNs and on more difficult tasks.
Publication Details
  • Ro-Man 2016
  • Aug 26, 2016

Abstract

Two related challenges with current teleoperated robotic systems are lack of peripheral vision and awareness, and the difficulty or tedium of navigating through remote spaces. We address these challenges by providing an interface with a focus plus context (F+C) view of the robot's location, in which the user can navigate simply by looking where they want to go and clicking or drawing a path on the view to indicate the desired trajectory or destination. The F+C view provides an undistorted, perspectively correct central region surrounded by a wide field of view peripheral portion, and avoids the need for separate views. The navigation method is direct and intuitive in comparison to keyboard- or joystick-based navigation, which requires the user to be in a control loop as the robot moves. Both the F+C views and the direct click navigation were evaluated in a preliminary user study.

Abstract

Mobile Telepresence Robots (MTR) are an emerging technology that extends the functionality of telepresence systems by adding mobility. Today's MTRs, however, rely on stationary imaging systems such as a single narrow-view camera for vision, which can lead to reduced operator performance due to view-related deficiencies in situational awareness. We therefore developed an improved imaging and viewing platform that allows immersive telepresence using a Head Mounted Device (HMD) with head-tracked mono and stereoscopic video. Using a remote collaboration task to ground our research, we examine the effectiveness of head-tracked HMD systems in comparison to a baseline monitor-based system. We performed a user study where participants were divided into three groups: a fixed-camera monitor-based baseline condition (without HMD), HMD with head-tracked 2D camera, and HMD with head-tracked stereo camera. Results showed that the use of an HMD reduces task error rates and improves perceived collaborative success and quality of view, compared to the baseline condition. No major difference was found, however, between the stereo and 2D camera conditions for participants wearing an HMD.
Publication Details
  • SIGIR 2016
  • Jul 18, 2016

Abstract

Social media offers potential opportunities for businesses to extract business intelligence. This paper presents Tweetviz, an interactive tool to help businesses extract actionable information from a large set of noisy Twitter messages. Tweetviz visualizes tweet sentiment of business locations, identifies other business venues that Twitter users visit, and estimates some simple demographics of the Twitter users frequenting a business. A user study to evaluate the system's ability indicates that Tweetviz can provide an overview of a business's issues and sentiment as well as information aiding users in creating customer profiles.

Pre-fetching Strategies for HTML5 Hypervideo Players

Publication Details
  • Hypertext 2016
  • Jul 12, 2016

Abstract

Web videos are becoming more and more popular. Current web technologies make it simpler than ever to both stream videos and create complex constructs of interlinked videos with additional information (video, audio, images, and text); so-called hypervideos. When viewers interact with hypervideos by clicking on links, new content has to be loaded. This may lead to excessive waiting times, interrupting the presentation -- especially when videos are loaded into the hypervideo player. In this work, we propose hypervideo pre-fetching strategies, which can be implemented in players to minimize waiting times. We examine the possibilities offered by the HTML5 video element for implementing such strategies in web-based players.
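
As one concrete (and purely illustrative) example of such a strategy, the sketch below greedily buffers scenes reachable from the current scene, nearest links first, until a bandwidth budget is exhausted. The scene representation and budget model are assumptions, not the strategies evaluated in the paper.

```python
from collections import deque

def plan_prefetch(current_scene, scenes, bandwidth_budget):
    """Greedy breadth-first pre-fetch plan: buffer link targets closest
    to the current scene first, until the byte budget is spent."""
    plan, spent = [], 0
    queue = deque(scenes[current_scene]["links"])
    seen = {current_scene}
    while queue:
        nxt = queue.popleft()
        if nxt in seen:
            continue
        seen.add(nxt)
        size = scenes[nxt]["size_bytes"]
        if spent + size > bandwidth_budget:
            break
        plan.append(nxt)
        spent += size
        queue.extend(scenes[nxt]["links"])
    return plan

# Example: prefetch from scene "intro" with a 50 MB budget.
scenes = {
    "intro": {"links": ["a", "b"], "size_bytes": 0},
    "a":     {"links": ["c"], "size_bytes": 20_000_000},
    "b":     {"links": [], "size_bytes": 25_000_000},
    "c":     {"links": [], "size_bytes": 30_000_000},
}
print(plan_prefetch("intro", scenes, 50_000_000))  # ['a', 'b']
```
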
Publication Details
  • 3rd IEEE International Workshop on Mobile Multimedia Computing (MMC)
  • Jul 11, 2016

Abstract

Mobile Audio Commander (MAC) is a mobile phone-based multimedia sensing system that facilitates the introduction of extra sensors to existing mobile robots for advanced capabilities. In this paper, we use MAC to introduce an accurate indoor positioning sensor to a robot to facilitate its indoor navigation. More specifically, we use a projector to send out a position ID through a light signal, use a light sensor and the audio channel on a mobile phone to decode the position ID, and send navigation commands to a target robot through audio output. With this setup, our system can simplify robot navigation for users. Users can define a robot navigation path on a phone, and our system will compare the navigation path with its accurate location sensor inputs and generate an analog line-following signal, a collision-avoidance signal, and an analog angular signal to adjust the robot's straight movements and turns. This paper describes two examples of using MAC and a positioning system to enable complicated robot navigation with proper user interface design, external circuit design, and real sensor installations on existing robots.
Publication Details
  • ICME 2016
  • Jul 11, 2016

Abstract

Captions are a central component of image posts, communicating the background story behind photos. Captions can enhance engagement with audiences and are therefore critical to campaigns and advertisement. Previous studies in image captioning either rely solely on image content or summarize multiple web documents related to an image's location; both neglect users' activities. We propose business-aware latent topics as a new contextual cue for image captioning that represents user activities. The idea is to learn the typical activities of people who posted images from business venues with similar categories (e.g., fast food restaurants) to provide appropriate context for similar topics (e.g., burger) in new posts. User activities are modeled via a latent topic representation. In turn, the image captioning model can generate sentences that better reflect user activities at business venues. In our experiments, the business-aware latent topics are more effective than the existing baselines at adapting captions to images captured at various businesses. Moreover, they complement other contextual cues (image, time) in a multi-modal framework.

Abstract

We previously created the HyperMeeting system to support a chain of geographically and temporally distributed meetings in the form of a hypervideo. This paper focuses on playback plans that guide users through the recorded meeting content by automatically following available hyperlinks. Our system generates playback plans based on users' interests or prior meeting attendance and presents a dialog that lets users select the most appropriate plan. Prior experience with playback plans revealed users' confusion with automatic link following within a sequence of meetings. To address this issue, we designed three timeline visualizations of playback plans. A user study comparing the timeline designs indicated that different visualizations are preferred for different tasks, making switching among them important. The study also provided insights that will guide research of personalized hypervideo, both inside and outside a meeting context.
Publication Details
  • Springer Multimedia Tools and Applications (Special Issue)
  • Jul 1, 2016

Abstract

It is difficult to adjust the content of traditional slide presentations to the knowledge level, interest and role of individuals. This might force presenters to include content that is irrelevant for part of the audience, which negatively affects the knowledge transfer of the presentation. In this work, we present a prototype that is able to eliminate non-pertinent information from slides by presenting annotations for individual attendees on optical head-mounted displays. We first create guidelines for creating optimal annotations by evaluating several types of annotations alongside different types of slides. Then we evaluate the knowledge acquisition of presentation attendees using the prototype versus traditional presentations. Our results show that annotations with a limited amount of information, such as text up to 5 words, can significantly increase the amount of knowledge gained from attending a group presentation. Additionally, presentations where part of the information is moved to annotations are judged more positively on attributes such as clarity and enjoyment.

4th International Workshop on Interactive Content Consumption (WSICC'16)

Publication Details
  • ACM TVX 2016
  • Jun 22, 2016

Abstract

WSICC has established itself as a truly interactive workshop, with three successful editions at EuroITV'13, TVX'14, and TVX'15. The fourth edition of the WSICC workshop aims to bring together researchers and practitioners working on novel approaches for interactive multimedia content consumption. New technologies, devices, media formats, and consumption paradigms are emerging that allow for new types of interactivity. Examples include multi-panoramic video and object-based audio, increasingly available in live scenarios with content feeds from a multitude of sources. All these recent advances have an impact on different aspects related to interactive content consumption, which the workshop categorizes into Enabling Technologies, Content, User Experience, and User Interaction. The resources from past editions of the workshop are available on the http://wsicc.net website.

Speech Control for HTML5 Hypervideo Players

Publication Details
  • WSICC Workshop at TVX
  • Jun 22, 2016

Abstract

Hypervideo usage scenarios like physiotherapy training or instructions for manual tasks make it hard for users to operate an input device like a mouse or a touch screen on a hand-held device while they are performing an exercise or using both hands to perform a manual task. In this work, we try to overcome this issue by providing an alternative input method for hypervideo navigation using speech commands. In a user test, we evaluated two different speech recognition libraries, annyang (in combination with the Web Speech API) and PocketSphinx.js (in combination with the Web Audio API), for their usability to control hypervideo players. Test users spoke 18 words, either in German or English, which were recorded and then processed by both libraries. We found that annyang shows better recognition results. However, depending on other factors, like the occurrence of background noise (reliability), the availability of an internet connection, or the browser used, PocketSphinx.js may be a better fit.

From Single Screen to Dual Screen - a Design Study for a User-Controlled Hypervideo-Based Physiotherapy Training

Publication Details
  • WSICC Workshop at TVX
  • Jun 22, 2016

Abstract

Hypervideo-based physiotherapy trainings offer an opportunity to support patients in continuing their training after being released from a rehabilitation clinic. Many exercises require the patient to sit on the floor or a gymnastic ball, lie on a gymnastics mat, or assume other postures. Using a laptop or tablet with a stand to show the exercises is more helpful than, for example, just having some drawings on a leaflet. However, it may lead to incorrect execution of the exercises while maintaining eye contact with the screen, or require the user to get up and select the next exercise if the device is positioned for a better view. A dual screen application, where contents are shown on a TV screen and the flow of the video can be controlled from a mobile second device, allows patients to keep their correct posture and at the same time view and select contents. In this paper we propose initial user interface designs for such apps. Initial paper prototypes are discussed and refined in two focus groups. The results are then presented to a broader range of users in a survey. Three prototypes for the mobile app and one prototype for the TV are identified for future user tests.

Screen Concepts for Multi-Version Hypervideo Authoring

Publication Details
  • WSICC Workshop at TVX
  • Jun 22, 2016

Abstract

The creation of hypervideos usually requires a lot of planning and is time-consuming with respect to media content creation. However, once structure and media are put together to author a hypervideo, only minor changes may be required to make the hypervideo available in other languages or for another user group (like beginners versus experts). To make the translation of media and all navigation elements of a hypervideo efficient and manageable, the authoring tool needs a GUI that provides a good overview of elements that can be translated and of missing translations. In this work, we propose screen concepts that help authors provide different versions (for example, language and/or experience level) of a hypervideo. We analyzed different variants of GUI elements and evaluated them in a survey. From the results we draw guidelines that can help with the creation of similar systems in the future.
Publication Details
  • International Workshop on Interactive Content Consumption
  • Jun 22, 2016

Abstract

The confluence of technologies such as telepresence, immersive imaging, model-based virtual mirror worlds, mobile live streaming, etc. gives rise to a capability for people anywhere to view and connect with present or past events nearly anywhere on earth. This capability properly belongs to a public commons, available as a birthright of all humans, and can be seen as part of an evolutionary transition supporting a global collective mind. We describe examples and elements of this capability, and suggest how they can be better integrated through a tool we call TeleViewer and a framework called WorldViews, which supports easy sharing of views as well as connecting of providers and consumers of views all around the world.

Abstract

Most current mobile and wearable devices are equipped with inertial measurement units (IMU) that allow the detection of motion gestures, which can be used for interactive applications. A difficult problem to solve, however, is how to separate ambient motion from an actual motion gesture input. In this work, we explore the use of motion gesture data labeled with gesture execution phases for training supervised learning classifiers for gesture segmentation. We believe that using gesture execution phase data can significantly improve the accuracy of gesture segmentation algorithms. We define gesture execution phases as the start, middle, and end of each gesture. Since labeling motion gesture data with gesture execution phase information is work intensive, we used crowd workers to perform the labeling. Using this labeled data set, we trained SVM-based classifiers to segment motion gestures from ambient movement of the device. We describe initial results that indicate that gesture execution phase can be accurately recognized by SVM classifiers. Our main results show that training gesture segmentation classifiers with phase-labeled data substantially increases the accuracy of gesture segmentation: we achieved a gesture segmentation accuracy of 0.89 for simulated online segmentation using a sliding window approach.
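
A minimal sketch of the sliding-window segmentation setup appears below, with assumed window length, features, and phase encoding; the synthetic data only demonstrates the mechanics, not the crowd-labeled corpus used in the study.

```python
import numpy as np
from sklearn.svm import SVC

WINDOW = 25  # samples per sliding window (assumed, roughly 0.25 s of IMU data)

def window_features(accel):
    """Simple per-window statistics over a (WINDOW, 3) accelerometer block."""
    return np.concatenate([accel.mean(axis=0), accel.std(axis=0),
                           np.abs(np.diff(accel, axis=0)).mean(axis=0)])

def make_dataset(stream, phase_labels):
    """Slide a window over the IMU stream; label each window with the
    phase (e.g. 0 = ambient, 1 = start, 2 = middle, 3 = end) of its
    last sample, following the phase-labeling idea in the paper."""
    X, y = [], []
    for i in range(len(stream) - WINDOW):
        X.append(window_features(stream[i:i + WINDOW]))
        y.append(phase_labels[i + WINDOW - 1])
    return np.array(X), np.array(y)

# Toy demo on synthetic data; real data would come from crowd-labeled gestures.
rng = np.random.default_rng(0)
stream = rng.normal(size=(500, 3))
labels = np.zeros(500, dtype=int)
labels[200:260] = 2  # pretend a gesture 'middle' phase occurs here
X, y = make_dataset(stream, labels)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```
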
Publication Details
  • Information Processing & Management
  • Jun 11, 2016

Abstract

Search log analysis has become a common practice to gain insights into user search behaviour; it helps us understand user needs and preferences, as well as how well a system supports such needs. Currently, log analysis is typically focused on low-level user actions, i.e. logged events such as issued queries and clicked results, and often only a selection of such events is logged and analysed. However, the types of logged events may differ widely from interface to interface, making comparison between systems difficult. Further, analysing a selection of events may lead to conclusions out of context, e.g. the statistics of observed query reformulations may be influenced by the existence of a relevance feedback component. Alternatively, in lab studies user activities can be analysed at a higher level, such as search tactics and strategies, abstracted away from detailed interface implementation. However, the required manual coding that maps logged events to higher-level interpretations prevents this type of analysis from going large scale. In this paper, we propose a new method for analysing search logs by (semi-)automatically identifying user search tactics from logged events, allowing large-scale analysis that is comparable across search systems. We validate the efficiency and effectiveness of the proposed tactic identification method using logs of two reference search systems of different natures: a product search system and a video search system. With the identified tactics, we perform a series of novel log analyses in terms of the entropy rate of user search tactic sequences, demonstrating how this type of analysis allows comparisons of user search behaviours across systems of different nature and design. This analysis provides insights not achievable with traditional log analysis.
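
For readers unfamiliar with entropy-rate analysis of tactic sequences, the following sketch computes a first-order (bigram) estimate; this common estimator is an illustrative stand-in, not necessarily the paper's exact formulation.

```python
import math
from collections import Counter

def entropy_rate(tactics):
    """First-order estimate of the entropy rate of a tactic sequence:
    H(X_t | X_{t-1}) in bits, from bigram and unigram counts."""
    bigrams = Counter(zip(tactics, tactics[1:]))
    unigrams = Counter(tactics[:-1])
    n = sum(bigrams.values())
    h = 0.0
    for (a, b), c in bigrams.items():
        p_ab = c / n                   # joint P(a, b)
        p_b_given_a = c / unigrams[a]  # conditional P(b | a)
        h -= p_ab * math.log2(p_b_given_a)
    return h

# e.g. a session alternating between querying and browsing results
print(entropy_rate(["query", "browse", "browse", "query", "click", "query"]))
```
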
Publication Details
  • ACM International Conference on Multimedia Retrieval (ICMR)
  • Jun 6, 2016

Abstract

We propose a method for extractive summarization of audiovisual recordings focusing on topic-level segments. We first build a content similarity graph between all segments of all documents in the collection, using word vectors from the transcripts, and then select the most central segments for the summaries. We evaluate the method quantitatively on the AMI Meeting Corpus using gold standard reference summaries and the Rouge metric, and qualitatively on lecture recordings using a novel two-tiered approach with human judges. The results show that our method compares favorably with others in terms of Rouge, and outperforms the baselines for human scores, thus also validating our evaluation protocol.
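
The segment-selection step can be sketched compactly: build a cosine-similarity graph over segment vectors (e.g. averaged word vectors of transcripts) and keep the most central segments. The degree-centrality scoring below is an illustrative simplification.

```python
import numpy as np

def summarize(segment_vectors, k=3):
    """Pick the k most central segments: build a cosine-similarity graph
    over segment vectors and score each segment by its summed similarity
    to all others (a simple degree-centrality measure)."""
    V = np.asarray(segment_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    sim = V @ V.T                # cosine-similarity graph
    np.fill_diagonal(sim, 0.0)   # ignore self-similarity
    centrality = sim.sum(axis=1)
    return np.argsort(-centrality)[:k]  # indices of selected segments

vectors = np.random.default_rng(1).normal(size=(10, 50))
print(summarize(vectors, k=3))
```
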
Publication Details
  • LREC 2016
  • May 23, 2016

Abstract

Many people post about their daily life on social media. These posts may include information about people's purchase activity, and insights useful to companies can be derived from them, e.g. profile information of a user who mentioned something about their product. As a further advanced analysis, we consider extracting users who are likely to buy a product from the set of users who mentioned that the product is attractive. In this paper, we report our methodology for building a corpus for Twitter user purchase behavior prediction. First, we collected Twitter users who posted a want phrase plus a product name, e.g. "want a Xperia", as candidate want users, and likewise collected candidate bought users. Then, we asked an annotator to judge whether a candidate user actually bought the product. We also annotated whether tweets randomly sampled from want/bought user timelines are relevant to a purchase. In this annotation, 58% of want user tweets and 35% of bought user tweets were annotated as relevant. Our data indicate that information embedded in timeline tweets can be used to predict purchase behavior for the tweeted products.

Abstract

The negative effect of lapses during a behavior-change program has been shown to increase the risk of repeated lapses and, ultimately, program abandonment. In this paper, we examine the potential of system-driven lapse management -- supporting users through lapses as part of a behavior-change tool. We first review lessons from domains such as dieting and addiction research and discuss the design space of lapse management. We then explore the value of one approach to lapse management -- the use of "cheat points" as a way to encourage sustained participation. In an online study, we first examine interpretations of progress that was reached through using cheat points. We then present findings from a deployment of lapse management in a two-week field study with 30 participants. Our results demonstrate the potential of this approach to motivate and change users' behavior. We discuss important open questions for the design of future technology-mediated behavior change programs.

Abstract

Taking breaks from work is an essential and universal practice. In this paper, we extend current research on productivity in the workplace to consider the break habits of knowledge workers and explore opportunities of break logging for personal informatics. We report on three studies. Through a survey of 147 U.S.-based knowledge workers, we investigate what activities respondents consider to be breaks from work, and offer an understanding of the benefit workers desire when they take breaks. We then present results from a two-week in-situ diary study with 28 participants in the U.S. who logged 800 breaks, offering insights into the effect of work breaks on productivity. We finally explore the space of information visualization of work breaks and productivity in a third study. We conclude with a discussion of implications for break recommendation systems, availability and interruptibility research, and the quantified workplace.
Publication Details
  • CHI 2016 (Late Breaking Work)
  • May 7, 2016

Abstract

We describe a novel thermal haptic output device, ThermoTouch, that provides a grid of thermal pixels. Unlike previous devices which mainly use Peltier elements for thermal output, ThermoTouch uses liquid cooling and electro-resistive heating to output thermal feedback at arbitrary grid locations. We describe the design of the prototype, highlight advantages and disadvantages of the technique and briefly discuss future improvements and research applications.
Publication Details
  • IEEE Multimedia Magazine
  • May 2, 2016

Abstract

Silicon Valley is home to many of the world’s largest technology corporations, as well as thousands of small startups. Despite the development of other high-tech economic centers throughout the US and around the world, Silicon Valley continues to be a leading hub for high-tech innovation and development, in part because most of its companies and universities are within 20 miles of each other. Given the high concentration of multimedia researchers in Silicon Valley, and the high demand for information exchange, I was able to work with a team of researchers from various companies and organizations to start the Bay Area Multimedia Forum (BAMMF) series back in November 2013.
Publication Details
  • Multimedia Systems Journal
  • Apr 12, 2016

Abstract

With modern technologies, it is possible to create annotated interactive non-linear videos (a form of hypervideo) for the Web. These videos have a non-linear structure of linked scenes to which additional information (other media like images, text, audio, or additional videos) can be added. A variety of user interactions - like in- and between-scene navigation or zooming into additional information - are possible in players for this type of video. Like linear video, quality of experience (QoE) with annotated hypervideo experiences is tied to the temporal consistency of the video stream at the client end - its flow. Despite its interactive complexity, users expect this type of video experience to flow as seamlessly as simple linear video. However, the added hypermedia elements bog playback engines down. Download and cache management systems address the flow issue, but their effectiveness is tied to numerous questions respecting user requirements, computational strategy, and evaluative metrics. In this work, we a) define QoE metrics, b) examine structural and behavioral patterns of interactive annotated non-linear video, c) propose download and cache management algorithms and strategies, d) describe the implementation of an evaluative simulation framework, and e) present the algorithm test results.

Social Media-Based Profiling of Business Locations

Publication Details
  • Fuji Xerox Technical Report
  • Mar 17, 2016

Abstract

We present a method for profiling businesses at specific locations that is based on mining information from social media. The method matches geo-tagged tweets from Twitter against venues from Foursquare to identify the specific business mentioned in a tweet. By linking geo-coordinates to places, the tweets associated with a business, such as a store, can then be used to profile that business. From these venue-located tweets, we create sentiment profiles for each of the stores in a chain. We present the results as heat maps showing how sentiment differs across stores in the same chain and how some chains have more positive sentiment than other chains. We also estimate social group size from photos and create profiles of social group size for businesses. Sample heat maps of these results illustrate how the average social group size can vary across businesses.
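
A hedged sketch of the tweet-to-venue matching step: assign each geo-tagged tweet to the nearest venue within a fixed radius and average per-tweet sentiment per store. The 50 m radius and the precomputed sentiment field are illustrative assumptions.

```python
import math
from collections import defaultdict

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def profile_stores(tweets, venues, radius_m=50):
    """Average a per-tweet sentiment score per store, assigning each
    geo-tagged tweet to the nearest venue within radius_m."""
    scores = defaultdict(list)
    for t in tweets:
        nearest = min(venues, key=lambda v: haversine_m(t["lat"], t["lon"],
                                                        v["lat"], v["lon"]))
        if haversine_m(t["lat"], t["lon"],
                       nearest["lat"], nearest["lon"]) <= radius_m:
            scores[nearest["name"]].append(t["sentiment"])
    return {name: sum(s) / len(s) for name, s in scores.items()}

tweets = [{"lat": 37.7750, "lon": -122.4194, "sentiment": 0.8},
          {"lat": 37.7751, "lon": -122.4193, "sentiment": -0.2}]
venues = [{"name": "Store A", "lat": 37.7750, "lon": -122.4194},
          {"name": "Store B", "lat": 37.8000, "lon": -122.4000}]
print(profile_stores(tweets, venues))  # {'Store A': 0.3...}
```
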
Publication Details
  • IUI 2016
  • Mar 7, 2016

Abstract

We describe methods for analyzing and visualizing document metadata to provide insights about collaborations over time. We investigate the use of Latent Dirichlet Allocation (LDA) based topic modeling to compute areas of interest on which people collaborate. The topics are represented in a node-link force directed graph by persistent fixed nodes laid out with multidimensional scaling (MDS), and the people by transient movable nodes. The topics are also analyzed to detect bursts to highlight "hot" topics during a time interval. As the user manipulates a time interval slider, the people nodes and links are dynamically updated. We evaluate the results of LDA topic modeling for the visualization by comparing topic keywords against the submitted keywords from the InfoVis 2004 Contest, and we found that the additional terms provided by LDA-based keyword sets result in improved similarity between a topic keyword set and the documents in a corpus. We extended the InfoVis dataset from 8 to 20 years and collected publication metadata from our lab over a period of 21 years, and created interactive visualizations for exploring these larger datasets.
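
The topic-layout pipeline can be sketched with standard tooling: LDA over the document text, then MDS over inter-topic distances to produce fixed positions for the persistent topic nodes. The toy corpus, topic count, and Euclidean distance between topic-word distributions are illustrative choices.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import MDS

# Toy corpus of titles/abstracts; real input would be the publication
# metadata described above.
docs = ["graph drawing force directed layout",
        "topic model document clustering",
        "interactive visualization of time series",
        "force layout of dynamic graphs"]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Lay out topics with MDS over distances between topic-word distributions.
topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
dist = np.linalg.norm(topics[:, None, :] - topics[None, :, :], axis=2)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords)  # fixed 2-D positions for the persistent topic nodes
```
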

Abstract

The use of videoconferencing in the workplace has been steadily growing. While multitasking during video conferencing is often necessary, it is also viewed as impolite and sometimes unacceptable. One potential contributor to negative attitudes towards such multitasking is the disrupted sense of eye contact that occurs when an individual shifts their gaze away to another screen, for example, in a dual-monitor setup, common in office settings. We present a system to improve a sense of eye contact over videoconferencing in dual-monitor setups. Our system uses computer vision and desktop activity detection to dynamically choose the camera with the best view of a user's face. We describe two alternative implementations of our system (RGB-only, and a combination of RGB and RGB-D cameras). We then describe results from an online experiment that shows the potential of our approach to significantly improve perceptions of a person's politeness and engagement in the meeting.
Publication Details
  • Proceedings of CSCW 2016
  • Feb 27, 2016

Abstract

This paper presents a detailed examination of factors that affect perceptions of, and attitudes towards multitasking in dyadic video conferencing. We first report findings from interviews with 15 professional users of videoconferencing. We then report results from a controlled online experiment with 397 participants based in the United States. Our results show that the technology used for multitasking has a significant effect on others' assumptions of what secondary activity the multitasker is likely engaged in, and that this assumed activity in turn affects evaluations of politeness and appropriateness. We also describe how different layouts of the video conferencing UI may lead to better or worse ratings of engagement and in turn ratings of polite or impolite behavior. We then propose a model that captures our results and use the model to discuss implications for behavior and for the design of video communication tools.
Publication Details
  • CSCW 2016
  • Feb 27, 2016

Abstract

We present MixMeetWear, a smartwatch application that allows users to maintain awareness of the audio and visual content of a meeting while completing other tasks. Users of the system can listen to the audio of a meeting and also view, zoom, and pan webcam and shared content keyframes of other meeting participants' live streams in real time. Users can also provide input to the meeting via speech-to-text or predefined responses. A study showed that the system is useful for peripheral awareness of some meetings.
Publication Details
  • CSCW 2016
  • Feb 26, 2016

Abstract

Remote meetings are messy. There are an ever-increasing number of support tools available, and, as past work has shown, people will tend to select a subset of those tools to satisfy their own institutional, social, and personal needs. While video tools make it relatively easy to have conversations at a distance, they are less adapted to sharing and archiving multimedia content. In this paper we take a deeper look at how sharing multimedia content occurs before, during, and after distributed meetings. Our findings shed light on the decisions and rationales people use to select from the vast set of tools available to them to prepare for, conduct, and reconcile the results of a remote meeting.
Publication Details
  • Personal and Ubiquitous Computing (Springer)
  • Feb 19, 2016

Abstract

In recent years, there has been an explosion of services that leverage location to provide users novel and engaging experiences. However, many applications fail to realize their full potential because of limitations in current location technologies. Current frameworks work well outdoors but fare poorly indoors. In this paper we present LoCo, a new framework that can provide highly accurate room-level indoor location. LoCo does not require users to carry specialized location hardware—it uses radios that are present in most contemporary devices and, combined with a boosting classification technique, provides a significant runtime performance improvement. We provide experiments that show the combined radio technique can achieve accuracy that improves on current state-of-the-art Wi-Fi only techniques. LoCo is designed to be easily deployed within an environment and readily leveraged by application developers. We believe LoCo’s high accuracy and accessibility can drive a new wave of location-driven applications and services.
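
The abstract does not spell out the boosting setup, so the following is only a plausible reading of "radios plus a boosting classification technique": an AdaBoost classifier over received-signal-strength feature vectors predicting room labels, demonstrated on synthetic placeholder data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Each sample: received signal strength (RSSI, dBm) from a fixed set of
# radios; each label: a room. All values below are synthetic placeholders.
rng = np.random.default_rng(42)
rooms = ["kitchen", "lab", "lobby"]
X = np.vstack([rng.normal(loc=center, scale=4.0, size=(40, 5))
               for center in (-45, -60, -75)])   # 5 radios, 3 room profiles
y = np.repeat(rooms, 40)

# A boosting classifier over the RSSI feature vectors predicts the room.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(rng.normal(loc=-60, scale=4.0, size=(1, 5))))
```
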
Publication Details
  • AAAI
  • Feb 12, 2016

Abstract

Image localization is important for marketing and recommendation of local businesses; however, the level of granularity is still a critical issue. Given a consumer photo and its rough GPS information, we are interested in extracting the fine-grained location information (i.e. business venues) of the image. To this end, we propose a novel framework for business venue recognition. The framework mainly contains three parts. First, business-aware visual concept discovery: we mine a set of concepts that are useful for business venue recognition based on three guidelines: business-awareness, visual detectability, and discriminative power. Second, business-aware concept detection by convolutional neural networks (BA-CNN): we propose a new network architecture that can extract semantic concept features from an input image. Third, multimodal business venue recognition: we extend visually detected concepts to multimodal feature representations that allow a test image to be associated with business reviews and images from social media for business venue recognition. The experimental results show the visual concepts detected by BA-CNN can achieve up to 22.5% relative improvement for business venue recognition compared to state-of-the-art convolutional neural network features. Experiments also show that by leveraging multimodal information from social media we can further boost the performance, especially in the case when the database images belonging to each business venue are scarce.
Publication Details
  • MMM 2016
  • Jan 4, 2016

Abstract

Hypervideos pose navigation challenges due to their underlying graph structure. Especially when used on tablets or by older people, a lack of clarity may lead to confusion and rejection of this type of medium. To avoid confusion, the hypervideo can be extended with a well-known table of contents, which, owing to the underlying graph structure, needs to be created separately by the authors. In this work, we present an extended presentation of a table of contents for hypervideos on mobile devices. The design was tested in a real-world medical training scenario with people older than 45, the main target group of these applications. This user group is a particular challenge since its members sometimes have limited experience with mobile devices and increasing physical impairments with age. Our user interface was designed in three steps. The findings of an expert group and a survey were used to create two different prototypical versions of the display, which were then tested against each other in a user test. This test revealed that a divided view is desired: the table of contents in an easy-to-touch version should be on the left side, and previews of scenes on the right side of the view. These findings were implemented in the existing SIVA HTML5 open source player and tested with a second group of users. This test led only to minor changes in the GUI.
2015
Publication Details
  • ISM 2015
  • Dec 14, 2015

Abstract

Indoor localization is challenging in terms of both accuracy and possible usage scenarios. In this paper, we introduce the design and implementation of a toy car localization and navigation system, which demonstrates that a projected-light-based localization technique allows multiple devices to know and exchange their fine-grained location information in an indoor environment. The projected light consists of a sequence of gray code images which assigns each pixel in the projection area a unique gray code to distinguish its coordinates. The light sensors installed on the toy car and the potential “passenger” receive the projected light stream, from which their locations are computed. The toy car then uses the A* algorithm to plan a route based on its own location and orientation, the target’s location, and the map of available “roads”. The high speed of localization enables the toy car to adjust its orientation while “driving” and keep itself on the “roads”. The toy car system demonstrates that the localization technique can power other applications that require fine-grained location information for multiple objects simultaneously.
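
Two ingredients named above are standard and easy to sketch: decoding a reflected Gray code (as read off the projected image sequence by a light sensor) into a plain binary coordinate, and A* route planning over a grid of "roads". The grid layout and 4-connectivity are illustrative.

```python
import heapq

def gray_to_binary(g):
    """Decode a reflected-Gray-coded integer back to a binary coordinate."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def a_star(grid, start, goal):
    """Plain A* on a 4-connected occupancy grid (1 = road, 0 = blocked)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier, came, cost = [(h(start), start)], {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]]
                    and cost[cur] + 1 < cost.get(nxt, 1e9)):
                cost[nxt] = cost[cur] + 1
                came[nxt] = cur
                heapq.heappush(frontier, (cost[nxt] + h(nxt), nxt))
    return None

print(gray_to_binary(0b110))  # Gray 110 -> binary 100 (4)
print(a_star([[1, 1, 1], [0, 0, 1], [1, 1, 1]], (0, 0), (2, 0)))
```
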
Publication Details
  • MM Commons Workshop co-located with ACM Multimedia 2015.
  • Oct 30, 2015

Abstract

In this paper, we analyze the association between a social media user's photo content and their interests. Visual content of photos is analyzed using state-of-the-art deep learning based automatic concept recognition. An aggregate visual concept signature is thereby computed for each user. User tags manually applied to their photos are also used to construct a tf-idf based signature per user. We also obtain social groups that users join to represent their social interests. In an effort to compare the visual-based versus tag-based user profiles with social interests, we compare corresponding similarity matrices with a reference similarity matrix based on users' group memberships. A random baseline is also included that groups users by random sampling while preserving the actual group sizes. A difference metric is proposed and it is shown that the combination of visual and text features better approximates the group-based similarity matrix than either modality individually. We also validate the visual analysis against the reference inter-user similarity using the Spearman rank correlation coefficient. Finally we cluster users by their visual signatures and rank clusters using a cluster uniqueness criteria.
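
A compact sketch of the comparison described above: fuse visual- and tag-based user similarity matrices and measure rank agreement with the group-membership reference using Spearman correlation over the upper triangle. The equal-weight fusion and the random data are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def similarity_matrix(signatures):
    """Cosine similarity between per-user signature vectors."""
    S = np.asarray(signatures, dtype=float)
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    return S @ S.T

def agreement_with_groups(visual_sigs, tag_sigs, group_sim, alpha=0.5):
    """Spearman rank agreement between a fused visual+tag similarity
    matrix and the group-membership reference, over the upper triangle."""
    fused = (alpha * similarity_matrix(visual_sigs)
             + (1 - alpha) * similarity_matrix(tag_sigs))
    iu = np.triu_indices_from(fused, k=1)
    return spearmanr(fused[iu], np.asarray(group_sim)[iu]).correlation

rng = np.random.default_rng(3)
v, t = rng.random((20, 30)), rng.random((20, 40))   # visual / tag signatures
g = similarity_matrix(rng.random((20, 10)))         # stand-in group reference
print(agreement_with_groups(v, t, g))
```
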

Inferring Crowd-Sourced Venues for Tweets

Publication Details
  • IEEE BigData 2015
  • Oct 29, 2015

Abstract

Knowing the geo-located venue of a tweet can facilitate better understanding of a user's geographic context, allowing apps to more precisely present information, recommend services, and target advertisements. However, due to privacy concerns, few users choose to enable geotagging of their tweets, resulting in a small percentage of tweets being geotagged; furthermore, even if the geo-coordinates are available, the closest venue to the geo-location may be incorrect. In this paper, we present a method for providing a ranked list of geo-located venues for a non-geotagged tweet, which simultaneously indicates the venue name and the geo-location at a very fine-grained granularity. In our proposed method for Venue Inference for Tweets (VIT), we construct a heterogeneous social network in order to analyze the embedded social relations, and leverage available but limited geographic data to estimate the geo-located venue of tweets. A single classifier is trained to predict the probability of a tweet and a geo-located venue being linked, rather than training a separate model for each venue. We examine the performance of four types of social relation features and three types of geographic features embedded in a social network when predicting whether a tweet and a venue are linked, with a best accuracy of over 88%. We use the classifier probability estimates to rank the predicted geo-located venues of a non-geotagged tweet from over 19k possibilities, and observed an average top-5 accuracy of 29%.
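
The single-classifier ranking idea can be sketched as follows: one shared model scores every (tweet, venue) pair, and venues are ranked by predicted link probability. The logistic-regression model and naive feature concatenation are stand-ins for the paper's social-relation and geographic features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_venues(clf, tweet_features, venue_features, venue_names, top_k=5):
    """Score every (tweet, venue) pair with one shared classifier and
    return the top-k venues by predicted link probability."""
    pairs = np.hstack([np.tile(tweet_features, (len(venue_features), 1)),
                       venue_features])
    probs = clf.predict_proba(pairs)[:, 1]
    order = np.argsort(-probs)[:top_k]
    return [(venue_names[i], float(probs[i])) for i in order]

# Toy training set of linked vs. unlinked (tweet, venue) feature pairs.
rng = np.random.default_rng(7)
X = rng.random((200, 8))                      # 4 tweet dims + 4 venue dims
y = (X[:, 0] + X[:, 4] > 1.0).astype(int)     # synthetic link labels
clf = LogisticRegression().fit(X, y)
venues = rng.random((50, 4))
print(rank_venues(clf, rng.random(4), venues,
                  [f"venue_{i}" for i in range(50)]))
```
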
Publication Details
  • ACM MM
  • Oct 26, 2015

Abstract

Establishing common ground is one of the key problems for any form of communication. The problem is particularly pronounced in remote meetings, in which participants can easily lose track of the details of dialogue for any number of reasons. In this demo we present a web-based tool, MixMeet, that allows teleconferencing participants to search the contents of live meetings so they can rapidly retrieve previously shared content to get on the same page, correct a misunderstanding, or discuss a new idea.

Abstract

New technology comes about in a number of different ways. It may come from advances in scientific research, through new combinations of existing technology, or simply from imagining what might be possible in the future. This video describes the evolution of Tabletop Telepresence, a system for remote collaboration through desktop videoconferencing combined with a digital desk. Tabletop Telepresence provides a means to share paper documents between remote desktops, interact with documents and request services (such as translation), and communicate with a remote person through a teleconference. It was made possible by combining advances in camera/projector technology that enable a fully functional digital desk, embodied telepresence in video conferencing, and concept art that imagines future workstyles.
Publication Details
  • ACM Multimedia
  • Oct 18, 2015

Abstract

While synchronous meetings are an important part of collaboration, it is not always possible for all stakeholders to meet at the same time. We created the concept of hypermeetings to support meetings with asynchronous attendance. Hypermeetings consist of a chain of video-recorded meetings with hyperlinks for navigating through the video content. HyperMeeting supports the synchronized viewing of prior meetings during a videoconference. Natural viewing behavior such as pausing generates hyperlinks between the previously recorded meetings and the current video recording. During playback, automatic link-following guided by playback plans present the relevant content to users. Playback plans take into account the user's meeting attendance and viewing history and match them with features such as speaker segmentation. A user study showed that participants found hyperlinks useful but did not always understand where they would take them. The study results provide a good basis for future system improvements.
Publication Details
  • International Journal of Semantic Computing
  • Sep 15, 2015

Abstract

A localization system is a coordinate system for describing the world, organizing the world, and controlling the world. Without a coordinate system, we cannot specify the world in mathematical forms; we cannot regulate processes that may involve spatial collisions; we cannot even automate a robot for physical actions. This paper provides an overview of indoor localization technologies, popular models for extracting semantics from location data, approaches for associating semantic information and location data, and applications that may be enabled with location semantics. To make the presentation easy to understand, we will use a museum scenario to explain pros and cons of different technologies and models. More specifically, we will first explore users' needs in a museum scenario. Based on these needs, we will then discuss advantages and disadvantages of using different localization technologies to meet these needs. From these discussions, we can highlight gaps between real application requirements and existing technologies, and point out promising localization research directions. Similarly, we will also discuss context information required by different applications and explore models and ontologies for connecting users, objects, and environment factors with semantics. By identifying gaps between various models and real application requirements, we can draw a road map for future location semantics research.
Publication Details
  • International Symposium on Wearable Computers (ISWC)
  • Sep 8, 2015

Abstract

To facilitate distributed communication in mobile settings, we developed a system for creating and sharing gaze annotations using head mounted displays, such as Google Glass. Gaze annotations make it possible to point out objects of interest within an image and add a verbal description to them. To create an annotation, the user simply looks at an object of interest in the image and speaks out the information connected to the object. The gaze location is recorded and inserted as a gaze marker, and the voice is transcribed using speech recognition. After an annotation has been created, it can be shared with another person. We performed a user study showing that users found gaze annotations to add precision and expressiveness compared to annotating the whole image.
Publication Details
  • DocEng 2015
  • Sep 8, 2015

Abstract

We present a novel system for detecting and capturing paper documents on a tabletop using a 4K video camera mounted overhead on pan-tilt servos. Our automated system first finds paper documents on a cluttered tabletop based on a text probability map, and then takes a sequence of high-resolution frames of the located document to reconstruct a high-quality, fronto-parallel image of the document page. The quality of the resulting images enables OCR processing of the whole page. We performed a preliminary evaluation on a small set of 10 document pages, on which our proposed system achieved 98% accuracy with the open source Tesseract OCR engine.
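
The paper's text probability map is its own component; as a loose stand-in, the sketch below uses edge density as a crude text-likelihood cue to locate a candidate page region with OpenCV. The function names, thresholds, and window sizes are assumptions for illustration only.

```python
# Simplified illustration of locating a text-dense page region in a tabletop
# image. Edge density stands in for the paper's text probability map;
# cv2 is the opencv-python package.
import cv2
import numpy as np

def text_probability_map(img_bgr, win=31):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Dense, fine edges are a crude proxy for printed text.
    prob = cv2.boxFilter(edges.astype(np.float32) / 255.0, -1, (win, win))
    return prob / max(prob.max(), 1e-6)

def locate_document(img_bgr, thresh=0.35):
    prob = text_probability_map(img_bgr)
    mask = (prob > thresh).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Bounding box of the largest text-like region: where to aim the camera.
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

if __name__ == "__main__":
    canvas = np.full((480, 640, 3), 120, np.uint8)   # synthetic tabletop
    cv2.putText(canvas, "lorem ipsum dolor", (200, 240),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (20, 20, 20), 1)
    print(locate_document(canvas, thresh=0.05))
```
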
Publication Details
  • DocEng 2015
  • Sep 8, 2015

Abstract

Web-based tools for remote collaboration are quickly becoming an established element of the modern workplace. During live meetings, people share web sites, edit presentation slides, and share code editors. It is common for participants to refer to previously spoken or shared content in the course of synchronous distributed collaboration. A simple approach is to index the shared video frames, or key-frames, with Optical Character Recognition (OCR) and let users retrieve them with text queries. Here we show that a complementary approach is to look at the actions users take inside the live document streams. Based on observations of real meetings, we focus on two important signals: text editing and mouse cursor motion. We describe the detection of text and cursor motion, their implementation in our WebRTC-based system, and how users are better able to search live documents during a meeting based on these detected and indexed actions.

Abstract

Location-enabled applications now permeate the mobile computing landscape. As technologies like Bluetooth Low Energy (BLE) and Apple's iBeacon protocols begin to see widespread adoption, we will no doubt see a proliferation of indoor location enabled application experiences. While not essential to each of these applications, many will require that the location of the device be true and verifiable. In this paper, we present LocAssure, a new framework for trusted indoor location estimation. The system leverages existing technologies like BLE and iBeacons, making the solution practical and compatible with technologies that are already in use today. In this work, we describe our system, situate it within a broad location assurance taxonomy, describe the protocols that enable trusted localization in our system, and provide an analysis of early deployment and use characteristics. Through developer APIs, LocAssure can provide critical security support for a broad range of indoor location applications.

Assistive Image Comment Robot - A Novel Mid-Level Concept-Based Representation

Publication Details
  • IEEE Transactions on Affective Computing
  • Aug 30, 2015

Abstract

We present a general framework and working system for predicting the likely affective responses of viewers in the social media environment after an image is posted online. Our approach emphasizes a mid-level concept representation, in which the intended affects of the image publisher are characterized by a large pool of visual concepts (termed PACs) detected directly from image content instead of textual metadata, evoked viewer affects are represented by concepts (termed VACs) mined from online comments, and statistical methods are used to model the correlations between these two types of concepts. We demonstrate the utility of this approach by developing an end-to-end Assistive Comment Robot application, which further includes components for multi-sentence comment generation, interactive interfaces, and relevance feedback functions. Through user studies, we showed that machine-suggested comments were accepted by users for online posting in 90% of completed user sessions, while very favorable results were also observed in various dimensions (plausibility, preference, and realism) when assessing the quality of the generated image comments.

Abstract

In this paper we report findings from two user studies that explore the problem of establishing common viewpoint in the context of a wearable telepresence system. In our first study, we assessed the ability of a local person (the guide) to identify the view orientation of the remote person by looking at the physical pose of the telepresence device. In the follow-up study, we explored visual feedback methods for communicating the relative viewpoints of the remote user and the guide via a head-mounted display. Our results show that actively observing the pose of the device is useful for viewpoint estimation. However, in the case of telepresence devices without physical directional affordances, a live video feed may yield comparable results. Lastly, more abstract visualizations lead to significantly longer recognition times, but may be necessary in more complex environments.
Publication Details
  • IEEE Pervasive Computing
  • Jul 1, 2015

Abstract

Tutorials are one of the most fundamental means of conveying knowledge. In this paper, we present a suite of applications that allow users to combine different types of media captured from handheld, standalone, or wearable devices to create multimedia tutorials. We conducted a study comparing standalone (camera on tripod) versus wearable capture (Google Glass). The results show that tutorial authors have a slight preference for wearable capture devices, especially when recording activities involving larger objects.

POLI: MOBILE AR BY HEARING POSITION FROM LIGHT

Publication Details
  • ICME 2015 Mobile Multimedia Workshop
  • Jun 29, 2015

Abstract

Connecting digital information to physical objects can enrich their content and make them more vivid. Traditional augmented reality techniques reach this goal by augmenting physical objects or their surroundings with various markers and typically require end users to wear additional devices to explore the augmented content. In this paper, we propose POLI, which allows a system administrator to author digital content with his/her mobile device while allowing end users to explore the authored content with their own mobile devices. POLI provides three novel interactive approaches for authoring digital content. It does not change the natural appearance of physical objects and does not require users to wear any additional hardware.

Abstract

As video-mediated communication reaches broad adoption, improving immersion and social interaction are important areas of focus in the design of tools for exploration and work-based communication. Here we present three threads of research focused on developing new ways of enabling exploration of a remote environment and interacting with the people and artifacts therein.
Publication Details
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Apr 18, 2015

Abstract

Edge targets, such as buttons or menus along the edge of a screen, are known to afford fast acquisition performance in desktop mousing environments. As the popularity of touch-based devices continues to grow, understanding the affordances of edge targets on touchscreens is needed. This paper describes results from two controlled experiments that examine in detail the effect of edge targets on performance on touch devices. Our results show that on touch devices, a target's proximity to the edge has a significant negative effect on reaction time. We examine the effect in detail and explore mitigating factors. We discuss potential explanations for the effect and propose implications for the design of efficient interfaces for touch devices.
Publication Details
  • CHI 2015 (Extended Abstracts)
  • Apr 18, 2015

Abstract

We present our ongoing research on automatic segmentation of motion gestures tracked by IMUs. We postulate that by recognizing gesture execution phases from motion data we may be able to auto-delimit user gesture entries. We demonstrate that machine learning classifiers can be trained to recognize three distinct phases of gesture entry: the start, middle, and end of a gesture motion. We further demonstrate that this type of classification can be done at the level of individual gestures. We also describe how we captured a new data set for data exploration and discuss a tool we developed for manual annotation of gesture phase information. Initial results obtained using the new data set annotated with our tool show a precision of 0.95 for recognition of the gesture phase and a precision of 0.93 for simultaneous recognition of the gesture phase and the gesture type.
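
A minimal sketch of this style of phase classification, assuming windowed accelerometer/gyroscope data labeled start/middle/end; the features and classifier below are generic stand-ins, not the paper's exact pipeline, and the data is random placeholder input.

```python
# Generic phase classification sketch for windowed IMU data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """window: (n_samples, 6) array of accel xyz + gyro xyz readings."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

# Placeholder data: in practice these would be labeled recordings.
rng = np.random.default_rng(0)
X_windows = [rng.normal(size=(20, 6)) for _ in range(300)]
y = rng.choice(["start", "middle", "end"], size=300)

X = np.stack([window_features(w) for w in X_windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))  # predicted gesture phases for the first windows
```
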
Publication Details
  • CHI 2015
  • Apr 18, 2015

Abstract

Websites can record individual users' activities and display them in a variety of ways. There is a tradeoff between detail and abstraction in visualization, especially when the amount of content increases and becomes more difficult to process. We conducted an experiment on Mechanical Turk varying the quality, detail, and visual presentation of information about an individual's past work to see how these design features affected perceptions of the worker. We found that providing detail in the display through text increased processing time and led to less positive evaluations. Visually abstract displays required less processing time but decreased confidence in evaluation. This suggests that different design parameters may engender differing psychological processes that influence reactions towards an unknown person.
Publication Details
  • CSCW 2015
  • Mar 14, 2015

Abstract

Collaboration Map (CoMap) is an interactive visualization tool showing temporal changes of small group collaborations. As dynamic entities, collaboration groups have flexible features such as people involved, areas of work, and timings. CoMap shows a graph of collaborations during user-adjustable periods, providing overviews of collaborations' dynamic features. We demonstrate CoMap with a co-authorship dataset extracted from DBLP to visualize 587 publications by 29 researchers at a research organization.

Abstract

In this paper, we report findings from a study that compared basic video-conferencing, emergent kinetic video-conferencing techniques, and face-to-face meetings. In our study, remote and co-located participants worked together in groups of three. We show, in agreement with prior literature, the strong adverse impact of being remote on participation levels. We also show that local and remote participants perceived their own and others' contributions differently. Extending prior work, we also show that local participants exhibited significantly more overlapping speech with remote participants who used an embodied proxy than with remote participants in basic video-conferencing (and at a rate similar to overlapping speech for co-located groups). We also describe differences in how the technologies were used to follow conversation. We discuss how these findings extend our understanding of the promise and potential limitations of embodied video-conferencing solutions.

Abstract

In a variety of peer production settings, from Wikipedia to open source software development to crowdsourcing, individuals may encounter, edit, or review the work of unknown others. Typically this is done without much context to the person's past behavior or performance. To understand how exposure to an unknown individual's activity history influences attitudes and behaviors, we conducted an online experiment on Mechanical Turk varying the content, quality, and presentation of information about another Turker's work history. Surprisingly, negative work history did not lead to negative outcomes, but in contrast, a positive work history led to positive initial impressions that persisted in the face of contrary information. This work provides insight into the impact of activity history design factors on psychological and behavioral outcomes that can be of use in other related settings.

Abstract

Our research focuses on improving the effectiveness and usability of driving mobile telepresence robots by increasing the user's sense of immersion during the navigation task. To this end we developed a robot platform that allows immersive navigation using head-tracked stereoscopic video and an HMD. We present the results of an initial user study that compares System Usability Scale (SUS) ratings of a robot teleoperation task using head-tracked stereo vision with a baseline fixed video feed, as well as the effect of a low or high placement of the camera(s). Our results show significantly higher ratings for the fixed video condition and no effect of camera placement. Future work will focus on examining the reasons for the lower ratings of stereo video and on exploring further visual navigation interfaces.
Publication Details
  • The Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15)
  • Jan 25, 2015

Abstract

A person's name is strongly influenced by his/her cultural background, such as gender and ethnicity, both vital attributes for user profiling, attribute-based retrieval, etc. Typically, the associations between names and attributes (e.g., people named "Amy" are mostly female) are annotated manually or provided by government census data. We propose to associate a name with its likely demographic attributes by exploiting click-throughs between name queries and images with automatically detected facial attributes. This is the first work attempting to translate an abstract name to demographic attributes in a visual-data-driven manner, and it adapts to incremental data, more countries, and even unseen names (names outside the click-through data) without additional manual labels. In our experiments, the automatic name-attribute associations help gender inference with accuracy competitive with manual labeling. They also benefit profiling social media users and keyword-based face image retrieval, contributing a 12% relative improvement in accuracy when adapting to unseen names.
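
The core association step can be pictured as attribute counting over click-through pairs. The toy sketch below is an illustration only; the real system works from automatically detected facial attributes over far larger click-through logs.

```python
# Toy illustration of name-to-attribute association from click-through data:
# each record pairs a name query with detected attributes of the clicked image.
from collections import defaultdict

clicks = [
    ("amy", {"gender": "female"}),
    ("amy", {"gender": "female"}),
    ("amy", {"gender": "male"}),
    ("hiroshi", {"gender": "male"}),
]

counts = defaultdict(lambda: defaultdict(int))
for name, attrs in clicks:
    counts[name][attrs["gender"]] += 1

def p_attribute(name, value):
    """Estimated probability that a given name carries a given attribute value."""
    total = sum(counts[name].values())
    return counts[name][value] / total if total else None

print(p_attribute("amy", "female"))  # 2/3 from the toy click-through data
```
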
2014

Synchronizing Web Documents with Style

Publication Details
  • ACM Brazilian Symposium on Multimedia and the Web
  • Nov 17, 2014

Abstract

In this paper we report on our efforts to define a set of document extensions to Cascading Style Sheets (CSS) that allow for structured timing and synchronization of elements within a Web page. Our work considers the scenario in which the temporal structure can be decoupled from the content of the Web page in a similar way that CSS does with the layout, colors and fonts. Based on the SMIL (Synchronized Multimedia Integration Language) temporal model we propose CSS document extensions and discuss the design and implementation of a proof of concept that realizes our contributions. As HTML5 seems to move away from technologies like Flash and XML (eXtensible Markup Language), we believe our approach provides a flexible declarative solution to specify rich media experiences that is more aligned with current Web practices.
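
To illustrate the underlying SMIL temporal model independently of CSS syntax, here is a small resolver that turns a tree of seq/par containers into absolute start and end times. The tree encoding and function names are hypothetical, not the proposed CSS extensions themselves.

```python
# Resolving a SMIL-like temporal tree into absolute start/end times.
def resolve(node, start=0.0, out=None):
    """node: ("seq"|"par", [children]) or ("media", name, duration_seconds)."""
    if out is None:
        out = {}
    if node[0] == "media":
        _, name, dur = node
        out[name] = (start, start + dur)
        return start + dur, out
    kind, children = node
    t, end = start, start
    for child in children:
        child_end, _ = resolve(child, t, out)
        end = max(end, child_end)
        if kind == "seq":
            t = child_end          # next child starts when this one ends
    return end, out                # "par" children all share the container start

tree = ("seq", [("media", "intro", 5.0),
                ("par", [("media", "video", 30.0),
                         ("media", "captions", 30.0)])])
print(resolve(tree)[1])  # intro: (0.0, 5.0); video and captions: (5.0, 35.0)
```
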
Publication Details
  • ACM International Workshop on Understanding and Modeling Multiparty, Multimodal Interactions (UMMMI)
  • Nov 15, 2014

Abstract

In this paper we discuss communication problems in video-mediated small group discussions. We present results from a study in which ad-hoc groups of five people, with a moderator, solved a quiz-style task (question followed by answer selection) over a video-conferencing system. The task was performed under different delay conditions of up to 2000 ms additional one-way delay. Even with a delay of up to 2000 ms, we could not observe any effect on the achieved quiz scores. In contrast, subjective satisfaction was severely negatively affected. While we would have expected a clear conversational breakdown with such a high delay, groups adapted their communication style and thus still managed to solve the task. That is, most groups decided to switch to a more explicit turn-taking scheme. We argue that future video-conferencing systems can provide a better experience if they are aware of the current conversational situation and can provide compensation mechanisms. We therefore provide an overview of which cues are relevant, how they are affected by the video-conferencing system, and how recent advances in computational social science can be leveraged. Further, we provide an analysis of the suitability of normal webcam data for such cue recognition. Based on our observations, we suggest strategies that can be implemented to alleviate the problems.
Publication Details
  • ACM International Workshop on Socially-aware Multimedia (SAM)
  • Nov 6, 2014

Abstract

As commercial, off-the-shelf services enable people to easily connect with friends and relatives, video-mediated communication is filtering into our daily activities. With the proliferation of broadband and powerful devices, multi-party gatherings are becoming a reality in home environments. With the technical infrastructure in place and accepted by a large user base, researchers and system designers are concentrating on understanding and optimizing the Quality of Experience (QoE) for participants. Theoretical foundations for QoE have identified three crucial factors shaping the individual's perception: system, context, and user. While most current research tends to focus on the system factors (delay, bandwidth, resolution), in this paper we offer a more complete analysis that takes context and user factors into consideration. In particular, we investigate the influence of delay (a constant system factor) on the QoE of multi-party conversations. Regarding context, we extend the typical one-to-one condition to explore conversations between small groups (up to five people). In terms of user factors, we draw on conversation analysis, turn-taking, and role theory to better understand the impact of different user profiles. Our investigation allows us to report a detailed analysis of how delay influences QoE, concluding that the actual interactivity pattern of each participant in the conversation results in different noticeability thresholds for delay. These results have a direct impact on how we should design and construct video-communication services for multi-party conversations, where user activity should be considered a prime adaptation and optimization parameter.

Multi-modal Language Models for Lecture Video Retrieval

Publication Details
  • ACM Multimedia 2014
  • Nov 2, 2014

Abstract

We propose Multi-modal Language Models (MLMs), which adapt latent variable models for text document analysis to modeling co-occurrence relationships in multi-modal data. In this paper, we focus on the application of MLMs to indexing slide and spoken text associated with lecture videos, and subsequently employ a multi-modal probabilistic ranking function for lecture video retrieval. The MLM achieves highly competitive results against well established retrieval methods such as the Vector Space Model and Probabilistic Latent Semantic Analysis. Retrieval performance with MLMs is also shown to improve with the quality of the available extracted spoken text.
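
The MLM itself is a latent-variable model; as a simpler illustration of multi-modal ranking over slide and spoken text, the sketch below interpolates per-modality query-likelihood scores with Dirichlet smoothing. The interpolation weight and smoothing parameter are assumptions, not the paper's ranking function.

```python
# Illustrative multi-modal ranking of lecture videos: interpolate smoothed
# language-model scores over slide text and (noisier) spoken text.
import math
from collections import Counter

def lm_score(query_terms, doc_terms, collection, mu=100.0):
    tf, dlen = Counter(doc_terms), len(doc_terms)
    cf, clen = Counter(collection), len(collection)
    score = 0.0
    for t in query_terms:
        # Dirichlet-smoothed term probability; add-one on the collection
        # model avoids log(0) for unseen terms.
        p = (tf[t] + mu * (cf[t] + 1) / (clen + 1)) / (dlen + mu)
        score += math.log(p)
    return score

def rank(query, videos, w_slide=0.7):
    collection = [t for v in videos for t in v["slides"] + v["speech"]]
    def s(v):
        return (w_slide * lm_score(query, v["slides"], collection)
                + (1 - w_slide) * lm_score(query, v["speech"], collection))
    return sorted(videos, key=s, reverse=True)

videos = [{"id": 1, "slides": "neural network training".split(),
           "speech": "so today we train".split()},
          {"id": 2, "slides": "graph theory basics".split(),
           "speech": "graphs and edges".split()}]
print([v["id"] for v in rank("neural training".split(), videos)])  # [1, 2]
```
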

Social Media-based Profiling of Store Locations

Publication Details
  • ACM Multimedia Workshop on Geotagging and Its Applications in Multimedia
  • Nov 2, 2014

Abstract

We present a method for profiling businesses at specific locations that is based on mining information from social media. The method matches geo-tagged tweets from Twitter against venues from Foursquare to identify the specific business mentioned in a tweet. By linking geo-coordinates to places, the tweets associated with a business, such as a store, can then be used to profile that business. We used a sentiment estimator developed for tweets to create sentiment profiles of the stores in a chain, computing the average sentiment of tweets associated with each store. We present the results as heatmaps which show how sentiment differs across stores in the same chain and how some chains have more positive sentiment than other chains. We also created profiles of social group size for businesses and show sample heatmaps illustrating how the size of a social group can vary.
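
A minimal sketch of the matching-and-profiling step: assign each geo-tagged tweet to the nearest venue within a radius, then average sentiment per store. The coordinates, radius, and sentiment scores below are placeholders, not Foursquare or Twitter API output.

```python
# Match geo-tagged tweets to nearby venues, then profile each store by
# the mean sentiment of its matched tweets.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dl/2)**2
    return 2 * r * math.asin(math.sqrt(a))

venues = [("store_a", 37.7749, -122.4194), ("store_b", 37.7793, -122.4192)]
tweets = [(37.7750, -122.4195, 0.8),    # (lat, lon, sentiment in [-1, 1])
          (37.7751, -122.4190, -0.2),
          (37.7794, -122.4191, 0.5)]

profile = {}
for lat, lon, sentiment in tweets:
    name, d = min(((v[0], haversine_m(lat, lon, v[1], v[2])) for v in venues),
                  key=lambda x: x[1])
    if d <= 100:                         # only credit tweets within ~100 m
        profile.setdefault(name, []).append(sentiment)

print({k: sum(v) / len(v) for k, v in profile.items()})  # mean sentiment per store
```
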

On Aesthetics and Emotions in Scene Images: A Computational Perspective.

Publication Details
  • Book: Scene Vision, MIT Press, (Editors Kestas Kveraga and Moshe Bar).
  • Nov 1, 2014

Abstract

In this chapter, we discuss the problem of computational inference of aesthetics and emotions from images. We draw inspiration from diverse disciplines such as philosophy, photography, art, and psychology to define and understand the key concepts of aesthetics and emotions. We introduce the primary computational problems that the research community has been striving to solve and the computational framework required for solving them. We also describe datasets available for performing assessment and outline several real-world applications where research in this domain can be employed. This chapter discusses the contributions of a significant number of research articles that have attempted to solve problems in aesthetics and emotion inference in the last several years. We conclude the chapter with directions for future research. Here’s a link to the book.
http://mitpress.mit.edu/books/scene-vision
Publication Details
  • UIST 2014
  • Oct 5, 2014

Abstract

Video Text Retouch is a technique for retouching the textual content found in many online videos such as screencasts, recorded presentations, and e-learning videos. Viewed through our special HTML5-based player, users can edit the textual content of video frames in real time, for example correcting typos or inserting new words between existing characters. Edits are overlaid and tracked at the desired position for as long as the original video content remains similar. We describe the interaction techniques and image processing algorithms, and give implementation details of the system.

Abstract

It is now possible to develop head-mounted devices (HMDs) that allow for ego-centric sensing of mid-air gestural input. Therefore, we explore the use of HMD-based gestural input techniques in smart space environments. We developed a usage scenario to evaluate HMD-based gestural interactions and conducted a user study to elicit qualitative feedback on several HMD-based gestural input techniques. Our results show that for the proposed scenario, mid-air hand gestures are preferred to head gestures for input and rated more favorably compared to non-gestural input techniques available on existing HMDs. Informed by these study results, we developed a prototype HMD system that supports gestural interactions as proposed in our scenario. We conducted a second user study to quantitatively evaluate our prototype comparing several gestural and non-gestural input techniques. The results of this study show no clear advantage or disadvantage of gestural inputs vs. non-gestural input techniques on HMDs. We did find that voice control as (sole) input modality performed worst compared to the other input techniques we evaluated. Lastly, we present two further applications implemented with our system, demonstrating 3D scene viewing and ambient light control. We conclude by briefly discussing the implications of ego-centric vs. exo-centric tracking for interaction in smart spaces.
Publication Details
  • MobileHCI 2014 (Industrial Case Study)
  • Sep 23, 2014

Abstract

Telepresence systems usually lack mobility. Polly, a wearable telepresence device, allows users to explore remote locations or experience events remotely by means of a person that serves as a mobile "guide". We built a series of hardware prototypes and our current, most promising embodiment consists of a smartphone mounted on a stabilized gimbal that is wearable. The gimbal enables remote control of the viewing angle as well as providing active image stabilization while the guide is walking. We present qualitative findings from a series of 8 field tests using either Polly or only a mobile phone. We found that guides felt more physical comfort when using Polly vs. a phone and that Polly was accepted by other persons at the remote location. Remote participants appreciated the stabilized video and ability to control camera view. Connection and bandwidth issues appear to be the most challenging issues for Polly-like systems.
Publication Details
  • MobileHCI 2014 (Full Paper)
  • Sep 23, 2014

Abstract

Secure authentication with devices or services that store sensitive and personal information is highly important. However, traditional password and PIN-based authentication methods compromise between the level of security and user experience. AirAuth is a biometric authentication technique that uses in-air gesture input to authenticate users. We evaluated our technique on a predefined (simple) gesture set, and our classifier achieved an average accuracy of 96.6% in an equal error rate (EER-)based study. We obtained an accuracy of 100% when exclusively using personal (complex) user gestures. In a further user study, we found that AirAuth is highly resilient to video-based shoulder surfing attacks, with a measured false acceptance rate of just 2.2%. Furthermore, a longitudinal study demonstrates AirAuth's repeatability and accuracy over time. AirAuth is relatively simple and robust, requires only a small amount of computational power, and is hence deployable on embedded or mobile hardware. Unlike traditional authentication methods, our system's security is positively aligned with user-rated pleasure and excitement levels. In addition, AirAuth attained acceptability ratings in personal, office, and public spaces comparable to an existing stroke-based on-screen authentication technique. Based on the results presented in this paper, we believe that AirAuth shows great promise as a novel, secure, ubiquitous, and highly usable authentication method.
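
AirAuth's matcher is not spelled out in this abstract; as one plausible shape of gesture verification, the sketch below compares an input trace against enrolled templates with dynamic time warping and an accept threshold. The features, threshold, and data are assumptions.

```python
# Template-based gesture verification sketch using dynamic time warping (DTW).
import numpy as np

def dtw(a, b):
    """a, b: (n, d) and (m, d) sequences of tracked points per frame."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)          # length-normalized warping distance

def verify(trace, templates, threshold=0.5):
    """Accept if the trace is close enough to any enrolled template."""
    return min(dtw(trace, t) for t in templates) <= threshold

rng = np.random.default_rng(1)
enrolled = [rng.normal(size=(40, 3)) for _ in range(3)]   # enrollment traces
print(verify(enrolled[0] + 0.01, enrolled))               # near-identical: True
```
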

Asymmetric Delay in Video-Mediated Group Discussions

Publication Details
  • International Workshop on Quality of Multimedia Experience (QoMEX)
  • Sep 18, 2014

Abstract

Delay has been found to be one of the most crucial factors determining the Quality of Experience (QoE) in synchronous video-mediated communication. The effect has been extensively studied for dyadic conversations, and recently small group communication has become a focus of the research community. Contrary to dyads, in which delay is perceived symmetrically, this is not the case for groups: due to the heterogeneous structure of the internet, asymmetric delays between participants are likely to occur.
Publication Details
  • DocEng 2014
  • Sep 16, 2014

Abstract

Distributed teams must coordinate a variety of tasks. To do so they need to be able to create, share, and annotate documents as well as discuss plans and goals. Many workflow tools support document sharing, and others support videoconferencing, but there is little support for connecting the two. In this work we describe a system that allows users to share and mark up content during web meetings. This shared content can provide important conversational props within the context of a meeting; it can also help users review archived meetings. Users can also extract shared content from meetings directly into other workflow tools.
Publication Details
  • Assistive Computer Vision and Robotics Workshop of ECCV
  • Sep 12, 2014

Abstract

Polly is an inexpensive, portable telepresence device based on the metaphor of a parrot riding a guide's shoulder and acting as a proxy for remote participants. Although remote users may be anyone with a desire for 'tele-visits', we focus on users with limited mobility. We present a series of prototypes and field tests that informed design iterations. Our current implementations utilize a smartphone on a stabilized, remotely controlled gimbal that can be hand-held, placed on perches, or carried on a wearable frame. We describe findings from trials at campus, museum, and faire tours with remote users, including quadriplegics. We found guides were more comfortable using Polly than a phone and that Polly was accepted by other people. Remote participants appreciated the stabilized video and having control of the camera. One challenge is the negotiation of movement and view control. Our tests suggest Polly is an effective alternative to telepresence robots, phones, or fixed cameras.

Abstract

In recent years, there has been an explosion of social and collaborative applications that leverage location to provide users with novel and engaging experiences. Current location technologies work well outdoors but fare poorly indoors. In this paper we present LoCo, a new framework that can provide highly accurate room-level location using a supervised classification scheme. We provide experiments that show this technique is orders of magnitude more efficient than current state-of-the-art Wi-Fi localization techniques. Low classification overhead and a small computational footprint make classification practical and efficient even on mobile devices. Our framework has also been designed to be easily deployed and leveraged by developers to help create a new wave of location-driven applications and services.
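
A minimal sketch of supervised room-level classification from signal-strength fingerprints, in the spirit of (but not identical to) LoCo's scheme; the access-point readings and room labels below are invented.

```python
# Room-level localization as supervised classification over RSSI fingerprints.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Rows: RSSI readings (dBm) from 4 access points; labels: room ids.
X_train = np.array([[-40, -70, -80, -90],
                    [-42, -68, -82, -88],
                    [-85, -45, -60, -75],
                    [-83, -47, -62, -77]])
y_train = ["kitchen", "kitchen", "lab", "lab"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[-41, -69, -81, -89]]))  # -> ['kitchen']
```
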
Publication Details
  • International Journal of Multimedia Information Retrieval Special Issue on Cross-Media Analysis
  • Sep 4, 2014

Abstract

Media Embedded Target, or MET, is an iconic mark printed in a blank margin of a page that indicates a media link is associated with a nearby region of the page. It guides the user to capture the region and thus retrieve the associated link through visual search within indexed content. The target also serves to separate page regions with media links from other regions of the page. The capture application on the cell phone displays a sight having the same shape as the target near the edge of a camera-view display. The user moves the phone to align the sight with the target printed on the page. Once the system detects correct sight-target alignment, the region in the camera view is captured and sent to the recognition engine which identifies the image and causes the associated media to be displayed on the phone. Since target and sight alignment defines a capture region, this approach saves storage by only indexing visual features in the predefined capture region, rather than indexing the entire page. Target-sight alignment assures that the indexed region is fully captured. We compare the use of MET for guiding capture with two standard methods: one that uses a logo to indicate that media content is available and text to define the capture region and another that explicitly indicates the capture region using a visible boundary mark.
Publication Details
  • SPIE optics + photonics (SPIE)
  • Aug 17, 2014

Abstract

Live 3D reconstruction of a human as a 3D mesh with commodity electronics is becoming a reality. Immersive applications (e.g., cloud gaming, tele-presence) benefit from effective transmission of such content over a bandwidth-limited link. In this paper we outline different approaches for compressing live reconstructed mesh geometry based on distributing mesh reconstruction functions between sender and receiver. We evaluate the rate, performance, and complexity of different configurations. First, we investigate 3D mesh compression methods (dynamic and static) from MPEG-4. Second, we evaluate the option of using octree-based point cloud compression and receiver-side surface reconstruction.
Publication Details
  • ICME 2014, Best Demo Award
  • Jul 14, 2014

Abstract

In this paper, we describe Gesture Viewport, a projector-camera system that enables finger gesture interactions with media content on any surface. We propose a novel and computationally very efficient finger localization method based on the detection of occlusion patterns inside a virtual sensor grid rendered in a layer on top of a viewport widget. We develop several robust interaction techniques to prevent unintentional gestures to occur, to provide visual feedback to a user, and to minimize the interference of the sensor grid with the media content. We show the effectiveness of the system through three scenarios: viewing photos, navigating Google Maps, and controlling Google Street View.
Publication Details
  • ACM SIGIR International Workshop on Social Media Retrieval and Analysis
  • Jul 11, 2014

Abstract

We examine the use of clustering to identify selfies in a social media user's photos for use in estimating demographic information such as age, gender, and race. Faces are first detected within a user's photos followed by clustering using visual similarity. We define a cluster scoring scheme that uses a combination of within-cluster visual similarity and average face size in a cluster to rank potential selfie-clusters. Finally, we evaluate this ranking approach over a collection of Twitter users and discuss methods that can be used for improving performance in the future.
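
The scoring scheme combines within-cluster similarity with average face size; the sketch below shows one way such a score might look, with the weighting and normalization as assumptions rather than the paper's tuned scheme.

```python
# Rank face clusters by coherence (within-cluster similarity) and face size.
import numpy as np

def cluster_score(embeddings, face_areas, alpha=0.6):
    """embeddings: (n, d) L2-normalized face features; face_areas: (n,) pixels."""
    emb = np.asarray(embeddings)
    sim = emb @ emb.T                                   # cosine similarity matrix
    n = len(emb)
    coherence = (sim.sum() - n) / max(n * (n - 1), 1)   # mean off-diagonal similarity
    size = np.mean(face_areas) / 200_000.0              # crude size normalization
    return alpha * coherence + (1 - alpha) * min(size, 1.0)

rng = np.random.default_rng(2)
base = rng.normal(size=8)
faces = base + 0.05 * rng.normal(size=(5, 8))           # tight synthetic cluster
faces /= np.linalg.norm(faces, axis=1, keepdims=True)
print(cluster_score(faces, face_areas=[90_000] * 5))    # large, similar faces score high
```
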
Publication Details
  • SIGIR 2014
  • Jul 6, 2014
  • pp. 495-504

Abstract

People often use more than one query when searching for information. They revisit search results to re-find information and build an understanding of their search need through iterative explorations of query formulation. These tasks are not well-supported by search interfaces and web browsers. We designed and built SearchPanel, a Chrome browser extension that helps people manage their ongoing information seeking. This extension combines document and process metadata into an interactive representation of the retrieved documents that can be used for sense-making, navigation, and re-finding documents. In a real-world deployment spanning over two months, results show that SearchPanel appears to have been primarily used for complex information needs, in search sessions with long durations and high numbers of queries. The process metadata features in SearchPanel seem to be of particular importance when working on complex information needs.

Supporting media bricoleurs

Publication Details
  • ACM interactions
  • Jul 1, 2014

Abstract

Online video is incredibly rich. A 15-minute home improvement YouTube tutorial might include 1500 words of narration, 100 or more significant keyframes showing a visual change from multiple perspectives, several animated objects, references to other examples, a tool list, comments from viewers, and a host of other metadata. Furthermore, video accounts for 90% of worldwide Internet traffic. However, it is our observation that video is not widely seen as a full-fledged document; it is dismissed as a medium that, at worst, gilds over substance and, at best, simply augments text-based communications. In this piece, we suggest that negative attitudes toward multimedia documents that include audio and video are largely unfounded and arise mostly because we lack the necessary tools to treat video content as a first-order medium or to support seamlessly mixing media.
Publication Details
  • ACM TVX 2014
  • Jun 25, 2014

Abstract

Creating compelling multimedia content is a difficult task. It involves not only the creative process of developing a compelling media-based story, but also significant technical support for content editing, management, and distribution. This has been true for printed, audio, and visual presentations for centuries. It is certainly true for broadcast media such as radio and television. The talk will survey several approaches to describing and managing media interactions. We will focus on the temporal modeling of context-sensitive, personalized interactions of complex collections of independent media objects. Using the concepts of 'togetherness' employed in the EU's FP-7 project TA2: Together Anywhere, Together Anytime, we will follow the process of media capture, profiling, composition, sharing, and end-user manipulation. We will consider the promise of using automated tools and contrast this with the reality of letting real users manipulate presentation semantics in real time. The talk will not present a closed-form solution, but a series of topics and problems that can stimulate the development of a new generation of systems for social media interaction.
Publication Details
  • IEEE Transactions on Multimedia
  • Jun 18, 2014

Abstract

3D tele-immersion enables participants in remote locations to share an activity in real time. It offers users interactive and immersive experiences, but it challenges current media streaming solutions. Past work has mainly focused on the efficient delivery of image-based 3D videos and on realistic rendering and reconstruction of geometry-based 3D objects. The contribution of this paper is a real-time streaming component for 3D tele-immersion with dynamically reconstructed geometry. This component includes both a novel fast compression method and a rateless packet protection scheme specifically designed for the requirements imposed by real-time transmission of live-reconstructed mesh geometry. Tests on a large dataset show an encoding speed-up of up to 10 times at comparable compression ratio and quality, compared to the high-end MPEG-4 SC3DMC mesh encoders. The implemented rateless code ensures complete packet loss protection of the triangle mesh object and a delivery delay within interactive bounds. Contrary to most linear fountain codes, the designed codec enables real-time progressive decoding, allowing partial decoding each time a packet is received. This approach is compared to transmission over TCP at packet loss rates and latencies typical of managed WAN and MAN networks, and heavily outperforms it in terms of end-to-end delay. The streaming component has been integrated into a larger 3D tele-immersive environment that includes state-of-the-art 3D reconstruction and rendering modules. This resulted in a prototype that can capture, compress, transmit, and render triangle mesh geometry in real time under realistic internet conditions, as shown in experiments. Compared to alternative methods, lower interactive end-to-end delay and frame rates over 3 times higher are achieved.
Publication Details
  • ICWSM (The 8th International AAAI Conference on Weblogs and Social Media)
  • Jun 1, 2014

Abstract

A topic-independent sentiment model is commonly used to estimate sentiment in microblogs. But for movie and product reviews, domain adaptation has been shown to improve sentiment estimation performance. We investigated the utility of topic-dependent polarity estimation models for microblogs. We examined both a model trained on Twitter tweets containing a target keyword and a model trained on an enlarged set of tweets containing terms related to a topic. Comparing the performance of the topic-dependent models to a topic-independent model trained on a general sample of tweets, we noted that for some topics, topic-dependent models performed better. We then propose a method for predicting which topics are likely to have better sentiment estimation performance when a topic-dependent sentiment model is used.
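
The contrast between the two model types can be sketched as the same pipeline trained on different samples: a general tweet sample vs. one enlarged with topic-matched tweets. The toy data and classifier below are illustrative, not the paper's models.

```python
# Topic-independent vs. topic-dependent polarity models: same pipeline,
# different training sets (labels: 1 = positive, 0 = negative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

general = [("great day", 1), ("awful traffic", 0),
           ("love this", 1), ("hate that", 0)]
topic   = [("battery life is great", 1), ("screen cracked, awful", 0),
           ("love the camera", 1), ("hate the lag", 0)]

def train(pairs):
    texts, labels = zip(*pairs)
    return make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

topic_independent = train(general)
topic_dependent = train(general + topic)   # enlarged with topic-matched tweets

tweet = ["battery is great but lag is awful"]
print(topic_independent.predict(tweet), topic_dependent.predict(tweet))
```
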
Publication Details
  • IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
  • May 3, 2014

Abstract

Geometry-based 3D tele-immersion is an emerging media application that involves mesh geometry reconstructed on the fly. To enable real-time communication of such live reconstructed mesh geometry over a bandwidth-limited link, fast dynamic geometry compression is needed. However, most tools and methods have been developed for compressing synthetically generated graphics content. These methods achieve good compression rates by exploiting topological and geometric properties that typically do not hold for reconstructed mesh geometry. Live reconstructed dynamic geometry is causal and often non-manifold, open, non-oriented, and time-inconsistent. Based on our experience developing a prototype for 3D tele-immersion based on live reconstructed geometry, we discuss currently available tools. We then present our approach for dynamic compression that better exploits the fact that the 3D geometry is reconstructed, achieving state-of-the-art rate-distortion performance under stringent real-time constraints.
http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6854788&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6854788
Publication Details
  • CHI 2014 (Interactivity)
  • Apr 26, 2014

Abstract

AirAuth is a biometric authentication technique that uses in-air hand gestures to authenticate users tracked through a short-range depth sensor. Our method tracks multiple distinct points on the user's hand simultaneously that act as a biometric to further enhance security. We describe the details of our mobile demonstrator that will give Interactivity attendees an opportunity to enroll and verify our system's authentication method. We also wish to encourage users to design their own gestures for use with the system. Apart from engaging with the CHI community, a demonstration of AirAuth would also yield useful gesture data input by the attendees which we intend to use to further improve the prototype and, more importantly, make available publicly as a resource for further research into gesture-based user interfaces.
Publication Details
  • CHI Extended Abstracts 2014
  • Apr 26, 2014

Abstract

AirAuth is a biometric, gesture-based authentication system based on in-air gesture input. We describe the operations necessary to sample enrollment gestures and to perform matching for authentication, using data from a short-range depth sensor. We present the results of two initial user studies. A first study was conducted to crowd-source a simple gesture set for use in further evaluations. The results of our second study indicate that AirAuth achieves a very high equal error rate (EER-)based accuracy of 96.6% for the simple gesture set and 100% for user-specific gestures. Future work will encompass the evaluation of possible attack scenarios and obtaining qualitative user feedback on the usability advantages of gesture-based authentication.
Publication Details
  • ACM ICMR 2014
  • Apr 1, 2014

Abstract

Motivated by scalable partial-duplicate visual search, there has been growing interest in compact and efficient binary feature descriptors (e.g., ORB, FREAK, BRISK). Typically, binary descriptors are clustered into codewords and quantized with Hamming distance, following the conventional bag-of-words strategy. However, such codewords formulated in Hamming space have not shown obvious indexing and search performance improvements compared to their Euclidean counterparts. In this paper, without explicit codeword construction, we explore utilizing binary descriptors directly as codebook indices (addresses). We propose a novel approach that builds multiple index tables which check collisions of the same hash values in parallel. The evaluation is performed on two public image datasets: DupImage and Holidays. The experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
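
A minimal sketch of the indexing idea: split each binary descriptor into disjoint bit substrings and use each substring as the address in its own table, so that near-duplicate descriptors collide in at least one table. The sizes below are toy values (real descriptors such as ORB are 256 bits), and the exact table construction is an assumption.

```python
# Multiple index tables addressed directly by binary-descriptor substrings.
from collections import defaultdict

BITS, TABLES = 32, 4             # toy sizes for illustration
CHUNK = BITS // TABLES

def substrings(desc):
    """Split a BITS-bit descriptor (as int) into TABLES disjoint chunks."""
    return [(desc >> (i * CHUNK)) & ((1 << CHUNK) - 1) for i in range(TABLES)]

tables = [defaultdict(list) for _ in range(TABLES)]

def index(desc, image_id):
    for t, key in zip(tables, substrings(desc)):
        t[key].append(image_id)

def query(desc):
    hits = set()
    for t, key in zip(tables, substrings(desc)):   # check each table's collisions
        hits.update(t[key])
    return hits

index(0b1011_0110_1100_0011_0101_1010_1111_0000, "img1")
print(query(0b1011_0110_1100_0011_0101_1010_1111_0001))  # 1-bit flip still collides
```
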

The Optimiser: monitoring and improving switching delays in video conferencing

Publication Details
  • ACM Workshop on Mobile Video (ACM MoVid)
  • Mar 18, 2014

Abstract

With the growing popularity of video communication systems, more people are using group video chat rather than only one-to-one video calls. In such multi-party sessions, remote participants compete for the available screen space and bandwidth. A common solution is showing the current speaker prominently. Bandwidth limitations may not allow all streams to be sent at a high resolution at all times, especially with many participants in a call. This can be mitigated by only switching on higher resolutions when they are required. This switching encounters delays due to latency and the properties of encoded video streams. In this paper, we analyse and improve the switching delay of our video conferencing system. Our server-centric system offers a next-generation video chat solution, providing end-to-end video encoding. To evaluate our system we use a testbed that allows us to emulate different network conditions. We measure the video switching delay between three clients, each connected via a different network profile. Our results show that missing Intra-Frames in the transmission have a strong influence on the switching delay. Based on this, we provide an optimization mechanism that improves those delays by resending Intra-Frames.
http://dl.acm.org/citation.cfm?id=2579472
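
To see why missing Intra-Frames dominate switching delay, consider a toy model: without a resent I-frame, a client can only begin decoding a newly selected stream at the next I-frame boundary. The GOP size and timings below are invented for illustration.

```python
# Toy model of stream-switching delay with and without I-frame resending.
GOP = 30          # frames between I-frames (illustrative group-of-pictures size)
FPS = 30.0

def switch_delay(switch_frame, resend_iframe):
    if resend_iframe:
        return 0.0                        # server replays the last I-frame at once
    frames_to_next_i = (-switch_frame) % GOP
    return frames_to_next_i / FPS         # wait for the next I-frame in the stream

for f in (1, 15, 29):
    print(f, switch_delay(f, False), switch_delay(f, True))
```
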

Multimedia Authoring and Annotation

Publication Details
  • International Journal on Multimedia Tools and Applications
  • Feb 28, 2014

Abstract

With the massive amount of captured multimedia, authoring is more relevant than ever. Multimedia content is available in many settings including the web, mobile devices, desktop applications, as well as games and interactive TV. The authoring and production of multimedia documents demands attention to many issues related to the structure and to the synchronization of the media components, to the specification of the document and of the interaction, to the roles of authors and end users, as well as issues concerning reuse and digital rights management. Several complementary approaches to support the authoring of multimedia documents have been reported in the literature, and in many cases they have been studied via authoring tools and applications. One aim of this special issue is to assess current approaches, tools and applications, discussing how they tackle the main issues relative to the process of authoring, as well as their limitations.
Publication Details
  • HotMobile 2014
  • Feb 26, 2014

Abstract

In this paper, we propose HiFi, a system that enables users to interact with surrounding physical objects. It uses coded light to encode position in an environment. By attaching a tiny light sensor to a user's mobile device, the user can attach digital information to arbitrary static physical objects or retrieve and modify the information anchored to these objects. With this system, a family member may attach a digital maintenance schedule to a fish tank or indoor plants. In a store, a manager may attach price tags, discount info, and multimedia content to any product, and customers can get the attached info by moving their phone close to the product of interest. Similarly, a museum can use this system to provide extra information about displayed items to visitors. Unlike computer vision based systems, HiFi does not impose requirements on texture, bright illumination, etc. Unlike regular barcode approaches, HiFi does not require extra physical attachments that would change an object's native appearance. HiFi has much higher spatial resolution for distinguishing nearby objects or attached parts of the same object. As HiFi can track a mobile device at 80 positions per second, it also responds much faster than any of the systems listed above.
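
As a hypothetical illustration of position-from-coded-light, the sketch below decodes a Gray-coded column index from a sequence of light-sensor bits, one bit per projected code frame. HiFi's actual coding scheme is not described in this abstract, so the encoding here is an assumption.

```python
# Decode a position index from temporally Gray-coded light samples.
def gray_to_binary(g):
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def decode_position(bit_samples):
    """bit_samples: light-sensor readings (0/1), MSB first, one per code frame."""
    g = 0
    for bit in bit_samples:
        g = (g << 1) | bit
    return gray_to_binary(g)

# A sensor that saw the frame sequence 1,1,0,1,0 sits in projected column 19.
print(decode_position([1, 1, 0, 1, 0]))
```
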