Publications


2019

Abstract

We present a remote assistance system that enables a remotely located expert to provide guidance using hand gestures to a customer who performs a physical task in a different location. The system is built on top of a web-based real-time media communication framework, which allows the customer to use a commodity smartphone to send a live video feed to the expert; the expert can see the customer's workspace and show his/her hand gestures over the video in real time. The expert's hand gestures are captured with a hand tracking device and visualized with a rigged 3D hand model on the live video feed. The system can be accessed via a web browser and does not require any app software to be installed on the customer's device. Our system supports various types of devices, including smartphones, tablets, desktop PCs, and smart glasses. To improve the collaboration experience, the system provides a novel gravity-aware hand visualization technique.
Publication Details
  • ACM ISS 2019
  • Nov 9, 2019

Abstract

In a telepresence scenario with remote users discussing a document, it can be difficult to follow which parts are being discussed. One way to address this is by showing the user's hand position on the document, which also enables expressive gestural communication. An important practical problem is how to capture and transmit the hand movements efficiently with high resolution document images. We propose a tabletop system with two channels that integrates document capture with a 4K video camera and hand tracking with a webcam, in which the document image and hand skeleton data are transmitted at different rates and handled by a lightweight Web browser client at remote sites. To enhance the rendering, we employ velocity based smoothing and ephemeral motion traces. We tested our prototype over long distances from USA to Japan and to Italy, and report on latency and jitter performance. Our system achieves relatively low latency over a long distance in comparison with a tele-immersive system that transmits mesh data over much shorter distances.
Publication Details
  • International Conference on the Internet of Things (IoT 2019)
  • Oct 22, 2019
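The paper's velocity-based smoothing of hand skeleton data is not specified in the abstract; the sketch below shows one common variant (in the spirit of the 1€ filter) in which the smoothing factor adapts to the estimated velocity, so slow motion is smoothed heavily while fast motion stays responsive. The function name and constants are illustrative assumptions, not the authors' implementation.

```python
def smooth_points(points, dt=1 / 30, min_alpha=0.2, max_alpha=0.9, v_scale=100.0):
    """Velocity-adaptive exponential smoothing for a 1-D coordinate stream.

    Slow motion -> small alpha (heavy smoothing, less jitter);
    fast motion -> large alpha (light smoothing, less lag).
    """
    smoothed = [points[0]]
    for p in points[1:]:
        v = abs(p - smoothed[-1]) / dt  # estimated velocity
        alpha = min_alpha + (max_alpha - min_alpha) * min(1.0, v / v_scale)
        smoothed.append(alpha * p + (1 - alpha) * smoothed[-1])
    return smoothed
```

Each hand-joint coordinate received over the network would be filtered independently this way before rendering.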

Abstract

A motivating, core capability of most smart, Internet of Things enabled spaces (e.g., home, office, hospital, factory) is the ability to leverage context of use. Location is a key context element, particularly indoor location. Recent advances in radio ranging technologies, such as 802.11-2016 FTM, promise the availability of low-cost, near-ubiquitous time-of-flight-based ranging estimates. In this paper, we build on prior work to enhance the technology's ability to provide useful location estimates. We demonstrate meaningful improvements in coordinate-based estimation accuracy and substantial increases in room-level estimation accuracy. Furthermore, insights gained in our real-world deployment provide important implications for future Internet of Things context applications and their supporting technology deployments, such as workflow management, inventory control, or healthcare information tools.
Publication Details
  • ACM MM
  • Oct 21, 2019
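As background on how time-of-flight ranging estimates turn into coordinate estimates, here is a minimal linearized least-squares multilateration sketch. This is generic textbook math, not the paper's enhanced estimator; anchor positions and ranges are assumed given.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position estimate from anchor coordinates and
    time-of-flight range estimates (linearized multilateration)."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    x0, r0 = anchors[0], ranges[0]
    # Subtract the first equation to linearize ||p - x_i||^2 = r_i^2.
    A = 2 * (anchors[1:] - x0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With noisy FTM ranges the least-squares solution averages out part of the ranging error, which is why more than the minimum number of anchors helps.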

Abstract

Despite work on smart spaces, nowadays much knowledge work happens in the wild: at home, in coffee shops, on trains, buses, and planes, and of course in crowded open-office cubicles. Conducting web conferences in these settings creates privacy issues and can also distract participants, leading to a perceived lack of professionalism from the remote peer(s). To solve this common problem, we implemented CamaLeon, a browser-based tool that uses real-time machine vision powered by deep learning to change the webcam stream sent by the remote peer: specifically, CamaLeon dynamically changes the "wild" background into one that resembles that of the office workers. To detect the background in wild settings, we designed and trained a fast UNet model on head-and-shoulder images. CamaLeon also uses a face detector to determine whether it should stream the person's face, depending on its location (or lack of presence), and uses face recognition to ensure it streams only a face that belongs to the user who connected to the meeting. The system was tested during a few real video conferencing calls at our company in which two workers were remote. Both parties felt a sense of enhanced co-presence, and the remote participants felt more professional with their background replaced.
Publication Details
  • ACM MM
  • Oct 21, 2019

Abstract

Requests for information from an application, a remote person, or an organization that involve documenting the presence and/or state of physical objects can lead to incomplete or inaccurate documentation. We propose a system that couples information requests with a live object recognition tool to semi-automatically catalog requested items and collect evidence of their current state.
Publication Details
  • ACM MM
  • Oct 20, 2019

Abstract

Multimedia research has now moved beyond laboratory experiments and is rapidly being deployed in real-life applications, including advertisements, social interaction, search, security, automated driving, and healthcare. Hence, the developed algorithms now have a direct impact on the individuals using these services and on society as a whole. While there is huge potential to benefit society with such technologies, there is also an urgent need to identify the checks and balances that ensure their impact is ethical and positive. This panel will bring together an array of experts who have experience collecting large-scale datasets, building multimedia algorithms, and deploying them in practical applications, as well as a lawyer whose eyes have been on the fundamental rights at stake. They will lead a discussion on the ethics and lawfulness of dataset creation, licensing, privacy of individuals represented in the datasets, algorithmic transparency, algorithmic bias, explainability, and the implications of application deployment. Through an interactive process engaging the audience, the panel hopes to increase awareness of these concepts in the multimedia research community and to initiate a discussion on community guidelines, all toward setting the future direction of conducting multimedia research in a lawful and ethical manner.
Publication Details
  • VDS'19
  • Oct 20, 2019

Abstract

Computational notebooks have become a major medium for data exploration and insight communication in data science. Although expressive, dynamic, and flexible, in practice they are loose collections of scripts, charts, and tables that rarely tell a story or clearly represent the analysis process. This leads to a number of usability issues, particularly in the comprehension and exploration of notebooks. In this work, we design, implement, and evaluate Albireo, a visualization approach to summarize the structure of notebooks, with the goal of supporting more effective exploration and communication by displaying the dependencies and relationships between the cells of a notebook using a dynamic graph structure. We evaluate the system via a case study and expert interviews, with our results indicating that such a visualization is useful for an analyst’s self-reflection during exploratory programming, and also effective for communication of narratives and collaboration between analysts.

Interactive Bicluster Aggregation in Bipartite Graphs

Publication Details
  • IEEE VIS 2019
  • Oct 20, 2019

Abstract

Exploring coordinated relationships is important for sensemaking of data in various fields, such as intelligence analysis. To support such investigations, visual analysis tools use biclustering to mine relationships in bipartite graphs and visualize the resulting biclusters with standard graph visualization techniques. Due to overlaps among biclusters, such visualizations can be cluttered (e.g., with many edge crossings), when there are a large number of biclusters. Prior work attempted to resolve this problem by automatically ordering nodes in a bipartite graph. However, visual clutter is still a serious problem, since the number of displayed biclusters remains unchanged. We propose bicluster aggregation as an alternative approach, and have developed two methods of interactively merging biclusters. These interactive bicluster aggregations help organize similar biclusters and reduce the number of displayed biclusters. Initial expert feedback indicates potential usefulness of these techniques in practice.
Publication Details
  • IEEE InfoVis 2019
  • Oct 20, 2019
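To illustrate what aggregating biclusters might look like, here is a hedged sketch that greedily unions biclusters (row set, column set pairs) whose rows and columns are both similar under Jaccard similarity. The paper's two interactive merging methods are not detailed in the abstract, so the threshold rule here is our assumption.

```python
def jaccard(a, b):
    """Jaccard similarity of two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def merge_biclusters(b1, b2):
    """Aggregate two biclusters (rows, cols) into one by set union."""
    return (b1[0] | b2[0], b1[1] | b2[1])

def aggregate(biclusters, threshold=0.5):
    """Greedily merge bicluster pairs whose row sets and column sets
    are both similar enough, reducing the number displayed."""
    result = list(biclusters)
    merged = True
    while merged:
        merged = False
        for i in range(len(result)):
            for j in range(i + 1, len(result)):
                if (jaccard(result[i][0], result[j][0]) >= threshold and
                        jaccard(result[i][1], result[j][1]) >= threshold):
                    result[i] = merge_biclusters(result[i], result[j])
                    del result[j]
                    merged = True
                    break
            if merged:
                break
    return result
```

Reducing ten overlapping biclusters to a handful of aggregates directly cuts the edge crossings the abstract describes.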

Abstract

Think-aloud protocols are widely used by user experience (UX) practitioners in usability testing to uncover issues in user interface design. It is often arduous to analyze large amounts of recorded think-aloud sessions and few UX practitioners have an opportunity to get a second perspective during their analysis due to time and resource constraints. Inspired by the recent research that shows subtle verbalization and speech patterns tend to occur when users encounter usability problems, we take the first step to design and evaluate an intelligent visual analytics tool that leverages such patterns to identify usability problem encounters and present them to UX practitioners to assist their analysis. We first conducted and recorded think-aloud sessions, and then extracted textual and acoustic features from the recordings and trained machine learning (ML) models to detect problem encounters. Next, we iteratively designed and developed a visual analytics tool, VisTA, which enables dynamic investigation of think-aloud sessions with a timeline visualization of ML predictions and input features. We conducted a between-subjects laboratory study to compare three conditions, i.e., VisTA, VisTASimple (no visualization of the ML’s input features), and Baseline (no ML information at all), with 30 UX professionals. The findings show that UX professionals identified more problem encounters when using VisTA than Baseline by leveraging the problem visualization as an overview, anticipations, and anchors as well as the feature visualization as a means to understand what ML considers and omits. Our findings also provide insights into how they treated ML, dealt with (dis)agreement with ML, and reviewed the videos (i.e., play, pause, and rewind).
Publication Details
  • IEEE VIS 2019
  • Oct 20, 2019

Abstract

The analysis of bipartite networks is critical in a variety of application domains, such as exploring entity co-occurrences in intelligence analysis and investigating gene expression in bio-informatics. One important task is missing link prediction, which infers the existence of unseen links based on currently observed ones. In this paper, we propose MissBiN that involves analysts in the loop for making sense of link prediction results. MissBiN combines a novel method for link prediction and an interactive visualization for examining and understanding the algorithm outputs. Further, we conducted quantitative experiments to assess the performance of the proposed link prediction algorithm, and a case study to evaluate the overall effectiveness of MissBiN.

Abstract

Localization in an indoor and/or Global Positioning System (GPS)-denied environment is paramount to driving various applications that require locating humans and/or robots in an unknown environment. Various localization systems using different ubiquitous sensors, such as cameras, radio frequency, and inertial measurement units, have been developed. Most of these systems cannot accommodate scenarios with substantial changes in the environment, such as a large number of people (unpredictable) or a sudden change in the floor plan (unstructured). In this paper, we propose a system, InFo, that leverages real-time visual information captured by surveillance cameras and augments it with images captured by the smart device user to deliver accurate discretized location information. Through our experiments, we demonstrate that our deep learning based InFo system provides an improvement of 10% compared to a system that does not utilize this real-time information.
Publication Details
  • British Machine Vision Conference (BMVC 2019)
  • Sep 1, 2019

Abstract

Automatic medical report generation from chest X-ray images is one possibility for assisting doctors to reduce their workload. However, the different patterns and data distribution of normal and abnormal cases can bias machine learning models. Previous attempts did not focus on isolating the generation of the abnormal and normal sentences in order to increase the variability of generated paragraphs. To address this, we propose to separate abnormal and normal sentence generation by using a dual word LSTM in a hierarchical LSTM model. In addition, we conduct an analysis on the distinctiveness of generated sentences compared to the BLEU score, which increases when less distinct reports are generated. Together with this analysis, we propose a way of selecting a model that generates more distinctive sentences. We hope our findings will help to encourage the development of new metrics to better verify methods of automatic medical report generation.
Publication Details
  • To appear in Natural Language Engineering
  • Aug 16, 2019

Abstract

Twitter and other social media platforms are often used for sharing interest in products. The identification of purchase decision stages, such as in the AIDA model (Awareness, Interest, Desire, Action), can enable more personalized e-commerce services and a finer-grained targeting of ads than predicting purchase intent only. In this paper, we propose and analyze neural models for identifying the purchase stage of single tweets in a user's tweet sequence. In particular, we identify three challenges of purchase stage identification: imbalanced label distribution with a high number of negative instances, a limited amount of training data, and domain adaptation with no or only little target domain data. Our experiments reveal that the imbalanced label distribution is the main challenge for our models. We address it with ranking loss and perform detailed investigations of the performance of our models on the different output classes. To improve the generalization of the models and augment the limited amount of training data, we examine the use of sentiment analysis as a complementary, secondary task in a multitask framework. For applying our models to tweets from another product domain, we consider two scenarios: for the first scenario, without any labeled data in the target product domain, we show that learning domain-invariant representations with adversarial training is most promising, while for the second scenario, with a small number of labeled target examples, fine-tuning the source model weights performs best. Finally, we conduct several analyses, including extracting attention weights and representative phrases for the different purchase stages. The results suggest that the model is learning features indicative of purchase stages and that the confusion errors are sensible.
Publication Details
  • The 17th IEEE International Conference on Embedded and Ubiquitous Computing (IEEE EUC 2019)
  • Aug 2, 2019
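The abstract credits a ranking loss with handling the imbalanced label distribution. A minimal pairwise margin ranking loss, sketched below in plain Python, shows the idea: because the loss is computed over positive/negative pairs, the many negative instances cannot simply drown out the few positives the way they can with a per-instance loss. The paper's actual neural loss is not reproduced here.

```python
def margin_ranking_loss(pos_scores, neg_scores, margin=1.0):
    """Pairwise margin ranking loss: every positive instance should
    score at least `margin` higher than every negative instance.
    A pair contributes zero loss once the margin is satisfied."""
    losses = [max(0.0, margin - p + n)
              for p in pos_scores for n in neg_scores]
    return sum(losses) / len(losses) if losses else 0.0
```

In training, `pos_scores` and `neg_scores` would be model outputs for tweets of a given purchase stage versus the negative class, and the loss would be minimized by gradient descent.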

Abstract

Human activity forecasting from videos in routine-based tasks is an open research problem that has numerous applications in robotics, visual monitoring, and skill assessment. Currently, many challenges exist in activity forecasting because human actions are not fully observable from continuous recording. Additionally, a large number of human activities involve fine-grained articulated human motions that are hard to capture using frame-level representations. To overcome these challenges, we propose a method that forecasts human actions by learning the dynamics of local motion patterns extracted from dense trajectories using long short-term memory (LSTM). Experiments on a public dataset validated the effectiveness of our proposed method in activity forecasting and demonstrated large improvements over the baseline two-stream end-to-end model. We also learned that human activity forecasting benefits from learning both short-range motion patterns and long-term dependencies between actions.
Publication Details
  • 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)
  • Jul 28, 2019

Abstract

A common issue in training a deep learning, abstractive summarization model is lack of a large set of training summaries. This paper examines techniques for adapting from a labeled source domain to an unlabeled target domain in the context of an encoder-decoder model for text generation. In addition to adversarial domain adaptation (ADA), we introduce the use of artificial titles and sequential training to capture the grammatical style of the unlabeled target domain. Evaluation on adapting to/from news articles and Stack Exchange posts indicates that the use of these techniques can boost performance for both unsupervised adaptation as well as fine-tuning with limited target data.

Abstract

An open challenge in current telecommunication systems, including Skype and other existing research systems, is the lack of physical interaction and the consequently restricted feeling of connection for users. For example, such telecommunication systems cannot allow remote users to move pieces of a board game while playing with a local user. We propose that installing a teleoperated robot arm can address this problem by enabling remote physical interaction. We compare three methods for remote control to study users' sense of connection and how it relates to the agency and autonomy afforded by each control scheme.
Publication Details
  • ACM SIGMOD/PODS workshop on Human-In-the-Loop Data Analytics (HILDA)
  • Jun 30, 2019

Abstract

Manufacturing environments require changes in work procedures and settings based on changes in product demand affecting the types of products for production. Resource re-organization and time needed for worker adaptation to such frequent changes can be expensive. For example, for each change, managers in a factory may be required to manually create a list of inventory items to be picked up by workers. Uncertainty in predicting the appropriate pick-up time due to differences in worker-determined routes may make it difficult for managers to generate a fixed schedule for delivery to the assembly line. To address these problems, we propose OPaPi, a human-centric system that improves the efficiency of manufacturing by optimizing parts pick-up routes and schedules. OPaPi leverages frequent pattern mining and the traveling salesman problem solver to suggest rack placement for more efficient routes. The system further employs interactive visualization to incorporate an expert’s domain knowledge and different manufacturing constraints for real-time adaptive decision making.
Publication Details
  • Designing Interactive Systems (DIS) 2019
  • Jun 23, 2019
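To illustrate the routing component, here is a simple nearest-neighbor tour over rack locations. OPaPi itself combines frequent pattern mining with a dedicated traveling salesman problem solver, so this greedy heuristic is only a stand-in for the idea of ordering pick-up stops to shorten worker routes.

```python
def nearest_neighbor_route(start, racks):
    """Greedy nearest-neighbor tour: from the current position, always
    visit the closest unvisited rack next. A classic TSP heuristic;
    fast but not guaranteed optimal."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    route, remaining, current = [], list(racks), start
    while remaining:
        nxt = min(remaining, key=lambda r: dist(current, r))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route
```

A fixed route per parts list also makes pick-up times more predictable, which is the scheduling problem the abstract highlights.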

Abstract

As our landscape of wearable technologies proliferates, we find more devices situated on our heads. However, many challenges hinder them from widespread adoption---from their awkward, bulky form factor (today's AR and VR goggles) to their socially stigmatized designs (Google Glass) and a lack of a well-developed head-based interaction design language. In this paper, we explore a socially acceptable, large, head-worn interactive wearable---a hat. We report results from a gesture elicitation study with 17 participants, extract a taxonomy of gestures, and define a set of design concerns for interactive hats. Through this lens, we detail the design and fabrication of three hat prototypes capable of sensing touch, head movements, and gestures, and including ambient displays of several types. Finally, we report an evaluation of our hat prototype and insights to inform the design of future hat technologies.
Publication Details
  • International Conference on Weblogs and Social Media (ICWSM) 2019
  • Jun 12, 2019

Abstract

Millions of images are shared through social media every day. Yet, we know little about how the activities and preferences of users depend on the content of these images. In this paper, we seek to understand viewers' engagement with photos. We design a quantitative study to expand previous research on in-app visual effects (also known as filters) through the examination of visual content identified through computer vision. The study is based on an analysis of 4.9M Flickr images and is organized around three important engagement factors: likes, comments, and favorites. We find that filtered photos are not equally engaging across different categories of content. Photos of food and people attract more engagement when filters are used, while photos of natural scenes and photos taken at night are more engaging when left unfiltered. In addition to contributing to research around social media engagement and photography practices, our findings offer several design implications for mobile photo sharing platforms.
Publication Details
  • arxiv
  • Jun 5, 2019

Abstract

In multi-participant postings, as in online chat conversations, several conversations or topic threads may take place concurrently. This makes it difficult for readers reviewing the postings not only to follow discussions but also to quickly identify their essence. A two-step process, disentanglement of interleaved posts followed by summarization of each thread, addresses the issue, but disentanglement errors propagate to the summarization step, degrading the overall performance. To address this, we propose an end-to-end trainable encoder-decoder network for summarizing interleaved posts. The interleaved posts are encoded hierarchically, i.e., word-to-word (words in a post) followed by post-to-post (posts in a channel). The decoder also generates summaries hierarchically, thread-to-thread (generating thread representations) followed by word-to-word (generating summary words). Additionally, we propose a hierarchical attention mechanism for interleaved text. Overall, our end-to-end trainable hierarchical framework enhances performance over a sequence-to-sequence framework by 8% on a synthetic dataset of interleaved texts.
Publication Details
  • ACM TVX 2019
  • Jun 5, 2019
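The word-to-word then post-to-post hierarchy can be illustrated with a toy two-level encoder. The paper uses recurrent encoders with hierarchical attention; the sketch below just mean-pools embeddings at each level to show the structure, and the `embed` function is assumed to be supplied by the caller.

```python
def hierarchical_encode(channel, embed):
    """Two-level encoding sketch: average word vectors to get each post
    vector (word-to-word level), then average post vectors to get a
    channel vector (post-to-post level). Plain averaging stands in for
    the paper's recurrent encoders."""
    def mean(vectors):
        return [sum(xs) / len(xs) for xs in zip(*vectors)]

    post_vecs = [mean([embed(w) for w in post.split()]) for post in channel]
    return mean(post_vecs), post_vecs
```

A hierarchical decoder would run the same structure in reverse: produce one representation per thread, then emit the words of each thread summary from its representation.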

Abstract

Advancements in 360° cameras have increased the prevalence of 360° livestreams. In video conferencing, 360° cameras provide almost unrestricted visibility into a conference room for a remote viewer without the need for an articulating camera. However, local participants are left wondering whether someone is connected and where remote participants might be looking. To address this, we fabricated a prototype device that shows the gaze and presence of remote 360° viewers using a ring of LEDs that match the remote viewports. We discuss the long-term use of one of the prototypes in a lecture hall and present future directions for visualizing gaze presence in 360° video streams.
Publication Details
  • ACM TVX 2019
  • Jun 5, 2019

Abstract

Livestreaming and video calls have grown in popularity due to increased connectivity and advancements in mobile devices. Our interactions with these cameras are limited, as the cameras are either fixed or manually remote controlled. Here we present a Wizard-of-Oz elicitation study to inform the design of interactions with smart 360° cameras or robotic mobile desk cameras for use in video conferencing and livestreaming situations. There was an overall preference for devices that minimize distraction, as well as for devices that demonstrate an understanding of video-meeting context. We find that participants' interactions grow dynamically in complexity, which illustrates the need for deeper event semantics within the camera AI. Finally, we detail interaction techniques and design insights to inform the next generation of personal video cameras for streaming and collaboration.
Publication Details
  • Personal and Ubiquitous Computing
  • May 7, 2019

Abstract

Reliable location estimation has been a key enabler of many applications in the UbiComp space. Much progress has been made on the development of accurate indoor location systems, which form the foundation of many interesting applications, particularly in consumer scenarios. However, many location-based applications in enterprise settings also require addressing another facet of reliability: assurance. Without strong guarantees of a location estimate's legitimacy, stakeholders must explicitly balance the advantages offered with the risks of falsification. In this space, there are two key threats: replay attacks, where signal and sensor information is collected in one location and replayed in another to falsify a location estimation later in time; and wormhole attacks, where signal and sensor information is forwarded to a remote location by a colluding device to falsify location estimation in real time. In this work, we improve upon the state of the art in wormhole-resistant location estimation techniques. Specifically, we present the Location Anchor, which leverages a combination of technical solutions and social contracts to provide high-assurance proofs of device location that are resistant to wormhole attacks. Unlike existing work, the Location Anchor has minimal hardware costs, supports a rich tapestry of applications, and is compatible with commodity smartphone and tablet platforms. We show that the Location Anchor can extend existing replay-resistant location systems into wormhole-resistant location systems, even in the face of very aggressive attacker assumptions. We describe the protocols underlying the Location Anchor, as well as report on the efficacy of a prototype implementation.

Augmenting Knowledge Tracing by Considering Forgetting Behavior

Publication Details
  • The Web Conference 2019 (formerly WWW)
  • Apr 29, 2019

Abstract

We describe a corpus analysis method to extract terminology from a collection of technical specification books in the field of construction. Using statistics and word n-gram analyses, we extract the terminology of the domain and then perform pruning steps with linguistic patterns and internet queries to improve the quality of the final terminology. In this paper, we specifically focus on the improvements obtained by applying internet queries and patterns. These improvements are evaluated through a manual evaluation carried out by six experts in the field on technical specification books.
Publication Details
  • CHI 2019
  • Apr 27, 2019
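A minimal sketch of the frequency-plus-pruning idea behind the terminology extraction: count word n-grams and drop candidates that start or end with a stopword. This is a crude stand-in for the paper's linguistic-pattern and internet-query pruning; the stopword list and thresholds are our own illustrative choices.

```python
from collections import Counter

STOPWORDS = {"the", "of", "a", "an", "and", "to", "in", "is"}

def candidate_terms(texts, n=2, min_freq=2):
    """Extract frequent word n-grams as candidate domain terms, pruning
    candidates that start or end with a stopword (a stand-in for the
    paper's linguistic-pattern pruning). Returns (term, count) pairs,
    most frequent first."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            gram = tuple(words[i:i + n])
            if gram[0] not in STOPWORDS and gram[-1] not in STOPWORDS:
                counts[gram] += 1
    return [(" ".join(g), c) for g, c in counts.most_common()
            if c >= min_freq]
```

The surviving candidates would then go through the paper's further pruning steps before reaching the experts for manual evaluation.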

Abstract

Work breaks -- both physical and digital -- play an important role in productivity and workplace wellbeing. Yet, the growing availability of digital distractions from online content can turn breaks into prolonged "cyberloafing". In this paper, we present UpTime, a system that aims to support workers' transitions from breaks back to work -- moments susceptible to digital distractions. UpTime combines a browser extension and a chatbot; users interact with it through proactive and reactive chat prompts. By sensing transitions from inactivity, UpTime helps workers avoid distractions by temporarily blocking distracting websites automatically, while still giving them control to take necessary digital breaks. We report findings from a 3-week comparative field study with 15 workers. Our results show that automatic, temporary blocking at transition points can significantly reduce digital distractions and stress without sacrificing workers' sense of control. Our findings, however, also emphasize that overloading users' existing communication channels for chatbot interaction should be done thoughtfully.