Publications

By Lyndon Kennedy

“Notice: FX Palo Alto Laboratory will be closing. All Research and related operations will cease as of June 30, 2020.”

2020

Abstract

Managing post-surgical pain is critical for successful surgical outcomes. One of the challenges of pain management is accurately assessing the pain level of patients. Self-reported numeric pain ratings are limited because they are subjective, can be affected by mood, and can influence the patient’s perception of pain when making comparisons. In this paper, we introduce an approach that analyzes 2D and 3D facial keypoints of post-surgical patients to estimate their pain intensity level. Our approach leverages the previously unexplored capabilities of a smartphone to capture a dense 3D representation of a person’s face as input for pain intensity level estimation. Our contributions are a data collection study with post-surgical patients to collect ground-truth labeled sequences of 2D and 3D facial keypoints for developing a pain estimation algorithm, a pain estimation model that uses multiple instance learning to overcome inherent limitations in facial keypoint sequences, and the preliminary results of the pain estimation model using 2D and 3D features with comparisons to alternate approaches.
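The multiple-instance-learning idea mentioned above can be illustrated with a minimal sketch: treat each keypoint sequence as a bag of frame-level feature vectors and let the most pain-indicative frame determine the bag's score, so that the many neutral frames in a sequence do not dilute the estimate. This is not the paper's actual model; the linear scorer, weights, and example data below are all hypothetical.

```python
import numpy as np

def instance_scores(bag, w, b):
    """Score each frame-level facial-feature vector with a linear model (hypothetical)."""
    return bag @ w + b

def bag_pain_score(bag, w, b):
    """MIL aggregation: a sequence (bag) of per-frame features is scored by
    its maximum instance score (max pooling over frames)."""
    return float(np.max(instance_scores(bag, w, b)))

# Hypothetical example: two 4-frame sequences with 3 features per frame.
rng = np.random.default_rng(0)
w, b = np.array([0.5, -0.2, 0.1]), 0.0
calm = rng.normal(0.0, 0.1, size=(4, 3))
pained = calm.copy()
pained[2] += np.array([3.0, 0.0, 0.0])  # a single high-pain frame

# The one expressive frame dominates the bag score.
assert bag_pain_score(pained, w, b) > bag_pain_score(calm, w, b)
```

Max pooling is one common MIL aggregator; mean pooling or attention-weighted pooling are alternatives when several frames carry the label.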
2019
Publication Details
  • International Conference on Weblogs and Social Media (ICWSM) 2019
  • Jun 12, 2019

Abstract

Millions of images are shared through social media every day. Yet, we know little about how the activities and preferences of users depend on the content of these images. In this paper, we seek to understand viewers’ engagement with photos. We design a quantitative study to expand previous research on in-app visual effects (also known as filters) through the examination of visual content identified through computer vision. This study is based on an analysis of 4.9M Flickr images and is organized around three important engagement factors: likes, comments, and favorites. We find that filtered photos are not equally engaging across different categories of content. Photos of food and people attract more engagement when filters are used, while photos of natural scenes and photos taken at night are more engaging when left unfiltered. In addition to contributing to the research around social media engagement and photography practices, our findings offer several design implications for mobile photo sharing platforms.
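The per-category comparison described above amounts to grouping photos by content category and filter use and comparing mean engagement within each group. A minimal sketch of that analysis, using entirely hypothetical records rather than the study's Flickr data:

```python
from statistics import mean

# Hypothetical records: (category, filtered?, likes).
photos = [
    ("food", True, 30), ("food", False, 12), ("food", True, 26),
    ("nature", True, 8), ("nature", False, 20), ("nature", False, 18),
]

def mean_likes(category, filtered):
    """Mean likes for photos in a category, split by filter use."""
    vals = [n for c, f, n in photos if c == category and f == filtered]
    return mean(vals) if vals else 0.0

# Mirrors the paper's finding: filtered food photos draw more engagement,
# while unfiltered nature photos do.
assert mean_likes("food", True) > mean_likes("food", False)
assert mean_likes("nature", False) > mean_likes("nature", True)
```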
Publication Details
  • IEEE 2nd International Conference on Multimedia Information Processing and Retrieval
  • Mar 14, 2019

Abstract

We present an approach to detect speech impairments from video of people with aphasia, a neurological condition that affects the ability to comprehend and produce speech. To counter inherent privacy issues, we propose a cross-media approach using only visual facial features to detect speech properties without listening to the audio content of speech. Our method uses facial landmark detections to measure facial motion over time. We show how to detect speech and pause instances based on temporal mouth shape analysis and identify repeating mouth patterns using a dynamic warping mechanism. We relate our developed features for pause frequency, mouth pattern repetitions, and pattern variety to actual symptoms of people with aphasia in the AphasiaBank dataset. Our evaluation shows that our developed features are able to reliably differentiate dysfluent speech production of people with aphasia from those without aphasia with an accuracy of 0.86. A combination of these handcrafted features and further statistical measures on talking and repetition improves classification performance to an accuracy of 0.88.
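The temporal mouth-shape analysis described above can be sketched as follows: track a per-frame mouth-openness signal derived from lip landmarks and mark sustained low-motion runs as pauses. This is a simplified illustration, not the paper's method; the threshold and minimum run length are hypothetical parameters.

```python
import numpy as np

def mouth_openness(upper_lip, lower_lip):
    """Per-frame vertical lip distance from landmark coordinates (n_frames, 2)."""
    return np.linalg.norm(upper_lip - lower_lip, axis=1)

def detect_pauses(openness, threshold=0.05, min_len=3):
    """Return (start, end) frame spans where mouth openness stays below
    threshold for at least min_len consecutive frames."""
    quiet = openness < threshold
    pauses, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i
        elif not q and start is not None:
            if i - start >= min_len:
                pauses.append((start, i))
            start = None
    if start is not None and len(quiet) - start >= min_len:
        pauses.append((start, len(quiet)))
    return pauses

# Hypothetical openness signal: talking, a 4-frame pause, then talking again.
openness = np.array([0.3, 0.3, 0.01, 0.01, 0.01, 0.01, 0.4, 0.4])
assert detect_pauses(openness) == [(2, 6)]
```

Pause frequency then falls out as the number of detected spans per unit time; the repetition features would additionally compare mouth-shape subsequences, e.g. with dynamic time warping.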