David Ayman Shamma, Ph.D.

Senior Research Scientist


Dr. David A. Shamma is a senior research scientist at FX Palo Alto Laboratory (FXPAL). Prior to FXPAL, he was a principal investigator at Centrum Wiskunde & Informatica (CWI), where he led a project on Artificial Intelligence (AI), wearables, and fashion. Before CWI, he was the founding director of the HCI Research Group at Yahoo Labs and Flickr. He investigates social computing systems (how people interact, engage, and share media experiences both online and in the world) through three avenues: AI, systems & prototypes, and qualitative research; his goal is to create and understand methods for media-mediated communication in small environments and at web scale. Ayman holds a B.S./M.S. from the Institute for Human and Machine Cognition at The University of West Florida and a Ph.D. in Computer Science from the Intelligent Information Laboratory at Northwestern University. He has taught courses at the Medill School of Journalism and in several Computer Science and Studio Art departments. Prior to his Ph.D., he was a visiting research scientist in the Center for Mars Exploration at NASA Ames Research Center. Ayman’s research on technology and creative acts has attracted international attention from Wired, New York Magazine, and the Library of Congress, to name a few. Outside of the lab, Ayman’s media art installations have been reviewed by The New York Times and Chicago Magazine and exhibited internationally, including at the Amsterdam Dance Event, Second City Chicago, the Berkeley Art Museum, SIGGRAPH, the Chicago Improv Festival, and Wired NextFest/NextMusic.

Specialties: Artificial Intelligence, HCI, Photos, Video, Synchronous Interaction, Microblogging, Sharing, Social Networks, Design, Socio-Digital Systems.

Publications

2018
Publication Details
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Apr 21, 2018

Abstract

Massive Open Online Course (MOOC) platforms have scaled online education to unprecedented enrollments, but remain limited by their rigid, predetermined curricula. This paper presents MOOCex, a technique that offers a more flexible learning experience for MOOCs. MOOCex can recommend lecture videos across different courses with multiple perspectives, considering both the video content and the sequential inter-topic relationships mined from course syllabi. MOOCex is also equipped with an interactive visualization that allows learners to explore the semantic space of recommendations within their current learning context. The results of comparisons to traditional methods, including content-based recommendation and ranked-list presentation, indicate the effectiveness of MOOCex. Further, feedback from MOOC learners and instructors suggests that MOOCex enhances both MOOC-based learning and teaching.
Publication Details
  • CHI 2018
  • Apr 21, 2018

Abstract

This paper describes the development of a multi-sensory clubbing experience which was deployed during a two-day event within the context of the Amsterdam Dance Event in October 2016 in Amsterdam. We present how the entire experience was developed end-to-end and deployed at the event through the collaboration of several project partners from industries such as art and design, music, food, technology, and research. Central to the system are smart textiles, namely wristbands equipped with Bluetooth LE sensors, which were used to sense people attending the dance event. We describe the components of the system, the development process, the collaboration between the involved entities, and the event itself. To conclude the paper, we highlight insights gained from conducting a real-world research deployment across many collaborators and stakeholders.

Rethinking Summarization and Storytelling for Modern Social Multimedia

Publication Details
  • Multimedia Modeling
  • Feb 5, 2018

Abstract

Traditional summarization initiatives have focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which risks pigeonholing the summarization task when confronted with modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains, working on single-source media types. We argue that summarization and story generation systems need to refocus the problem space in order to meet the information needs of the age of user-generated content in different formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.
2017
Publication Details
  • ACM MM Workshop
  • Oct 23, 2017

Abstract

Humans are complex, and their behaviors follow complex multimodal patterns; however, to solve many social computing problems, one often examines this complexity through large-scale yet single-point data sources or methodologies. While single-data/single-method techniques, fueled by large-scale data, have enjoyed some success, they are not without fault. Often, with one type of data and method, all the other aspects of human behavior are overlooked, discarded, or, worse, misrepresented. We identify this as two distinct problems: first, social computing problems that cannot be solved using a single data source and need intelligence from multiple modalities; and second, social behavior that cannot be fully understood using only one form of methodology. Throughout this talk, we discuss these problems and their implications, illustrate them with examples, and propose new directions for properly approaching social computing research in today’s age.