Chidansh Bhatt, Ph.D.

Research Scientist

Chidansh is a Research Scientist at FXPAL. His research focuses on context-related media search, classification, recommendation, and interactive visualization. His further interests include multimedia data mining, information retrieval, machine learning, natural language processing, big data analytics, IoT, HCI, semantic analytics (concept/action/event/object detection, novelty re-ranking), and the analysis of social media data and user behavior using crowdsourcing techniques.

Prior to joining FXPAL, Chidansh worked as an assistant professor at the Indian Institute of Technology (IIT) Roorkee, India. He was a post-doctoral researcher at the IDIAP Research Institute, Switzerland, where he developed a multimodal recommendation and summarization system with visualization for scientific material (video lectures); his system secured first place in the hyperlinking task of the MediaEval benchmarking evaluation. Chidansh also worked as a researcher at the Big Data Experimental Laboratory, Hitachi Research & Development Ltd., Singapore, and completed a research internship at the University of Winnipeg, Canada. He actively serves as a technical program committee member and reviewer for leading international conferences and journals (e.g., best reviewer awards at ICME 2014 and ETRI 2012).

Dr. Bhatt received a Ph.D. in computer science from the National University of Singapore (NUS) in 2012. He also holds an M.E. in internet science and engineering from the Indian Institute of Science (IISc) and a B.E. in information science and engineering from Visvesvaraya Technological University (VTU).

Publications

2017
Publication Details
  • TRECVID Workshop
  • Mar 1, 2017

Abstract

This is a summary of our participation in the TRECVID 2016 video hyperlinking task (LNK). We submitted four runs in total. A baseline system combined established vector-space text indexing with cosine similarity. Our other runs explored the use of distributed word representations in combination with fine-grained inter-segment text similarity measures.
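
The baseline described above can be illustrated with a minimal sketch: index segment transcripts with TF-IDF and rank candidate target segments by cosine similarity to an anchor segment. The transcripts, anchor text, and variable names below are illustrative placeholders, not data or code from the actual submission.

    # Minimal sketch of a vector-space hyperlinking baseline (illustrative data only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    target_transcripts = [
        "the chef demonstrates how to fillet a fish before grilling",
        "a panel debates the economic impact of new fishing quotas",
        "step by step instructions for grilling fish over charcoal",
    ]
    anchor_transcript = "how to prepare and grill fresh fish"

    # Build TF-IDF vectors for the candidate target segments and the anchor.
    vectorizer = TfidfVectorizer(stop_words="english")
    target_vectors = vectorizer.fit_transform(target_transcripts)
    anchor_vector = vectorizer.transform([anchor_transcript])

    # Rank candidate target segments by cosine similarity to the anchor.
    scores = cosine_similarity(anchor_vector, target_vectors).ravel()
    for idx in scores.argsort()[::-1]:
        print(f"{scores[idx]:.3f}  {target_transcripts[idx]}")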
2016
Publication Details
  • ACM International Conference on Multimedia Retrieval (ICMR)
  • Jun 6, 2016

Abstract

We propose a method for extractive summarization of audiovisual recordings focusing on topic-level segments. We first build a content similarity graph between all segments of all documents in the collection, using word vectors from the transcripts, and then select the most central segments for the summaries. We evaluate the method quantitatively on the AMI Meeting Corpus using gold standard reference summaries and the ROUGE metric, and qualitatively on lecture recordings using a novel two-tiered approach with human judges. The results show that our method compares favorably with others in terms of ROUGE, and outperforms the baselines for human scores, thus also validating our evaluation protocol.
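
A minimal sketch of the centrality-based selection step described in the abstract is given below. TF-IDF vectors stand in for the word-vector segment representations used in the paper, centrality is approximated by summed pairwise similarity, and the segment texts and summary length are illustrative assumptions.

    # Minimal sketch of centrality-based extractive summarization over topic segments.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def summarize(segments, num_segments=2):
        """Select the most central segments from a content similarity graph."""
        vectors = TfidfVectorizer(stop_words="english").fit_transform(segments)
        sim = cosine_similarity(vectors)      # dense similarity graph over segments
        np.fill_diagonal(sim, 0.0)            # ignore self-similarity
        centrality = sim.sum(axis=1)          # degree-style centrality per segment
        top = np.argsort(centrality)[::-1][:num_segments]
        return [segments[i] for i in sorted(top)]  # keep original segment order

    segments = [
        "The kickoff covered project goals, milestones, and the evaluation plan.",
        "Lunch options near the venue were briefly discussed.",
        "Milestones were refined and the evaluation plan was assigned owners.",
    ]
    print(summarize(segments, num_segments=2))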