Yulius Tjahjadi

Senior Software Engineer

Publications

2019

Abstract

Localization in an indoor and/or Global Positioning System (GPS)-denied environment is paramount to drive various applications that require locating humans and/or robots in an unknown environment. Various localization systems using different ubiquitous sensors such as cameras, radio frequency, and inertial measurement units have been developed. Most of these systems cannot accommodate scenarios with substantial changes in the environment, such as a large number of people (unpredictable) or a sudden change in the floor plan (unstructured). In this paper, we propose InFo, a system that leverages real-time visual information captured by surveillance cameras and augments it with images captured by the smart device user to deliver accurate discretized location information. Through our experiments, we demonstrate that our deep-learning-based InFo system provides a 10% improvement over a system that does not utilize this real-time information.
Publication Details
  • The 17th IEEE International Conference on Embedded and Ubiquitous Computing (IEEE EUC 2019)
  • Aug 2, 2019
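
The core idea of the abstract above, fusing a user-captured image with a concurrent surveillance frame to predict a discretized location, can be sketched as a two-stream classifier. The following is a minimal illustration, not the paper's actual architecture; all layer sizes, the 16-cell location grid, and the input resolution are assumptions.

```python
# A minimal two-stream sketch in the spirit of InFo (PyTorch): one CNN branch
# encodes the user's smartphone image, another encodes a concurrent
# surveillance frame, and the fused features are classified into discretized
# location cells. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

def make_encoder() -> nn.Sequential:
    # Small convolutional encoder producing a 64-dim feature vector.
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class InFoStyleLocalizer(nn.Module):
    def __init__(self, num_cells: int = 16):
        super().__init__()
        self.user_enc = make_encoder()   # smartphone-image branch
        self.surv_enc = make_encoder()   # surveillance-frame branch
        self.classifier = nn.Sequential( # fused features -> location cell logits
            nn.Linear(64 * 2, 128), nn.ReLU(),
            nn.Linear(128, num_cells),
        )

    def forward(self, user_img: torch.Tensor, surv_img: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.user_enc(user_img), self.surv_enc(surv_img)], dim=1)
        return self.classifier(fused)

model = InFoStyleLocalizer()
user = torch.randn(1, 3, 224, 224)   # user-captured image
surv = torch.randn(1, 3, 224, 224)   # real-time surveillance frame
print(model(user, surv).shape)       # torch.Size([1, 16])
```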

Abstract

Human activity forecasting from videos in routine-based tasks is an open research problem that has numerous applications in robotics, visual monitoring, and skill assessment. Currently, many challenges exist in activity forecasting because human actions are not fully observable from continuous recording. Additionally, a large number of human activities involve fine-grained articulated human motions that are hard to capture using frame-level representations. To overcome these challenges, we propose a method that forecasts human actions by learning the dynamics of local motion patterns extracted from dense trajectories using long short-term memory (LSTM). Experiments on a public dataset validate the effectiveness of our proposed method in activity forecasting and demonstrate large improvements over the baseline two-stream end-to-end model. We also learnt that human activity forecasting benefits from learning both short-range motion patterns and long-term dependencies between actions.
Publication Details
  • ACM TVX 2019
  • Jun 5, 2019
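
As a rough illustration of the method described above, an LSTM learning the dynamics of local motion patterns from dense trajectories, the sketch below predicts the upcoming action label from a sequence of trajectory descriptors. This is not the authors' implementation: the 426-dimensional feature follows the common dense-trajectory descriptor layout, and the hidden size and 10-action vocabulary are illustrative assumptions.

```python
# A minimal forecasting sketch (PyTorch): an LSTM over a sequence of
# dense-trajectory descriptors predicts the next action label. Feature and
# vocabulary sizes are assumptions, not values from the paper.
import torch
import torch.nn as nn

class TrajectoryForecaster(nn.Module):
    def __init__(self, feat_dim: int = 426, hidden: int = 256, num_actions: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, traj_seq: torch.Tensor) -> torch.Tensor:
        # traj_seq: (batch, time, feat_dim) local motion descriptors over time
        out, _ = self.lstm(traj_seq)
        return self.head(out[:, -1])  # forecast the next action from the final state

model = TrajectoryForecaster()
seq = torch.randn(2, 30, 426)   # 30 observed timesteps of trajectory features
logits = model(seq)             # (2, 10) scores for the action about to happen
print(logits.shape)
```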

Abstract

Advancements in 360° cameras have increased the prevalence of 360° livestreams. In video conferencing, a 360° camera gives a remote viewer almost unrestricted visibility into a conference room without the need for an articulating camera. However, local participants are left wondering whether anyone is connected and where remote participants might be looking. To address this, we fabricated a prototype device that shows the gaze and presence of remote 360° viewers using a ring of LEDs that match the remote viewports. We discuss the long-term use of one of the prototypes in a lecture hall and present future directions for visualizing gaze presence in 360° video streams.
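
The central computation such a device performs, converting each remote viewer's viewport direction into an LED position on the ring, can be sketched in a few lines. The 24-LED count and the yaw convention below are assumptions for illustration, not the prototype's actual parameters.

```python
# A small sketch of mapping remote viewers' viewport yaws (in degrees around
# the 360° camera) to indices on the LED ring, so local participants can see
# who is connected and where they are looking. LED count is an assumption.
NUM_LEDS = 24

def yaw_to_led(yaw_degrees: float, num_leds: int = NUM_LEDS) -> int:
    """Map a viewport yaw to the nearest LED on the ring."""
    return round((yaw_degrees % 360.0) / 360.0 * num_leds) % num_leds

def ring_state(viewer_yaws: list[float]) -> list[bool]:
    """One flag per LED; an empty input (no remote viewers) leaves the ring dark."""
    leds = [False] * NUM_LEDS
    for yaw in viewer_yaws:
        leds[yaw_to_led(yaw)] = True
    return leds

print(ring_state([0.0, 95.0, 270.0]))  # lights LEDs 0, 6, and 18
```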