This paper reports our design and implementation of an automatic lecture-room camera-management system. The motivation for building this system is to facilitate online lecture access and reduce the expense of producing high-quality lecture videos. The goal of this project is a camera-management system that can perform as well as a human video-production team. To achieve this goal, our system collects the audio/video signals available in the lecture room and uses this multimodal information to direct our video cameras toward interesting events. Compared with previous work, which has tended to be technology-centric, we started with interviews with professional video producers and used their knowledge and expertise to create video-production rules. We then targeted technology components that allowed us to implement a substantial portion of these rules, including the design of a virtual video director, a speaker cinematographer, and an audience cinematographer. The complete system is installed in parallel with a human-operated video-production system in a mid-sized corporate lecture room and is used for broadcasting lectures over the web. The system's performance was compared with that of a human operator via a user study. Results suggest that our system's quality is close to that of a human-controlled system.