Much broadcasting and multimedia content is based on archived film and video material. Searching for the most striking video sequence is both difficult and time consuming, and depends greatly on how the material has been archived. The VICAR project (Video Indexing, Classification, Annotation and Retrieval) developed a software tool, Video Navigator, which partly automates the annotation work. Each shot of the video is analyzed, and representative frames are assigned a composite index containing information on color, brightness, camera movement, and other features. Beyond that, the frames are classified into user-specific classes, such as VIPs, settings and objects. The archived material can be searched with queries based on text, or on a visual example using either still or moving images, and allows for queries based on person, location, camera motion and object motion.
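The first step of such a pipeline, splitting the video into single shots, is commonly done by comparing color histograms of consecutive frames. The following is a minimal sketch of that general technique, not the actual VICAR implementation: frames are modeled as flat lists of grayscale pixel values, and a shot cut is declared wherever the histogram distance between neighbouring frames exceeds a threshold (the frame format, bin count and threshold are illustrative assumptions).

```python
# Minimal sketch of histogram-based shot-boundary detection.
# NOT the VICAR implementation: frames are simplified to flat lists
# of 0-255 grayscale pixel values; bins and threshold are assumptions.

def histogram(frame, bins=8):
    """Quantize 0-255 pixel intensities into a normalized histogram."""
    counts = [0] * bins
    for px in frame:
        counts[px * bins // 256] += 1
    total = len(frame)
    return [c / total for c in counts]

def hist_distance(h1, h2):
    """L1 distance between two normalized histograms (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(frames, threshold=0.5):
    """Return frame indices where a new shot starts."""
    cuts = []
    prev = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = histogram(frame)
        if hist_distance(prev, cur) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Synthetic "video": three dark frames followed by three bright frames,
# so exactly one cut is expected at frame index 3.
dark = [10] * 100
bright = [240] * 100
video = [dark] * 3 + [bright] * 3
print(detect_cuts(video))  # [3]
```

Once shots are delimited this way, a representative frame per shot can be indexed with the color, brightness and motion features the text describes, enabling query-by-example retrieval.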
Sound & Vision, Joanneum Research (AT), Sentient Machine Research, Österreichischer Rundfunk (AT), Vienna Center for Parallel Computation (AT), VU University Amsterdam, Südwestrundfunk (DE)