Web Client - PARC-projects/video-query-home

Front end for the Video Query application

Overview

The Video Query app enables agile, user-in-the-loop mining of video datasets. Rather than requiring actions to be specified up front and data to be labeled for those actions, Video Query supports early exploration of evolving questions and ideas by the user. Effort increases only as the user iterates toward, focuses on, and validates a particular line of inquiry. The app supports agile browsing through a large dataset, so the user can discover and zoom in on relevant data dynamically and interactively.

How it works

A user of the Video Query app begins by choosing a video clip of interest, for instance a forward-facing dash-cam clip of a vehicle stopping for pedestrians. The user receives a small sample of possible matches, and after inspecting these candidates, validates or invalidates them and requests a refined search. The refined search returns a new sample of candidates that reflects the scoring from the previous round. Once enough iterations have been run to satisfy the user, the final video search criterion is used to retrieve all matches in the dataset of interest. Continuing the example above, these matches could be all clips in which a vehicle stops for pedestrians at a marked crosswalk, or, more specifically, at a marked crosswalk in clear weather.
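
To make the iteration loop concrete, here is a minimal TypeScript sketch of how a web client might drive it. The types, endpoint paths, and function names are assumptions chosen for illustration; they are not the project's actual API.

```typescript
// Hypothetical types and endpoints: illustrative only, not the real Video Query API.

interface Candidate {
  clipId: string;        // identifier of a candidate video clip
  score: number;         // similarity score from the current search round
  validated?: boolean;   // user feedback: true = match, false = not a match
}

interface QueryRound {
  queryId: string;         // identifier of the evolving query
  candidates: Candidate[]; // sample of possible matches to review
}

// Start a query from a clip of interest (e.g., a dash-cam clip of a
// vehicle stopping for pedestrians). Endpoint path is an assumption.
async function startQuery(clipId: string): Promise<QueryRound> {
  const res = await fetch('/api/queries', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ clipId }),
  });
  return res.json();
}

// Submit the user's validations for this round and request a refined
// sample of candidates that reflects that feedback.
async function refineQuery(round: QueryRound): Promise<QueryRound> {
  const res = await fetch(`/api/queries/${round.queryId}/refine`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ feedback: round.candidates }),
  });
  return res.json();
}

// Once the user is satisfied, retrieve every match in the dataset.
async function finalizeQuery(queryId: string): Promise<Candidate[]> {
  const res = await fetch(`/api/queries/${queryId}/matches`);
  return res.json();
}
```

In the app itself, each call to a function like `refineQuery` would correspond to the user submitting their validations for one round from the UI before requesting the next sample.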

Pages
