Journal Article

NoScope: optimizing neural network queries over video at scale

Abstract

Recent advances in computer vision, in the form of deep neural networks, have made it possible to query increasing volumes of video data with high accuracy. However, neural network inference is computationally expensive at scale: applying a state-of-the-art object detector in real time (i.e., 30+ frames per second) to a single video requires a $4000 GPU. In response, we present NoScope, a system for querying videos that can reduce the cost of neural network video analysis by up to three orders of magnitude via inference-optimized model search. Given a target video, an object to detect, and a reference neural network, NoScope automatically searches for and trains a sequence, or cascade, of models that preserves the accuracy of the reference network but is specialized to the target video and is therefore far less computationally expensive. NoScope cascades two types of models: specialized models that forgo the full generality of the reference model but faithfully mimic its behavior for the target video and object; and difference detectors that highlight temporal differences across frames. We show that the optimal cascade architecture differs across videos and objects, so NoScope uses an efficient cost-based optimizer to search across models and cascades. With this approach, NoScope achieves two- to three-order-of-magnitude speedups (265-15,500x real time) on binary classification tasks over fixed-angle webcam and surveillance video while maintaining accuracy within 1-5% of state-of-the-art neural networks.
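
As a sketch of the control flow the abstract describes, the toy Python below cascades a cheap frame-difference check and a specialized model, falling back to the reference network only on frames where the cheap tiers are unsure. All function names and the low/high firing thresholds are hypothetical stand-ins, not NoScope's actual implementation; the thresholds are the kind of per-video parameters its cost-based optimizer would tune.

```python
import numpy as np

def difference_detector(prev_frame, frame, threshold=10.0):
    """Cheap first tier: flag the frame for classification only if it
    differs enough from the previous frame (mean absolute difference)."""
    if prev_frame is None:
        return True
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    return float(diff.mean()) > threshold

def specialized_model(frame):
    """Toy stand-in for the specialized model: returns a confidence in
    [0, 1] that the target object is present. A real one would be a
    small CNN trained to mimic the reference network on this video."""
    return float(frame.mean()) / 255.0

def reference_model(frame):
    """Toy stand-in for the expensive full reference network."""
    return bool(frame.mean() > 128)

def cascade_query(frames, low=0.2, high=0.8):
    """Label each frame True/False for object presence, invoking the
    reference model only when the specialized model is uncertain."""
    labels, prev, last_label = [], None, False
    for frame in frames:
        if not difference_detector(prev, frame):
            labels.append(last_label)  # frame unchanged: reuse prior label
        else:
            score = specialized_model(frame)
            if score >= high:
                last_label = True   # specialized model confidently fires
            elif score <= low:
                last_label = False  # confidently does not fire
            else:
                last_label = reference_model(frame)  # uncertain: fall back
            labels.append(last_label)
        prev = frame
    return labels

# Usage on synthetic grayscale frames:
frames = [np.full((64, 64), v, dtype=np.uint8) for v in (10, 10, 200, 205, 120)]
print(cascade_query(frames))  # [False, False, True, True, False]
```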

Project page

A system for querying videos at scale with neural networks, accelerating neural network inference by over 1000× via model specialization and dynamic cascades.
Author(s)
Daniel Kang
John Emmons
Firas Abuzaid
Peter Bailis
Matei Zaharia
Journal Name
Proceedings of the VLDB Endowment
Publication Date
August 1, 2017
DOI
10.14778/3137628.3137664
Publisher
ACM