Stanford DAWN at SysML 2018

The DAWN PIs recently helped start SysML, a new research conference targeting work at the intersection of Systems and Machine Learning. The first conference was very well attended, with over 200 poster submissions and sold-out registration, demonstrating the huge interest in this new and evolving research area from both academia and industry. Many DAWN members presented posters on our latest research at SysML; in this post, we highlight that work.

Accelerating Model Search with Model Batching

Model search applications today can take thousands of GPU hours to run. These applications often train many extremely similar models; ModelBatch exploits this similarity to improve GPU utilization by batching model execution, sharing input preprocessing across models and making fuller use of the SIMD cores on a modern GPU. [extended abstract] [poster]
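
To make the idea concrete, here is a minimal PyTorch sketch of model batching (illustrative only, not the ModelBatch API): K similar single-layer models are trained in one pass by stacking their weights and issuing a single batched matrix multiply, so one kernel launch services all K models.

```python
import torch

K, D_in, D_out, B = 8, 64, 10, 32           # models, input dim, classes, batch size
W = torch.randn(K, D_in, D_out, requires_grad=True)  # K stacked weight matrices

opt = torch.optim.SGD([W], lr=0.1)
x = torch.randn(B, D_in)                     # one shared, preprocessed input batch
y = torch.randint(0, D_out, (B,))

for _ in range(100):
    opt.zero_grad()
    # (K, B, D_in) @ (K, D_in, D_out) -> (K, B, D_out): one bmm serves all K models
    logits = torch.bmm(x.expand(K, B, D_in), W)
    loss = torch.nn.functional.cross_entropy(
        logits.reshape(K * B, D_out), y.repeat(K))
    loss.backward()                          # gradients for all K models at once
    opt.step()
```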

BlazeIt: An Optimizing Query Engine for Video at Scale

Running semantic queries like "When did buses pass by a given traffic intersection?" over video is extremely time-consuming and computationally inefficient today. BlazeIt makes such queries easier to express and faster to run by combining a new SQL-like query language with a query optimizer that leverages the spatial and temporal locality inherent in video. [extended abstract] [poster]
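
As a rough illustration of one optimization such an engine can apply, here is a hedged Python sketch of proxy-model filtering: a cheap model rules out most frames before an expensive detector is invoked. `cheap_score` and `full_detector` are hypothetical stand-ins, not BlazeIt's actual API.

```python
def find_bus_frames(frames, cheap_score, full_detector, threshold=0.2):
    """Return indices of frames where the expensive detector finds a bus."""
    hits = []
    for t, frame in enumerate(frames):
        if cheap_score(frame) < threshold:   # cheap proxy: almost surely no bus
            continue                         # skip the expensive model entirely
        if "bus" in full_detector(frame):    # expensive detector confirms
            hits.append(t)
    return hits
```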

DAWNBench: An End-to-End Deep Learning Benchmark and Competition

Deep learning has seen rapid advances in software systems, algorithms, and hardware that make computation more efficient, but many of these techniques sacrifice model accuracy or increase time to convergence. Existing deep learning benchmarks, however, measure either accuracy or performance, but not both. DAWNBench is an end-to-end benchmark and competition that reasons about both jointly, for training and inference. [extended abstract] [poster]
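
Here is a minimal sketch of DAWNBench's headline training metric, time-to-accuracy: the wall-clock time until a model first reaches a target validation accuracy (for example, 93% top-5 accuracy on ImageNet). The log format below is illustrative.

```python
def time_to_accuracy(log, target=0.93):
    """log: list of (elapsed_seconds, validation_accuracy) pairs per epoch."""
    for elapsed, accuracy in log:
        if accuracy >= target:
            return elapsed                  # first time the target is reached
    return None                             # training never hit the target

log = [(600, 0.81), (1200, 0.90), (1800, 0.94)]
assert time_to_accuracy(log) == 1800
```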

YellowFin: Adaptive Optimization for (A)synchronous Systems

Tuning the learning rate and momentum schedules for standard SGD optimizers is extremely time-consuming; adaptive methods like Adam and AdaGrad can help, but are often observed to give worse test performance than a carefully tuned SGD optimizer. YellowFin automatically tunes the momentum and learning rate of the SGD optimizer; an extended variant of YellowFin also outperforms existing optimizers in asynchronous-parallel settings. [extended abstract] [poster]
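
For reference, here is the momentum SGD (heavy-ball) update that YellowFin tunes, as a minimal NumPy sketch; `lr` and `mu` are fixed below, whereas YellowFin adapts them on the fly (the actual adaptation rules are in the paper).

```python
import numpy as np

def momentum_sgd_step(w, v, grad, lr=0.01, mu=0.9):
    """One heavy-ball step: velocity accumulates a decaying sum of gradients."""
    v = mu * v - lr * grad(w)
    return w + v, v

# Example: minimize f(w) = ||w||^2 / 2, whose gradient is simply w.
w, v = np.ones(3), np.zeros(3)
for _ in range(200):
    w, v = momentum_sgd_step(w, v, grad=lambda w: w)
print(w)  # close to the optimum at zero
```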

Efficient Mergeable Quantile Sketches using Moments

The volume of data in modern deployments makes it difficult to compute quantiles over fine-grained subpopulations (e.g., by device make, model, or type) at interactive speeds. We propose a small, easily updatable moments-based sketch that answers these queries efficiently, delivering order-of-magnitude speedups over comparably accurate sketches. [extended abstract] [poster]
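
A minimal sketch of the data structure's core idea: keep the count, min, max, and the first k power sums, so merging two sketches is just element-wise addition. (The paper also tracks log-moments and estimates quantiles from the moments via maximum entropy; both are omitted here.)

```python
class MomentSketch:
    def __init__(self, k=5):
        self.n = 0
        self.min_val, self.max_val = float("inf"), float("-inf")
        self.sums = [0.0] * k               # running sums of x^1 .. x^k

    def add(self, x):
        self.n += 1
        self.min_val = min(self.min_val, x)
        self.max_val = max(self.max_val, x)
        p = 1.0
        for i in range(len(self.sums)):
            p *= x                          # p is now x^(i+1)
            self.sums[i] += p

    def merge(self, other):
        # Merging is element-wise addition: cheap, associative, lossless.
        self.n += other.n
        self.min_val = min(self.min_val, other.min_val)
        self.max_val = max(self.max_val, other.max_val)
        self.sums = [a + b for a, b in zip(self.sums, other.sums)]
```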

Finding Heavily-Weighted Features with the Weight-Median Sketch

Classification algorithms often need to be run on streaming data in memory-constrained settings. The Weight-Median Sketch is a new sub-linear-space sketch for learning compressed linear classifiers over data streams while supporting the efficient recovery of large-magnitude weights in the model. [extended abstract] [poster]
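
A simplified sketch of the core idea (not the paper's exact construction): hash each feature into a Count-Sketch-style table of weights, train on that compressed representation, and recover a feature's weight as a median over its signed cells. The perceptron-style update below is an illustrative stand-in for the paper's online gradient descent on logistic loss.

```python
import random
import statistics

class WeightMedianSketch:
    def __init__(self, depth=5, width=256, lr=0.1, seed=0):
        rng = random.Random(seed)
        self.salts = [rng.random() for _ in range(depth)]
        self.width, self.lr = width, lr
        self.table = [[0.0] * width for _ in range(depth)]

    def _cells(self, feature):
        # For each row: a bucket index and a pseudo-random sign for this feature.
        for row, salt in enumerate(self.salts):
            h = hash((salt, feature))
            yield row, h % self.width, 1.0 if (h >> 20) & 1 else -1.0

    def estimate(self, feature):
        # Median over rows of the signed cells approximates the true weight.
        return statistics.median(
            sign * self.table[row][col] for row, col, sign in self._cells(feature))

    def margin(self, features):
        return sum(self.estimate(f) for f in features)

    def update(self, features, label):
        # Perceptron-style update on the sketched weights; label is -1 or +1.
        if label * self.margin(features) <= 0:
            for f in features:
                for row, col, sign in self._cells(f):
                    self.table[row][col] += self.lr * label * sign

sk = WeightMedianSketch()
sk.update(["free", "winner"], label=+1)     # illustrative spam-like features
print(sk.estimate("free"))
```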

We also hope to see you next year at SysML 2019! Current DAWN project members Matei Zaharia and Virginia Smith will be the Program Chairs for the next conference, and many of us will surely be there to present our latest ideas at the intersection of Systems and Machine Learning. Check www.sysml.cc for updates!