DAWNBench

An End-to-End Deep Learning Benchmark and Competition

DAWNBench is a benchmark suite for end-to-end deep learning training and inference. Computation time and cost are critical resources in building deep models, yet many existing benchmarks focus solely on model accuracy. DAWNBench provides a reference set of common deep learning workloads for quantifying training time, training cost, inference latency, and inference cost across different optimization strategies, model architectures, software frameworks, clouds, and hardware.

Image Classification on ImageNet

Training Time

Objective: Time taken to train an image classification model to a top-5 validation accuracy of 93% or greater on ImageNet.

Rank | Date | Time to 93% Accuracy | Model | Submitter | Hardware | Framework
1 | Jan 2018 | 14:37:59 | ResNet50 | DIUX | Amazon EC2 [p3.16xlarge] | TensorFlow 1.5, tensorpack 0.8.1
2 | Dec 2017 | 1 day, 20:28:27 | ResNet152 | ppwwyyxx | 8 P100 / 512 GB / 40 CPU (NVIDIA DGX-1) | tensorpack 0.8.0
3 | Oct 2017 | 10 days, 3:59:59 | ResNet152 | Stanford DAWN | 8 K80 / 488 GB / 32 CPU (Amazon EC2 [p2.8xlarge]) | MXNet 0.11.0
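
The metric is wall-clock time from the start of training until the first validation pass that meets the 93% top-5 threshold. A minimal sketch of how such a time-to-accuracy harness can be structured (Python; `train_one_epoch` and `evaluate_top5` are hypothetical helpers standing in for a real training loop and validation pass, and whether evaluation passes count toward the clock is a detail of the official rules -- here they are simply included):

```python
import time

TARGET_TOP5 = 0.93  # DAWNBench ImageNet threshold: 93% top-5 validation accuracy

def time_to_accuracy(model, train_loader, val_loader):
    # Wall-clock measurement starts before the first epoch.
    start = time.monotonic()
    epochs = 0
    while True:
        epochs += 1
        train_one_epoch(model, train_loader)     # hypothetical: one full training pass
        top5 = evaluate_top5(model, val_loader)  # hypothetical: top-5 validation accuracy
        if top5 >= TARGET_TOP5:
            # Elapsed wall-clock time is the reported metric.
            return epochs, time.monotonic() - start
```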

Training Cost

Objective: Total cost of public cloud instances to train an image classification model to a top-5 validation accuracy of 93% or greater on ImageNet.

Rank | Date | Cost (USD) | Model | Submitter | Hardware | Framework
1 | Jan 2018 | $358.22 | ResNet50 | DIUX | Amazon EC2 [p3.16xlarge] | TensorFlow 1.5, tensorpack 0.8.1
2 | Oct 2017 | $1112.64 | ResNet152 | Stanford DAWN | 8 K80 / 488 GB / 32 CPU (Amazon EC2 [p2.8xlarge]) | MXNet 0.11.0
3 | Oct 2017 | $2323.39 | ResNet152 | Stanford DAWN | 4 M60 / 488 GB / 64 CPU (Amazon EC2 [g3.16xlarge]) | TensorFlow v1.3
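
Training cost is essentially the training time multiplied by the instance's hourly rate. As a rough check on the first entry (the $24.48/hour on-demand rate for p3.16xlarge is an assumption about then-current AWS pricing, not part of the listing):

```python
hours = 14 + 37/60 + 59/3600   # 14:37:59, from the training-time table above
rate = 24.48                   # assumed p3.16xlarge on-demand $/hour
print(round(hours * rate, 2))  # -> 358.22, matching the listed cost
```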

Inference Latency

Objective: Latency required to classify one ImageNet image using a model with a top-5 validation accuracy of 93% or greater.

Rank | Date | 1-example Latency (ms) | Model | Submitter | Hardware | Framework
1 | Nov 2017 | 22.27 | ResNet 152 | Stanford DAWN | 1 P100 / 30 GB / 8 CPU (Google Compute) | TensorFlow v1.2
2 | Nov 2017 | 26.82 | ResNet 152 | Stanford DAWN | 1 P100 / 30 GB / 8 CPU (Google Compute) | MXNet 0.11.0
3 | Nov 2017 | 29.24 | ResNet 152 | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | MXNet 0.11.0
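
One-example latency is measured at batch size 1. A common methodology, and a reasonable reading of these numbers, is to run a short warmup and then average many single-image forward passes; a minimal sketch under that assumption (`classify` is a hypothetical predict call wrapping the deployed model):

```python
import time

def one_example_latency_ms(classify, images, warmup=10, trials=1000):
    for img in images[:warmup]:
        classify(img)                        # warm up caches, JIT, GPU kernels
    timed = images[warmup:warmup + trials]   # images actually measured
    start = time.monotonic()
    for img in timed:
        classify(img)                        # batch size 1: one image per call
    return (time.monotonic() - start) / len(timed) * 1000.0  # mean ms per image
```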

Inference Cost

Objective: Average cost on public cloud instances to classify 10,000 validation images from ImageNet using an image classification model with a top-5 validation accuracy of 93% or greater.

Rank | Date | Cost (USD) | Model | Submitter | Hardware | Framework
1 | Nov 2017 | $0.07 | ResNet 152 | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | MXNet 0.11.0
2 | Nov 2017 | $0.11 | ResNet 152 | Stanford DAWN | 1 P100 / 30 GB / 8 CPU (Google Compute) | TensorFlow v1.2
3 | Nov 2017 | $0.12 | ResNet 152 | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | TensorFlow v1.2
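
Inference cost follows from the measured latency and the instance's hourly rate: classifying 10,000 images one at a time occupies the instance for latency × 10,000 seconds. Checking the first entry against the latency table above (the $0.90/hour p2.xlarge on-demand rate is an assumption about then-current AWS pricing):

```python
latency_s = 29.24 / 1000           # ms -> s, K80/MXNet entry in the latency table
hours = 10_000 * latency_s / 3600  # ~0.081 instance-hours for 10,000 images
print(round(hours * 0.90, 2))      # assumed $0.90/hour -> 0.07, as listed
```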

Image Classification on CIFAR10

Training Time

Objective: Time taken to train an image classification model to a test accuracy of 94% or greater on CIFAR10.

Rank | Date | Time to 94% Accuracy | Model | Submitter | Hardware | Framework
1 | Jan 2018 | 1:07:55 | ResNet50 | DIUX | Amazon EC2 [p3.2xlarge] | TensorFlow 1.5, tensorpack 0.8.1
2 | Oct 2017 | 2:31:42 | ResNet 164 (without bottleneck) | Stanford DAWN | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | PyTorch v0.1.12
3 | Feb 2018 | 2:47:50 | ResNet 164 (without bottleneck) | Stanford DAWN | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.3

Training Cost

Objective: Total cost for public cloud instances to train an image classification model to a test accuracy of 94% or greater on CIFAR10.

Rank | Date | Cost (USD) | Model | Submitter | Hardware | Framework
1 | Jan 2018 | $3.46 | ResNet50 | DIUX | Amazon EC2 [p3.2xlarge] | TensorFlow 1.5, tensorpack 0.8.1
2 | Jan 2018 | $3.78 | ResNet50 | DIUX | Amazon EC2 [g3.4xlarge] | TensorFlow 1.5, tensorpack 0.8.1
3 | Oct 2017 | $8.35 | ResNet 164 (with bottleneck) | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12

Inference Latency

Objective: Latency required to classify one CIFAR10 image using a model with a test accuracy of 94% or greater.

Rank | Date | 1-example Latency (ms) | Model | Submitter | Hardware | Framework
1 | Oct 2017 | 9.7843 | ResNet 56 | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12
2 | Oct 2017 | 24.6291 | ResNet 164 (with bottleneck) | Stanford DAWN | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | PyTorch v0.1.12
3 | Oct 2017 | 24.92 | ResNet 164 (without bottleneck) | Stanford DAWN | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2

Inference Cost

Objective: Average cost on public cloud instances to classify 10,000 test images from CIFAR10 using an image classification model with a test accuracy of 94% or greater.

Rank | Date | Cost (USD) | Model | Submitter | Hardware | Framework
1 | Oct 2017 | $0.02 | ResNet 56 | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12
2 | Oct 2017 | $0.04 | ResNet 164 (without bottleneck) | Stanford DAWN | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2
3 | Oct 2017 | $0.05 | ResNet 164 (with bottleneck) | Stanford DAWN | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2

Question Answering on SQuAD

Training Time

Objective: Time taken to train a question answering model to an F1 score of 0.75 or greater on the SQuAD development dataset.

Rank | Date | Time to 0.75 F1 | Model | Submitter | Hardware | Framework
1 | Oct 2017 | 7:38:10 | BiDAF | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | TensorFlow v1.2
2 | Oct 2017 | 7:51:22 | BiDAF | Stanford DAWN | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.2
3 | Oct 2017 | 8:43:40 | BiDAF | Stanford DAWN | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2
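
The F1 metric here is SQuAD's token-overlap F1: per question, precision and recall are computed over the tokens shared between the predicted and reference answers, and their harmonic mean is averaged across the dataset. A minimal sketch in the style of the official SQuAD evaluation (the official script's answer normalization, such as lowercasing and punctuation stripping, is omitted here for brevity):

```python
from collections import Counter

def f1_score(prediction, ground_truth):
    """Token-overlap F1 between a predicted and a reference answer string."""
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```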

Training Cost

Objective: Total cost for public cloud instances to train a question answering model to an F1 score of 0.75 or greater on the SQuAD development dataset.

Rank | Date | Cost (USD) | Model | Submitter | Hardware | Framework
1 | Oct 2017 | $5.78 | BiDAF | Stanford DAWN | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2
2 | Oct 2017 | $6.87 | BiDAF | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | TensorFlow v1.2
3 | Oct 2017 | $8.44 | BiDAF | Stanford DAWN | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2

Inference Latency

Objective: Latency required to answer one SQuAD question using a model with an F1 score of at least 0.75 on the development dataset.

Rank | Date | 1-example Latency (ms) | Model | Submitter | Hardware | Framework
1 | Oct 2017 | 100.0 | BiDAF | Stanford DAWN | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2
2 | Oct 2017 | 590.0 | BiDAF | Stanford DAWN | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2
3 | Oct 2017 | 638.1 | BiDAF | Stanford DAWN | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.2

Inference Cost

Objective: Average cost on public cloud instances to answer 10,000 questions from the SQuAD development dataset using a question answering model with a development F1 score of 0.75 or greater.

Rank | Date | Cost (USD) | Model | Submitter | Hardware | Framework
1 | Oct 2017 | $0.15 | BiDAF | Stanford DAWN | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2
2 | Oct 2017 | $1.58 | BiDAF | Stanford DAWN | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2
3 | Oct 2017 | $1.76 | BiDAF | Stanford DAWN | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | TensorFlow v1.2

Join Us

DAWNBench is part of a larger community conversation about the future of machine learning infrastructure. Sound off on the DAWNBench Google Group.