DAWNBench

An End-to-End Deep Learning Benchmark and Competition

CIFAR10 Training

Time to train an image classification model to a test accuracy of 94% or greater on CIFAR10, together with the cost of the run where a public-cloud list price applies.

| Submission Date | Model | Team | Source | Time to 94% Accuracy | Cost (USD) | Max Accuracy | Hardware | Framework |
|---|---|---|---|---|---|---|---|---|
| Apr 2018 | Custom Wide ResNet | fast.ai + students team: Jeremy Howard, Andrew Shaw, Brett Koonce, Sylvain Gugger | source | 0:02:54 | $1.18 | 94.39% | 8 V100 (Amazon EC2 [p3.16xlarge]) | fastai / PyTorch |
| Apr 2018 | ResNet18 + minor modifications | bkj | source | 0:05:41 | $0.29 | 94.34% | 1 V100 (Amazon EC2 [p3.2xlarge]) | PyTorch v0.3.1.post2 |
| Apr 2018 | Custom Wide ResNet | fast.ai + students team: Jeremy Howard, Andrew Shaw, Brett Koonce, Sylvain Gugger | source | 0:06:45 | $0.26 | 94.20% | 1 V100 (Paperspace Volta) | fastai / PyTorch |
| Apr 2018 | KervResNet34 | Chen Wang | source | 0:35:37 | N/A | 95.29% | 1 NVIDIA GeForce GTX 1080 Ti | PyTorch v0.3.1 |
| Jan 2018 | ResNet50 | DIUx | source | 1:07:55 | $3.46 | 94.60% | 1 V100 (Amazon EC2 [p3.2xlarge]) | TensorFlow v1.5, tensorpack 0.8.1 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | source | 2:31:42 | N/A | 94.46% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | PyTorch v0.1.12 |
| Feb 2018 | ResNet 164 (without bottleneck) | Stanford DAWN | source | 2:47:50 | N/A | 94.18% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.3 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | source | 3:01:52 | N/A | 94.82% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | PyTorch v0.1.12 |
| Jan 2018 | ResNet50 | DIUx | source | 3:18:50 | $3.78 | 94.51% | Amazon EC2 [g3.4xlarge] | TensorFlow v1.5, tensorpack 0.8.1 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | source | 3:20:27 | N/A | 94.19% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | source | 3:29:30 | N/A | 94.46% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | source | 9:03:29 | $8.76 | 94.91% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | source | 9:16:32 | $8.35 | 94.61% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | source | 9:37:52 | $9.31 | 94.37% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | source | 10:02:45 | $9.71 | 94.31% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | source | 10:32:14 | $9.48 | 94.32% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | source | 10:53:08 | $9.80 | 94.58% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | source | 11:00:22 | $10.64 | 94.45% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | source | 3 days, 7:26:54 | $42.35 | 94.37% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | source | 3 days, 22:09:47 | $50.19 | 94.04% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | source | 4 days, 1:10:39 | $51.80 | 94.79% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | source | 4 days, 6:48:08 | $54.79 | 94.58% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2 |
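
Several entries distinguish ResNet 164 "with bottleneck" from "without bottleneck". The difference is the residual block: a basic block stacks two 3x3 convolutions at full width, while a bottleneck block reduces the channel count with a 1x1 convolution, applies a 3x3 convolution at the reduced width, and expands back with another 1x1 convolution, which costs fewer FLOPs per block at the same depth. A minimal PyTorch sketch of the two block shapes, assuming illustrative channel widths and a post-activation ordering (the submissions' own code may use pre-activation blocks and different widths):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual block without bottleneck: two 3x3 convolutions at full width."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)  # identity shortcut

class BottleneckBlock(nn.Module):
    """Residual block with bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = channels // reduction  # reduced width; reduction=4 is the common choice
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)  # identity shortcut

x = torch.randn(2, 64, 32, 32)
print(BasicBlock(64)(x).shape, BottleneckBlock(64)(x).shape)  # both [2, 64, 32, 32]
```

Both variants preserve the input shape, so they can be stacked interchangeably to reach the same depth.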
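The Cost column is the product of wall-clock training time and the instance's hourly price; runs on the DAWN internal cluster have no public list price, hence N/A. A minimal sketch of that arithmetic in Python; the hourly rates below are assumptions about the on-demand prices at submission time, not figures stated in the table:

```python
from datetime import timedelta

def run_cost(duration: timedelta, hourly_rate_usd: float) -> float:
    """Wall-clock training time billed at an hourly on-demand rate."""
    return duration.total_seconds() / 3600 * hourly_rate_usd

# Assumed on-demand rates in USD/hour (not stated in the table).
P3_2XLARGE = 3.06    # 1x V100
P3_16XLARGE = 24.48  # 8x V100

# DIUx ResNet50, 1:07:55 on a p3.2xlarge -> 3.46, matching the table.
print(round(run_cost(timedelta(hours=1, minutes=7, seconds=55), P3_2XLARGE), 2))

# fast.ai Custom Wide ResNet, 0:02:54 on a p3.16xlarge -> 1.18, matching the table.
print(round(run_cost(timedelta(minutes=2, seconds=54), P3_16XLARGE), 2))
```
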
Disclosure: The Stanford DAWN research project is a five-year industrial affiliates program at Stanford University and is financially supported in part by founding members including Intel, Microsoft, NEC, Teradata, VMware, and Google. For more information, including information regarding Stanford's policies on openness in research and policies affecting industrial affiliates program membership, please see DAWN's membership page.