DAWNBench

An End-to-End Deep Learning Benchmark and Competition

CIFAR10 Training
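Each entry below reports the wall-clock time for a submission to first reach 94% test accuracy on CIFAR10, along with the cost of that run on the submitter's hardware. As a rough illustration of the timing protocol, here is a minimal train-to-threshold sketch; train_one_epoch() and evaluate() are hypothetical callbacks standing in for a submission's actual training and evaluation code, not DAWNBench's harness:

```python
import time

TARGET = 0.94  # DAWNBench's CIFAR10 accuracy threshold

def time_to_accuracy(train_one_epoch, evaluate, max_epochs=200):
    """Wall-clock time until test accuracy first reaches TARGET.

    train_one_epoch() and evaluate() are hypothetical callbacks
    supplied by the submission being timed.
    """
    start = time.perf_counter()
    for _ in range(max_epochs):
        train_one_epoch()
        if evaluate() >= TARGET:
            return time.perf_counter() - start
    raise RuntimeError(f"did not reach {TARGET:.0%} within {max_epochs} epochs")
```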

| Submission Date | Model | Submitter | Time to 94% Accuracy | Cost (USD) | Max Accuracy | Hardware | Framework |
|---|---|---|---|---|---|---|---|
| Nov 2018 | Custom ResNet 9 | David Page, myrtle.ai | 0:01:15 | $0.06 | 94.08% | V100 (AWS p3.2xlarge) | PyTorch 0.4.0 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | 4 days, 6:48:08 | $54.79 | 94.58% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | 11:00:22 | $10.64 | 94.45% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | 3:29:30 | N/A | 94.46% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.2 |
| Apr 2019 | Custom ResNet 9 | Ajay Uppili Arasanipalai | 0:01:14 | N/A | 94.06% | IBM AC922 + Nvidia Tesla V100 (Nimbix np9g1) | PowerAI 1.6.0 + PyTorch 1.0.1 |
| May 2019 | BaiduNet9P | Baidu USA GAIT LEOPARD team: Baopu Li, Zhiyu Cheng, Yingze Bao | 0:00:45 | $0.11 | 94.18% | Baidu Cloud Tesla 8*V100-16GB / 448 GB / 96 CPU | PyTorch v1.0.1 and PaddlePaddle |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | 4 days, 1:10:39 | $51.80 | 94.79% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | 2:31:42 | N/A | 94.46% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | PyTorch v0.1.12 |
| Apr 2018 | Custom Wide ResNet | fast.ai + students team: Jeremy Howard, Andrew Shaw, Brett Koonce, Sylvain Gugger | 0:06:45 | $0.26 | 94.20% | Paperspace Volta (V100) | fastai / PyTorch |
| Jan 2020 | Custom ResNet 9 | Ajay Uppili Arasanipalai | 0:00:11 | N/A | 94.05% | IBM AC922 + 4 * Nvidia Tesla V100 (NCSA HAL) | PyTorch 1.1.0 |
| Feb 2018 | ResNet 164 (without bottleneck) | Stanford DAWN | 2:47:50 | N/A | 94.18% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.3 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | 10:32:14 | $9.48 | 94.32% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12 |
| Apr 2018 | KervResNet34 | Chen Wang | 0:35:37 | N/A | 95.29% | 1 GPU (Nvidia GeForce GTX 1080 Ti) | PyTorch 0.3.1 |
| Jan 2018 | ResNet50 | DIUX | 1:07:55 | $3.46 | 94.60% | AWS p3.2xlarge | TensorFlow 1.5, tensorpack 0.8.1 |
| Apr 2018 | ResNet18 + minor modifications | bkj | 0:05:41 | $0.29 | 94.34% | V100 (AWS p3.2xlarge) | PyTorch 0.3.1.post2 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | 10:53:08 | $9.80 | 94.58% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | 9:03:29 | $8.76 | 94.91% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | PyTorch v0.1.12 |
| Jan 2018 | ResNet50 | DIUX | 3:18:50 | $3.78 | 94.51% | AWS g3.4xlarge | TensorFlow 1.5, tensorpack 0.8.1 |
| Aug 2019 | BaiduNet9 | Chuan Li | 0:01:42 | $0.04 | 94.02% | Lambda GPU Cloud - 4x GTX 1080 Ti | fastai / PyTorch 1.0.0 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | 3:01:52 | N/A | 94.82% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | 3 days, 7:26:54 | $42.35 | 94.37% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (with bottleneck) | Stanford DAWN | 9:16:32 | $8.35 | 94.61% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12 |
| May 2019 | BaiduNet9 | Baidu USA GAIT LEOPARD team: Baopu Li, Zhiyu Cheng, Yingze Bao | 0:01:12 | $0.02 | 94.10% | Baidu Cloud Tesla V100*1-16GB / 56 GB / 12 CPU | PyTorch v1.0.1 and PaddlePaddle |
| Oct 2019 | Kakao Brain Custom ResNet9 | clint@KakaoBrain | 0:00:28 | N/A | 94.04% | Tesla V100 * 4 GPU / 488 GB / 56 CPU (Kakao Brain BrainCloud) | PyTorch 1.1.0 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | 3 days, 22:09:47 | $50.19 | 94.04% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2 |
| Oct 2019 | Kakao Brain Custom ResNet9 | clint@KakaoBrain | 0:00:58 | N/A | 94.20% | Tesla V100 * 1 GPU / 488 GB / 56 CPU (Kakao Brain BrainCloud) | PyTorch 1.1.0 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | 9:37:52 | $9.31 | 94.37% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | PyTorch v0.1.12 |
| Apr 2018 | Custom Wide ResNet | fast.ai + students team: Jeremy Howard, Andrew Shaw, Brett Koonce, Sylvain Gugger | 0:02:54 | $1.18 | 94.39% | 8 * V100 (AWS p3.16xlarge) | fastai / PyTorch |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | 3:20:27 | N/A | 94.19% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (without bottleneck) | Stanford DAWN | 10:02:45 | $9.71 | 94.31% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2 |
| Dec 2019 | Custom ResNet 9 | Santiago Akle Serrano, Hadi Pour Ansari, Vipul Gupta, Dennis DeCoste | 0:00:10 | N/A | 94.23% | Tesla V100 * 8 GPU / 32 GB / 40 CPU | PyTorch 1.1.0 |
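The Cost column is simply wall-clock training time multiplied by the instance's on-demand hourly rate. A minimal sketch of that arithmetic follows; the $3.06/hr figure for AWS's p3.2xlarge is an assumed historical on-demand price, not a number from the table:

```python
def training_cost(time_str: str, hourly_rate_usd: float) -> float:
    """Cost = wall-clock training time (H:MM:SS) x on-demand hourly rate."""
    h, m, s = (int(p) for p in time_str.split(":"))
    return (h + m / 60 + s / 3600) * hourly_rate_usd

# Example: DIUX's ResNet50 run above (1:07:55 on a p3.2xlarge);
# $3.06/hr is an assumed historical AWS on-demand rate.
print(f"${training_cost('1:07:55', 3.06):.2f}")  # -> $3.46, matching the table
```

The same arithmetic reproduces other rows, e.g. the fast.ai p3.16xlarge entry: 0:02:54 at an assumed $24.48/hr gives roughly $1.18.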
Disclosure: The Stanford DAWN research project is a five-year industrial affiliates program at Stanford University, financially supported in part by founding members including Intel, Microsoft, NEC, Teradata, VMware, and Google. For more information, including Stanford's policies on openness in research and policies affecting industrial affiliates program membership, please see DAWN's membership page.