| Date | Model and submitter | Latency (ms/image) | Cost (USD/10k images) | Accuracy | Hardware | Framework |
| --- | --- | --- | --- | --- | --- | --- |
| Nov 2019 | ResNet8, ModelArts Service of Huawei Cloud (source) | 0.1345 | N/A | 94.20% | Huawei Cloud [pi2.2xlarge.4] | ModelArts-AIBOX + TensorRT |
| Apr 2019 | BaiduNet8 using PyTorch JIT in C++, Baidu USA GAIT LEOPARD team: Baopu Li, Zhiyu Cheng, Jiazhuo Wang, Haofeng Kou, Yingze Bao (source) | 0.6830 | $0.00 | 94.32% | 1 V100 / 60 GB / 12 CPU (Baidu Cloud) | PyTorch v1.0.1 and PaddlePaddle |
| Nov 2018 | Custom ResNet 9 using PyTorch JIT in C++, Laurent Mazare (source) | 0.8280 | N/A | 94.53% | 1 P100 / 128 GB / 16 CPU | PyTorch v1.0.0.dev20181116 |
| Oct 2019 | Kakao Brain Custom ResNet9 using PyTorch JIT in Python, clint@KakaoBrain (source) | 0.8570 | N/A | 94.23% | 1 V100 / 488 GB / 56 CPU (Kakao Brain BrainCloud) | PyTorch 1.1.0 |
| Oct 2017 | ResNet 56, Stanford DAWN (source) | 9.7843 | $0.02 | 94.09% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12 |
| Mar 2019 | ResNet 164 (without bottleneck), Ryan (source) | 23.3871 | N/A | 94.10% | 1 P100 / 384 GB / 48 CPU (x86_64 machine) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck), Stanford DAWN (source) | 24.6291 | N/A | 94.97% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck), Stanford DAWN (source) | 24.9200 | $0.04 | 94.04% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (without bottleneck), Stanford DAWN (source) | 25.2188 | N/A | 94.46% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (with bottleneck), Stanford DAWN (source) | 28.1000 | N/A | 94.46% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (without bottleneck), Stanford DAWN (source) | 28.3201 | $0.07 | 94.49% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (with bottleneck), Stanford DAWN (source) | 28.6880 | $0.07 | 94.97% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (with bottleneck), Stanford DAWN (source) | 31.1300 | $0.05 | 94.58% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck), Stanford DAWN (source) | 31.3490 | $0.08 | 94.94% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck), Stanford DAWN (source) | 31.7121 | $0.09 | 94.39% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck), Stanford DAWN (source) | 35.4519 | N/A | 94.19% | 1 P100 / 512 GB / 56 CPU (DAWN Internal Cluster) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck), Stanford DAWN (source) | 38.5826 | $0.10 | 94.58% | 1 K80 / 61 GB / 4 CPU (Amazon EC2 [p2.xlarge]) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck), Stanford DAWN (source) | 44.1859 | $0.12 | 94.45% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (without bottleneck), Stanford DAWN (source) | 58.9259 | $0.16 | 94.31% | 1 K80 / 30 GB / 8 CPU (Google Cloud) | TensorFlow v1.2 |
| Oct 2017 | ResNet 164 (with bottleneck), Stanford DAWN (source) | 75.3522 | $0.11 | 95.01% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | PyTorch v0.1.12 |
| Oct 2017 | ResNet 164 (without bottleneck), Stanford DAWN (source) | 85.8511 | $0.13 | 94.48% | 60 GB / 16 CPU (Google Cloud [n1-standard-16]) | PyTorch v0.1.12 |
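The cost column appears to follow directly from the latency column and the instance's hourly price: per-example latency times 10,000 images, converted to hours, times the hourly rate. A minimal sketch of that arithmetic, assuming an on-demand p2.xlarge rate of about $0.90/hr (the rate itself is an assumption, not stated in the table):

```python
def cost_per_10k(latency_ms: float, hourly_rate_usd: float) -> float:
    """Estimated cost of running 10,000 single-image inferences.

    latency_ms: per-example inference latency in milliseconds.
    hourly_rate_usd: assumed on-demand instance price per hour.
    """
    total_seconds = latency_ms * 10_000 / 1_000   # 10k examples, ms -> s
    return total_seconds / 3_600 * hourly_rate_usd

# ResNet 56 on an EC2 p2.xlarge at an assumed ~$0.90/hr:
# 9.7843 ms/image -> roughly $0.02 per 10,000 images,
# matching the table's entry for that row.
estimate = cost_per_10k(9.7843, 0.90)
```

The same formula reproduces the other K80 rows to the cent (e.g. 28.3201 ms gives about $0.07), which suggests the listed costs are derived this way rather than measured separately.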