Nvidia DGX - AshokBhat/notes GitHub Wiki
About
- Line of servers produced by NVIDIA
- GPGPU systems for machine learning
Configuration
| | DGX-2 | DGX-1 (gen2) | DGX-1 (gen1) |
|---|---|---|---|
| GPU | 16x Tesla V100 | 8x Tesla V100 | 8x Tesla P100 |
| Performance | 2 petaFLOPS | 1 petaFLOPS | 170 teraFLOPS |
| CUDA Cores | 81920 | 40960 | 28672 |
| Tensor Cores | 10240 | 5120 | - |
| Maximum Power Usage | 10 kW | 3.5 kW | 3.2 kW |
| CPU | Dual [[Intel]] Xeon Platinum 8168, 2.7 GHz, 24-core | Dual Intel Xeon E5-2698 v4, 2.2 GHz, 20-core | Dual Intel Xeon E5-2698 v4, 2.2 GHz, 20-core |
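A quick sanity check on the core counts in the table above. The per-GPU figures are my own arithmetic, not from the table: dividing total cores by GPU count should recover the per-card specs of a single Tesla V100 (5120 CUDA cores, 640 tensor cores) or P100 (3584 CUDA cores).

```python
# Per-GPU breakdown of the DGX configuration table.
# Totals are from the table; GPU counts from the GPU row.
systems = {
    "DGX-2":        {"gpus": 16, "cuda_cores": 81920, "tensor_cores": 10240},
    "DGX-1 (gen2)": {"gpus": 8,  "cuda_cores": 40960, "tensor_cores": 5120},
    "DGX-1 (gen1)": {"gpus": 8,  "cuda_cores": 28672, "tensor_cores": 0},
}

for name, s in systems.items():
    cuda_per_gpu = s["cuda_cores"] // s["gpus"]
    tensor_per_gpu = s["tensor_cores"] // s["gpus"]
    print(f"{name}: {cuda_per_gpu} CUDA cores, {tensor_per_gpu} tensor cores per GPU")
# DGX-2 and DGX-1 (gen2) both come out to 5120 CUDA cores per GPU (V100);
# DGX-1 (gen1) comes out to 3584 (P100).
```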
See also
- [[Nvidia Jetson]] | [[Nvidia Tesla]]
- [[CUDA]] | [[OpenCL]]