# NVIDIA GPU
## About

- A100
- H100

## L4 vs L40S

| Feature | NVIDIA L4 | NVIDIA L40S |
|---|---|---|
| Architecture | Ada Lovelace | Ada Lovelace |
| GPU Memory | 24 GB GDDR6 | 48 GB GDDR6 |
| CUDA Cores | 7,424 | 18,176 |
| Tensor Cores | 232 | 568 |
| RT Cores | 58 | 142 |
| FP32 Performance | ~30 TFLOPS | ~91.6 TFLOPS |
| FP16/BF16 Tensor Core (dense) | ~121 TFLOPS | ~362 TFLOPS |
| INT8 Tensor Core (with sparsity) | ~485 TOPS | ~1,466 TOPS |
| Max Power Consumption | 72 W | 350 W |
| Form Factor | Low-profile, single-slot | Dual-slot |
| Target Use Cases | AI inference, video, graphics, VDI | AI training & inference, graphics, rendering |
| Availability on GCP | Yes (G2 VMs) | No |
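
Because both cards share the Ada Lovelace architecture (128 FP32 CUDA cores per SM), they can be told apart at runtime from the SM count alone. Below is a minimal sketch, assuming the CUDA toolkit is installed, that enumerates visible devices and estimates the CUDA-core count; the 128-cores-per-SM constant is applied only to compute capability 8.9 (Ada), and other architectures are deliberately left unhandled.

```cuda
// Minimal sketch (assumes the CUDA toolkit is installed): enumerate visible GPUs and
// estimate the CUDA-core count from the SM count, using 128 FP32 cores per SM for Ada.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Ada Lovelace is compute capability 8.9 with 128 FP32 CUDA cores per SM,
        // so L4 = 58 SMs x 128 = 7,424 and L40S = 142 SMs x 128 = 18,176.
        int coresPerSM = (prop.major == 8 && prop.minor == 9) ? 128 : 0;  // 0 = arch not handled here
        printf("Device %d: %s (compute capability %d.%d)\n", i, prop.name, prop.major, prop.minor);
        printf("  Memory     : %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("  SM count   : %d\n", prop.multiProcessorCount);
        if (coresPerSM != 0)
            printf("  CUDA cores : %d (estimated)\n", prop.multiProcessorCount * coresPerSM);
    }
    return 0;
}
```

Compiled with `nvcc`, this should report 58 SMs / 7,424 cores on an L4 and 142 SMs / 18,176 cores on an L40S, matching the table above.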