# AWS Elastic Inference

- Attach low-cost, GPU-powered inference acceleration to Amazon EC2 and Amazon SageMaker instances (see the deployment sketch below).
- Pay only for the accelerator hours you use.
- No upfront costs or minimum fees.
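Accelerators are attached when an instance or endpoint is launched rather than provisioned on their own. As a minimal sketch, assuming a TensorFlow model artifact already uploaded to S3 and an existing SageMaker execution role (the bucket path and role ARN below are placeholders), the SageMaker Python SDK lets you request an accelerator via the `accelerator_type` parameter of `deploy()`:

```python
from sagemaker.tensorflow import TensorFlowModel

# Placeholder model artifact and IAM role - substitute your own.
model = TensorFlowModel(
    model_data="s3://my-bucket/model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    framework_version="1.15.2",
)

# Attach an eia2.medium accelerator to a CPU endpoint instance;
# the accelerator is billed only while the endpoint is running.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    accelerator_type="ml.eia2.medium",
)
```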
## Options
- Elastic Inference Accelerators come in two families (EIA1 and EIA2), with three sizes in each.
| Type        | FP32 (TFLOPS) | FP16 (TFLOPS) | Memory |
|-------------|---------------|---------------|--------|
| eia2.medium | 1             | 8             | 2 GB   |
| eia2.large  | 2             | 16            | 4 GB   |
| eia2.xlarge | 4             | 32            | 8 GB   |
| eia1.medium | 1             | 8             | 1 GB   |
| eia1.large  | 2             | 16            | 2 GB   |
| eia1.xlarge | 4             | 32            | 4 GB   |
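For EC2, an accelerator type from the table above can be requested at launch time. A minimal boto3 sketch, assuming an Elastic Inference enabled Deep Learning AMI and a region of your choice (the AMI ID below is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a CPU instance with an eia2.medium accelerator attached.
# The AMI ID is a placeholder; use an Elastic Inference enabled
# Deep Learning AMI available in your own account and region.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    ElasticInferenceAccelerators=[
        {"Type": "eia2.medium", "Count": 1}
    ],
)

print(response["Instances"][0]["InstanceId"])
```

Note that in practice the instance also needs network access to the Elastic Inference service (an AWS PrivateLink VPC endpoint) and suitable IAM permissions; the call above only covers attaching the accelerator.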
## See also
- [AWS SageMaker](/AshokBhat/ml/wiki/AWS-SageMaker)
- [AWS Lambda](/AshokBhat/ml/wiki/AWS-Lambda)
- [AWS EC2](/AshokBhat/ml/wiki/AWS-EC2)
- [AWS Graviton](/AshokBhat/ml/wiki/AWS-Graviton)
- [AWS Inferentia](/AshokBhat/ml/wiki/AWS-Inferentia)