CM3Leon - AshokBhat/ml GitHub Wiki
About
- Pronounced like “chameleon”
- Foundation model for text-to-image and image-to-text generation.
- Decoder-only transformer architecture
- Trained on a licensed dataset
- State-of-the-art performance for text-to-image generation
- Released by Meta in July 2023
- Parameter sizes: 350M, 700M, and 7B
- Source code not available.
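The decoder-only design mentioned above centers on causal (masked) self-attention, where each token may attend only to itself and earlier tokens. A minimal NumPy sketch of that mechanism for illustration; this is not CM3Leon's actual implementation (weights, dimensions, and function names here are hypothetical):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    # x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head) projection matrices
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Causal mask: block attention to future positions (upper triangle)
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9
    return softmax(scores) @ v
```

Because of the mask, the first position can only attend to itself, so its output reduces to its own value projection.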
Benefits
- Trained with roughly 5x less compute than comparable transformer-based methods.