GPT - AshokBhat/ml GitHub Wiki

## About

## Models

| Model | Year of Release | Parameters | Key Features |
|---|---|---|---|
| GPT-5 | 2025 | Hybrid (multi-model) | Dynamic model routing, native multimodal, state-of-the-art coding and reasoning, "thinking" modes, enterprise APIs |
| GPT-4.1 | 2025 | Not specified | Outperforms GPT-4; faster, more efficient variants (mini, nano) |
| GPT-4o | 2024 | Not specified | Unified text, image, and audio input/output; faster and cheaper; improved multimodal capability |
| GPT-4 | 2023 | ~1.76T (MoE, est.) | Multimodal (text + image), strong logical reasoning, larger context window (up to 128k tokens), safety improvements |
| GPT-3.5 | 2022 | ~175B (improved) | Optimized for conversation, better instruction following, more robust dialogue |
| GPT-3 | 2020 | 175B | Few-shot/zero-shot learning, strong language generation, massive scale, improved fluency |
| GPT-2 | 2019 | 1.5B | Larger model, improved generation quality and coherence, trained on a bigger dataset |
| GPT-1 | 2018 | 117M | First Transformer-based GPT model; general-purpose language modeling; moderate generation quality |
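The "few-shot learning" listed for GPT-3 means the task is demonstrated entirely in the prompt: a few input/output examples are included as plain text, and the model is expected to continue the pattern without any fine-tuning. A minimal sketch of what such a prompt looks like (the translation pairs here are illustrative examples, not output from any model):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: a task description, a few
    demonstration pairs, and an unfinished line for the model to complete."""
    demos = "\n".join(f"{src} => {dst}" for src, dst in examples)
    return f"{task}\n\n{demos}\n{query} =>"

prompt = build_few_shot_prompt(
    task="Translate English to French.",
    examples=[("sea otter", "loutre de mer"), ("cheese", "fromage")],
    query="house",
)
print(prompt)
```

The completed pairs establish the pattern; the trailing `house =>` is where the model's completion would continue.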

## See also