LLM incompatibility with our project


❌ Reasons for incompatibility

  • Large-Scale Models: Models like GPT-4 or PaLM with billions of parameters may require significant computational resources, including high-end GPUs or cloud-based infrastructure, making them unsuitable for running on a standard laptop.

  • Cloud-Only Models: LLMs that are available only as cloud services (e.g., OpenAI's GPT-3/4 via API) and offer no on-premise deployment option would not meet the requirement for a self-contained, on-premise solution; the sketch after this list illustrates the practical difference.

  • Proprietary Models with Restrictive Licenses: Models whose licenses prevent modification, redistribution, or commercial use would not align with the project's need for flexibility and integration into existing environments.
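
To make the cloud-only criterion concrete, here is a minimal sketch (assuming a local model served through Ollama's default HTTP endpoint, e.g. inside a Docker container; the model name "mistral" and the helper functions are illustrative placeholders, not project decisions). The local call never leaves the laptop, while the cloud call requires an external endpoint and an API key, which is exactly what rules out the models listed in the next section.

```python
import requests

# On-premise path: a model served locally, e.g. by Ollama running in a Docker
# container. No external service, account, or API key is involved.
# "mistral" is only a placeholder model name.
def ask_local_model(prompt: str) -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

# Cloud-only path: models such as GPT-4 are reachable only through a hosted API.
# Every request leaves the machine and needs an API key, so this path cannot
# satisfy a self-contained, on-premise requirement.
def ask_cloud_model(prompt: str, api_key: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "gpt-4", "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Works fully offline once the local model has been pulled.
    print(ask_local_model("Write a unit test for a function that adds two integers."))
```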


🤖 LLMs not to use:

  • GPT-3 and GPT-4 (OpenAI): Reason: These models are available only through hosted APIs and require significant computational resources, making them unsuitable for on-premise deployment on standard laptops.

  • PaLM and Gemini (Google): Reason: Like GPT-3/4, PaLM and Gemini are designed for large-scale cloud deployment and are not available for local use or Docker deployment.

  • Claude (Anthropic): Reason: Primarily available as a cloud service, which does not align with the requirement for on-premise deployment.