LLM Assistance Methodology
Used properly, LLMs can help a developer work much faster and accomplish things they could not achieve on their own.
This page documents how I leveraged Large Language Models (LLMs) in my hardware design process, with a particular focus on "vibe coding" and SW/HW co-design paradigms.
Vibe Coding in Hardware Design
What is Vibe Coding?
Vibe coding represents a shift in how we approach implementation. Rather than writing every line by hand, the designer communicates high-level concepts, design intentions, and functional "vibes" to an AI system, which then translates those intentions into an implementation. In this project, LLMs were used extensively to implement hardware descriptions in SystemVerilog and MyHDL.
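As a concrete illustration of that workflow, below is a minimal sketch of the kind of MyHDL module an LLM might return for a high-level prompt such as "an 8-bit registered adder with enable". This is an assumed example, not code from this portfolio; the module name, port names, and widths are illustrative.

```python
# Minimal sketch (assumed example): MyHDL output an LLM might produce for
# the prompt "an 8-bit registered adder with enable". Names and widths are
# illustrative, not taken from this portfolio's code.
from myhdl import block, always_seq, Signal, ResetSignal, intbv

@block
def registered_adder(clk, rst, en, a, b, sum_out):
    """Register a + b on each rising clock edge while en is high."""
    @always_seq(clk.posedge, reset=rst)
    def logic():
        if en:
            sum_out.next = (a + b) & 0xFF  # wrap to 8 bits

    return logic

# Elaborate the block and convert it to Verilog for a synthesis flow
clk = Signal(bool(0))
rst = ResetSignal(0, active=1, isasync=False)
en = Signal(bool(0))
a, b, sum_out = [Signal(intbv(0)[8:]) for _ in range(3)]
registered_adder(clk, rst, en, a, b, sum_out).convert(hdl='Verilog')
```

The human's role in this loop is to review the generated module, check it against the intended interface and timing behavior, and feed corrections back to the model.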
Industry Adoption
Major hardware and software companies are exploring vibe coding methodologies to accelerate development cycles. While the approach is new and shows mixed results, it is unlikely to disappear. Engineers will not only need to adopt these strategies, but also develop AI/ML literacy to leverage these technologies effectively.
Portfolio Implementation
In this portfolio, I've employed vibe coding by:
- Committing to using LLMs at every step of development, while staying aware of the drawbacks of this approach
- Applying prompt engineering tactics (see the sketch after this list)
- Exploring and understanding the AI/ML hardware design space before prompting LLMs for an implementation
- Creating thoughtful plans and orchestration while letting LLMs handle the implementation
- Iteratively refining designs through conversational feedback loops (human-in-the-loop design)
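As an example of the prompt engineering tactic above, here is a hedged sketch of a structured prompt template. The field names and wording are illustrative assumptions, not the verbatim prompts used in this portfolio.

```python
# Illustrative prompt template (assumed, not the portfolio's verbatim prompts).
# The idea is to give the LLM explicit role, context, constraints, and deliverable.
HDL_PROMPT_TEMPLATE = """\
Role: You are assisting with {language} hardware design.

Context:
- Target block: {block_name}
- Interfaces: {interfaces}

Constraints:
- Synthesizable code only; no testbench constructs in the module body
- Follow the existing naming conventions in this repository

Task:
{task_description}

Deliverable: the module code plus a short summary of design decisions.
"""

prompt = HDL_PROMPT_TEMPLATE.format(
    language="SystemVerilog",
    block_name="mac_unit",
    interfaces="clk, rst_n, in_a[7:0], in_b[7:0], acc_out[17:0]",
    task_description="Implement a multiply-accumulate unit with a synchronous clear.",
)
print(prompt)
```

Supplying interface and constraint details up front tends to reduce the number of correction rounds needed later in the conversation.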
SW/HW Co-Design with LLMs
Modern Co-Design Approaches
Traditional hardware/software boundaries are increasingly blurred in modern systems, and many experts believe the largest performance and energy gains will come from highly specialized hardware for specific tasks. As those boundaries continue to erode, more and more software algorithms will likely be implemented directly in hardware.
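To make "software algorithms implemented in hardware" concrete, the sketch below (an assumed illustration, not part of this portfolio) pairs a software multiply-accumulate loop with a MyHDL module that performs one multiply-accumulate per clock cycle.

```python
# Assumed illustration: a software multiply-accumulate loop and a hardware
# counterpart that performs one multiply-accumulate per clock cycle.
from myhdl import block, always_seq, Signal, ResetSignal, intbv

def dot_product(xs, ws):
    """Software reference: accumulate x * w over two equal-length vectors."""
    acc = 0
    for x, w in zip(xs, ws):
        acc += x * w
    return acc

@block
def mac(clk, rst, x, w, acc):
    """Hardware counterpart: acc <= acc + x * w on every clock edge."""
    mask = 2 ** len(acc) - 1  # accumulator bit width, taken from the signal

    @always_seq(clk.posedge, reset=rst)
    def logic():
        acc.next = (acc + x * w) & mask  # wrap instead of overflowing

    return logic
```

In a co-design flow, the software version serves as the golden reference against which the hardware version is verified.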
Where do LLMs come in?
LLMs can generate code and act as agents that help drive a project forward. I used Claude.ai during all aspects of the development of this portfolio.
Results and Impact
While working on this project and utilizing LLMs, I've come to the following conclusions:
- LLMs increase development speed during early design stages
- As complexity increases, LLMs tend to struggle and generate buggier code
- Prompt engineering greatly increases the chances of getting a usable response