# 03) How to set up and effectively prompt Cursor's AI Agents
## How to set up your Cursor Agent
Setting up a Cursor Agent is pretty quick and simple! I would suggest the following settings for an Agent in Cursor in an enterprise environment:
- You default them to Ask mode so they can only make code updates when you explicitly approve them. You can change this setting in File -> Preferences -> Cursor Settings; once you are there, change the "Default Mode" setting from "Agent" to "Ask".
- You set their model to "claude-3.5-sonnet". It's the best output for the lowest cost by a mile. You can change this by clicking "Auto" next to "Agent" in your agent chat window, toggling off "Auto", and selecting "claude-3.5-sonnet".
- You turn off auto-run and auto-fix-errors on your agent. This prevents the agent from running wild on your codebase without your explicit permission. You can change this setting in File -> Preferences -> Cursor Settings; once you're there, toggle "Auto-Run Mode" off.
## My suggested Large Language Model (LLM) to use
I have tested almost every model out there (at great expense to my wallet), and the best bang for your buck is absolutely, without a doubt, Claude 3.5 Sonnet. While Claude Opus 4 generates better output, it's not worth the cost yet in my opinion. Additionally, the GPT and Gemini models are just not ready for prime time yet when it comes to writing enterprise-grade code. You can check out more info on [Claude 3.5 Sonnet here](https://www.anthropic.com/news/claude-3-5-sonnet), and you can set up your Cursor agents to use Claude using the instructions in the section above this one!
## Setting up project and user rules
In Cursor you have two options for setting up rules you expect your Cursor agents to follow when outputting their code: Project Rules and User Rules. You want these rules to be as descriptive and useful as possible without being overly verbose. The more words you use, the more tokens you use per request, and the faster you hit rate limits.
User rules persist across every Cursor project, so unless you know you will always be generating code for a specific language in all of your Cursor projects, user rules should be general, application-architecture-focused rules. They should be things like "Make sure to use the SOLID design principles when writing code" or "Make sure to write a corresponding test for all classes or modules you create". That way the rules remain relevant regardless of the code you are writing.
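For illustration, here's a sketch of what a small, language-agnostic set of user rules could look like. The wording is my own hypothetical example, not a rules file from this repo:

```markdown
<!-- Hypothetical user rules: generic guidance that applies to any project -->
- Follow the SOLID design principles in all code you write.
- Write a corresponding unit test for every class or module you create.
- Prefer small, single-purpose functions and descriptive names over clever one-liners.
- Ask clarifying questions before generating code when requirements are ambiguous.
```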
Project Rules exist only for the project you currently have open in a window. These rules should be extremely project-specific. They should explain who the agent is supposed to be (a Salesforce dev, maybe?), your expectations for the different types of code it may output, your expectations for the documentation it should write, the libraries it should use, etc. An example of a project-specific rules file can be found here: Example project rules file for Salesforce development
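To give a feel for it, here's a minimal sketch of a project rules file, assuming the `.cursor/rules/*.mdc` format Cursor uses for project rules. The frontmatter fields and rule wording below are an illustrative guess on my part, and a real enterprise rules file would be far more detailed:

```markdown
---
description: Hypothetical Salesforce project rules (illustrative sketch only)
globs: ["**/*.cls", "**/*.trigger"]
alwaysApply: false
---
You are a senior Salesforce developer working in an enterprise org.
- Write all Apex with bulkification in mind; never put SOQL or DML inside loops.
- Create a corresponding Apex test class with meaningful assertions for every class you generate.
- Document every public method with a short header comment describing its purpose and parameters.
```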
## How to effectively prompt your AI Agents
You CANNOT just prompt an agent like you would Google (the request "Build me a Salesforce app that displays open tasks", for instance, is not gonna yield spectacular results). I think this is a very common mistake that makes people instantly believe AI is completely useless. Your agent's output is ONLY as good as the context you give it, and in an enterprise codebase, that can be a TON of information you need to feed it. That said, you also can't be too wordy in your requests; there is a fine line lol. If you're too wordy, the LLMs will easily get lost in all the information, sometimes making more mistakes than if you had just thrown out a Google-style request. Additionally, the wordier the request, the more tokens you use and the more everything costs.
So what do you do? You create a bunch of markdown files that you can feed to your agents/LLMs with each request. This allows you to easily tweak parameters and see what works best. It's what I like to call a little bit of markdown magic lol. You'll create markdown files and code examples that you can reference in your requests, much like the ones located in this folder. Then in your agent requests you'll be able to give the agent a ton of context, and easily tweak the wording and adjust the request with each transaction. Using this approach in combination with user and project rules will make the code your LLM outputs considerably more reliable.
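To make that concrete, here's a sketch of what one of these context-driven requests might look like in a Cursor agent chat. The file names (`apex-coding-standards.md`, `example-test-class.md`, `TaskController.cls`) are hypothetical placeholders for whatever context files you build, not files from this repo:

```markdown
@apex-coding-standards.md @example-test-class.md @TaskController.cls

Using the coding standards and the example test class above as your reference,
add a method to TaskController that returns all open Tasks owned by the running
user, ordered by due date. Write a corresponding test method that covers the
bulk and no-results scenarios. Do not modify any other methods in the class.
```

The nice part about keeping the standards and examples in their own markdown files is that you only rewrite the one-or-two-sentence request each time; the bulk of the context stays stable and reusable.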
Also, no matter how much you prompt it, YOU WILL ALMOST NEVER GET A PERFECT RESULT! I literally never have. At best you will get a 95% working application (if you're lucky), and you will need to tweak it yourself or ask the agent to tweak it the rest of the way. So DO NOT attempt to build prompts and markdown files that will always yield perfect results. Aim instead for 75%-85% accuracy. I mean, if the agent can write 75% of your code in an easily readable way in under 5 minutes, that's still a massive win.