I have spent the last several months using Claude Code across every project at StarApple AI, Maestro AI Labs, Section 9, and IMPACT AI Lab. Four labs. Dozens of codebases. Hundreds of sessions. And I keep finding things that make a massive difference in output quality that I do not see anyone writing about.
These are not tips from a tutorial. These are patterns I learned by running production systems in the Caribbean, where the margin for error is thin and the consequences of shipping broken code are real.
1. Give Claude Code Your Constraints Before Your Requirements
Most people start a Claude Code session by saying what they want built. That is backwards. Start with what you cannot do.
When I work on a financial inclusion project, the first thing I tell Claude Code is not "build me a credit scoring API." The first thing I tell it is: we are operating under Caribbean data protection regulations. We cannot store raw national ID numbers. We cannot make external API calls that send personal data to servers outside the region. The response time must stay under 200 milliseconds because our users are on 3G connections in rural Jamaica.
Once Claude Code understands the box it is working within, the code it produces is dramatically better. It makes different architectural decisions. It chooses lighter libraries. It builds offline fallbacks without being asked. Constraints are not limitations to Claude Code. They are design inputs. Feed them first.
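To make the "never store raw national ID numbers" constraint concrete, here is the kind of code that falls out of it. This is a sketch of one common approach, a keyed hash (pseudonymisation), not the actual StarApple implementation; the function name and pepper value are invented for illustration.

```python
import hashlib
import hmac

# Illustrative only. In a real system the pepper would come from a
# secrets manager, never a source file.
SECRET_PEPPER = b"example-pepper-do-not-use"

def pseudonymize_national_id(raw_id: str) -> str:
    """Return a keyed SHA-256 hash of a national ID so the raw value
    is never persisted, while the same ID still maps to the same token."""
    return hmac.new(SECRET_PEPPER, raw_id.encode("utf-8"), hashlib.sha256).hexdigest()
```

Because the hash is deterministic, you can still join records on the token; because it is keyed, an attacker with the database alone cannot brute-force IDs back out.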
I tell my engineers this all the time: the AI Boss approach to Claude Code is the same as the AI Boss approach to everything. You set the boundaries. You define the rules. Then you let the tool do its work inside those boundaries. That is how you stay in control.
2. Use Claude Code to Write Your Tests Before Your Code
This one changed how my entire team works. Instead of asking Claude Code to build a feature and then write tests for it, flip the order. Describe the behaviour you want and ask Claude Code to write the test suite first. Then ask it to write the code that passes those tests.
Why does this work so well? Because when Claude Code writes both the code and the tests at the same time, it unconsciously writes tests that pass. That is not testing. That is confirmation bias in code form. When you separate the two steps, the tests become an actual check on the code rather than a mirror of it.
At Section 9 AI Lab, we started doing this for every new module. The result was immediate. We caught three bugs in a single sprint that would have made it to production under the old workflow. Three bugs in safety-critical AI systems. The kind of bugs that erode trust and, in our line of work, could affect real people.
Write the tests first. It takes ten extra minutes. Those ten minutes have saved us weeks of debugging.
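Here is what the two-step flow looks like in practice, using pytest as the article suggests. The rounding helper is a hypothetical example I made up to show the shape of the workflow, not code from any of the labs mentioned.

```python
# --- Step 1: ask Claude Code for the test suite only, from a behaviour
# description: "round currency amounts to cents, ties away from zero" ---

def test_rounds_half_up():
    assert round_to_cents(10.005) == 10.01

def test_negative_ties_round_away_from_zero():
    assert round_to_cents(-2.675) == -2.68

# --- Step 2: in a separate prompt, ask for code that passes those tests ---
from decimal import Decimal, ROUND_HALF_UP

def round_to_cents(amount: float) -> float:
    """Round a currency amount to 2 decimal places, ties away from zero."""
    quantized = Decimal(str(amount)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    return float(quantized)
```

Note that a naive `round(10.005, 2)` would fail the first test, because Python's built-in `round` uses banker's rounding on binary floats. That is exactly the kind of bug a pre-written test catches and a post-written test quietly mirrors.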
3. Create a CLAUDE.md File and Treat It Like Your Constitution
Claude Code looks for a file called CLAUDE.md in your project root. Most developers either do not know this or treat it as optional. It is not optional. It is the single most impactful thing you can do to improve Claude Code's output.
Your CLAUDE.md should contain:
- Your project's architecture in plain English. Not a diagram. Not a link to a wiki. Plain words that describe how the system is structured.
- Your coding conventions. Do you use snake_case or camelCase? Do you prefer explicit imports or wildcard imports? Do you write docstrings or not? Tell Claude Code.
- Your hard rules. Things that must never happen. "Never use eval()." "Never log personal data." "Always validate inputs from external sources." Claude Code follows these consistently once you state them.
- Your preferred libraries. If you want requests instead of httpx, say so. If you want pytest instead of unittest, say so. Otherwise Claude Code will pick whatever it thinks is best, and its opinion may differ from yours.
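As a concrete sketch, a CLAUDE.md covering those four sections might look like the following. The architecture and project details here are invented for illustration; only the hard rules and library preferences are lifted from the points above.

```markdown
# CLAUDE.md

## Architecture
A FastAPI service in `api/` talks to Postgres through repository
classes in `db/`. Background jobs live in `workers/`. No service
calls the database directly; everything goes through a repository.

## Conventions
- snake_case for functions and variables, PascalCase for classes.
- Explicit imports only. No wildcard imports.
- Every public function gets a docstring.

## Hard rules
- Never use eval().
- Never log personal data.
- Always validate inputs from external sources.

## Preferred libraries
- HTTP client: requests, not httpx.
- Testing: pytest, not unittest.
```

Keep it short enough that you would actually maintain it. A stale CLAUDE.md is worse than none, because Claude Code will follow it faithfully.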
I maintain a CLAUDE.md in every active repo at StarApple AI. Each one is different because each project has different constraints. The financial inclusion projects have strict data handling rules. The sports AI projects have performance requirements. The climate resilience projects have specific data format standards. Claude Code reads all of it and adjusts accordingly.
This is the "Be the Boss of AI" principle in practice. You set the standard. The AI follows it.
4. When Claude Code Gets Stuck, Do Not Push. Restart.
This is counterintuitive and it took me weeks to learn. When Claude Code starts producing bad results, when it is going in circles, when it keeps trying variations of the same broken approach, the worst thing you can do is keep pushing. More instructions, more corrections, more context. It does not help. It makes things worse.
What works: close the session entirely. Take two minutes. Think about what you actually want. Then start a new session with a clearer, simpler description of the problem.
I have a rule at Maestro AI Labs: if Claude Code has not solved the problem in three attempts, the problem is your prompt, not the tool. Rewrite the prompt from scratch. Describe the problem differently. Break it into smaller pieces. Every single time I have done this, the new session produces better results than the tenth attempt in the old one.
There is something about the conversation history in a struggling session that poisons the well. Claude Code starts optimising for consistency with its previous wrong answers rather than finding the right answer. A fresh start fixes that. Use it liberally.
5. Use Claude Code to Onboard New Engineers to Your Codebase
This is the tip that gets the most surprised reactions when I share it. When a new engineer joins one of our teams, part of their onboarding now includes a Claude Code session with the codebase.
I do not mean they read the code with Claude Code. I mean they interview it. They ask: what does this service do? Why is this function structured this way? What happens when this endpoint receives invalid data? Where are the database migrations handled? How does the authentication flow work?
Claude Code reads the codebase and answers accurately. Not perfectly. But accurately enough that a new engineer can get oriented in hours instead of days. They still need to read the critical paths themselves. They still need a human mentor. But the baseline understanding that used to take a week of confused README reading and hesitant Slack questions now happens in an afternoon.
For a Caribbean startup with limited engineering resources, this matters enormously. We cannot afford a month-long onboarding process. We need people productive fast. Claude Code gets them there.
The Bigger Picture
All five of these tips come down to the same idea: Claude Code is a power tool, and power tools reward skill. A chainsaw in the hands of someone who knows what they are doing is transformative. In the hands of someone who does not, it is dangerous. The difference is not the tool. It is the operator.
I have been telling people to be the Boss of AI since 2014, when a nontechnical staff member at my first startup coined the name during a sprint session where I taught her to build a neural network from scratch. She looked at me and said, "You are the AI Boss." The name stuck because I would not stop saying it. Claude Code is the most powerful AI tool I have worked with, and the principle holds. You set the direction. You define the constraints. You evaluate the output. You take responsibility for what ships.
The tips are practical. The philosophy is what makes them work.
"Claude Code rewards clarity the way a good engineer rewards a good spec. Be precise about what you want. Be honest about what you do not know. The tool will meet you where you are." - Adrian Dunkley, AI Boss