2 minutes ago by Dak
Effective AI Coding Is About Context, Not Magic

People love to talk about AI as if it’s some omniscient co-pilot that will do the heavy lifting for them. That’s wishful thinking. AI is powerful, but it’s not a magician. It’s more like a teammate with perfect memory and infinite hands, but no intuition. To get good results, you have to treat it accordingly.
Treat It Like a Teammate with a Shared Context
When I work with AI, I speak to it the way I’d explain a problem to another engineer sitting next to me. The difference is that this “teammate” has a massive context engine. It can look at my codebase, internal documentation, and even external references. But there’s a catch: its “brain” has limits. Think of it as a big but bounded window.
Here’s how I break down its accuracy:
- Codebase: ~70% accurate. It’s familiar, but not omniscient.
- Documentation: ~90%. It thrives on structured language.
- The Web: 50/50 at best. The web is a mess of misinformation and near-miss solutions.
If you don’t point the model at the right sources with clear instructions, you’ll get garbage. Specificity is everything. Point it at the right files, patterns, docs, or web pages, and suddenly it becomes incredibly useful.
Patterns Matter More Than Ever
I’ve spent my career wanting to instill deeper patterns in codebases: things that make iteration faster, onboarding smoother, and quality more predictable. And every time, those patterns have been sacrificed in the name of “shipping faster.” Sure, you move product forward, maybe make some money, and promise yourself you’ll clean it up “after.”
That “after” rarely comes. The tech debt piles up. Teams slow down. And now, with AI entering the picture, the cracks widen. Why? Because AI performs dramatically better when your codebase has solid, consistent patterns. It thrives in well-structured environments. Ironically, the same practices that would’ve made humans more productive are now prerequisites for AI to shine.
Understand What It Produces or You’re Flying Blind
Let’s say you ask AI to build a webpage. You look at it in the browser, and it seems fine. But is it mobile-friendly? Does it handle iPads? Are all the buttons wired correctly? Is the SEO structure right? You can’t just accept the output at face value.
When something breaks, your prompt matters. “This button doesn’t work” is useless. “The href inside this anchor link is null, but should be a string. We’re using X framework” is gold. That’s the difference between the model fumbling around and actually solving your problem.
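To make that concrete, here’s a minimal sketch of the kind of precise diagnosis worth pasting into a prompt. `LinkInfo` and `findBrokenLinks` are my own illustration, not part of any real framework: a small check that finds anchors whose `href` is null or empty, so you can report the exact failing element instead of “this button doesn’t work.”

```typescript
// Hypothetical helper: given extracted link data, flag anchors whose
// href is missing -- the concrete detail worth including in a prompt.
type LinkInfo = { text: string; href: string | null };

function findBrokenLinks(links: LinkInfo[]): LinkInfo[] {
  // An anchor is "broken" here if its href is null or an empty string.
  return links.filter((l) => l.href === null || l.href.trim() === "");
}
```

Running it over your rendered page gives you the specific element and the specific defect, which is exactly the level of detail that turns a vague complaint into a fixable report.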
If you can’t read or validate the code it produces, you’re working in the dark. And working in the dark is how you end up shipping nonsense.
AI Extends My Hands, Not My Brain
AI isn’t an extension of my intellect. It’s an extension of my physical capacity. I can’t write two files at once. AI can. I can point it at a problem, give it the right context, and let it work while I handle something else. That’s where the real leverage comes from.
But if the output doesn’t match the standard I’d set myself, then I’ve failed to guide it properly. It’s not the model’s fault. It’s mine.
When I Don’t Know the Answer
Sometimes, I don’t know exactly what I want. If you showed me the finished code, I’d know it’s right. But getting there requires exploration. That’s when I use AI as a conversational partner.
I enter “ask” or “plan” mode (different IDEs use different terms). Here, the model doesn’t write any files. It’s like a design discussion with a colleague. I’ll ask:
How would you implement a 3D rendered card that tilts on hover?
It responds with a plan. I critique parts of it, refine others, specify libraries, colors, logic, and patterns. This back-and-forth builds shared context. Once we’re aligned, I switch to “action” mode and say two words: do it. The result is usually spot on, because we agreed on the direction beforehand.
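For the tilt-on-hover card, the plan we converge on usually boils down to a small piece of math. This is a sketch of one common approach, assuming a pointer-tracking design; `tiltFromPointer` and its parameters are my own illustration, not the post’s actual code: map the cursor’s position inside the card to small rotateX/rotateY angles, then apply them via a CSS transform with perspective.

```typescript
// Map a cursor position inside the card to tilt angles.
type Tilt = { rotateX: number; rotateY: number };

function tiltFromPointer(
  x: number,      // cursor x, relative to the card's left edge
  y: number,      // cursor y, relative to the card's top edge
  width: number,  // card width in px
  height: number, // card height in px
  maxDeg = 10     // maximum tilt in degrees
): Tilt {
  // Normalize the cursor to [-1, 1] around the card's center.
  const nx = (x / width) * 2 - 1;
  const ny = (y / height) * 2 - 1;
  // Right of center tilts around the Y axis; below center around X.
  return { rotateX: -ny * maxDeg, rotateY: nx * maxDeg };
}

// In the browser you'd wire this to pointermove, roughly:
// const t = tiltFromPointer(e.offsetX, e.offsetY, card.offsetWidth, card.offsetHeight);
// card.style.transform =
//   `perspective(600px) rotateX(${t.rotateX}deg) rotateY(${t.rotateY}deg)`;
```

Agreeing on details like this in “ask” mode, down to the max angle and the perspective value, is exactly the shared context that makes “do it” land on the first try.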
The Confidence Game
The most important factor in all of this is confidence. If I don’t understand what it’s doing, or if I’m not confident in the context I’ve given it, I know the output will be junk. People love to say “I gave AI this and it gave me slop.” Yeah, no surprise. If your confidence level is low, your inputs are weak, and your context is shallow, you’ll get exactly what you put in.
High confidence comes from:
- Deep understanding of your own codebase.
- Clear, specific prompting.
- Consistent patterns.
- Iterative alignment before execution.
Either It Teaches Me, or It Works for Me
The best AI interactions fall into two buckets:
- It helps me learn something I didn’t know.
- It does something for me physically.
Both require me to stay in the driver’s seat. If I don’t, the quality drops, and the whole exercise becomes a gimmick.
I’m genuinely curious: how do you use AI in your workflow? What repeatable processes give you consistently high-quality outputs? Share them. We need more real processes and fewer magic-wand fantasies.