Treating AI Like a Contractor: Guide to Force Multiplication
AI proficiency has become the difference between thriving and being left behind in software development. AI advances rapidly, so we must keep pace by improving our ability to use it effectively.
The good news? Like any skill, deliberate practice builds competence[1]. The more targeted experience you gain, the more you accomplish. This article shares what works based on practical experience.
The Contractor in Your IDE
Think of AI as a type of contractor: brilliant, fast, and tireless—but with no memory, no wisdom, and no stake in your project’s success. This mental model changes how you interact with AI tools. Future articles will explore other mental models in detail.
Just as with human contractors, you must clearly communicate your needs and expectations to AI. Great results require clear context, concrete constraints, and validation checkpoints.
The harsh truth: AI excels at certain tasks, delivering results faster and cheaper than any human. Many developers initially reject this reality.
Have you thought or said something like this?
“I have tried AI. It’s nothing more than a fancy auto-complete.”
Or:
“I tried it. I can produce better code faster without its so-called help.”
I thought exactly these things before I realized the problem wasn’t AI; it was my approach. More specifically, my lack of skill in working with it. If you cannot use AI to write better software faster than you ever imagined, the issue is technique, and that’s fixable.
AI as a Force Multiplier
AI isn’t a 10x developer; it’s a force multiplier. A developer understands how the project fits with the business, the larger mission, and other context unavailable to the AI. This means you must carefully focus and direct the AI toward useful activities. Done well, you get useful results quickly.
Here’s what that means in practice.
AI Contractor Superpowers
- Execute tasks at superhuman speed
- Recognize patterns across vast knowledge domains
- Generate creative solutions you might not consider
- Work tirelessly without reduction in performance
How AI Contractors Differ from Humans
- No memory between interactions unless you explicitly provide it
- Zero wisdom about your codebase, architecture, or constraints
- Will confidently hallucinate when uncertain, responding with convincing (and sometimes amusing) but incorrect information[2]
- Cannot make strategic decisions
- Can enter repetitive loops when stuck on a problem
While these appear to be limitations, understanding them transforms them into manageable constraints. Sometimes these characteristics can even be useful.
Your Responsibilities
- Provide robust, efficient context
- Validate every output
- Make architectural decisions with clear implementation expectations
- Iterate and refine specifications and improve problem definitions
- Redirect to ensure effective progress
- Use the best tool for each job
- Protect your private data
You must define what you want done, how to do it, and establish validations.
Data Privacy and Security
Like human contractors, AI tools require careful data management. Never assume data you share is private. Encrypt or obscure sensitive information before sharing with AI.
Tools like Age provide practical encryption without complex infrastructure. Remember that AI assistants may not follow instructions as literally as traditional software. For example, if you specify a working directory but later reference an external file, the AI might access that file despite your initial constraints.
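One practical way to obscure sensitive information is to run text through a redaction filter before pasting it into an AI tool. The sketch below is illustrative, not an exhaustive scrubber; the patterns and the `redact` helper are my own assumptions about what such a pre-filter might look like.

```python
import re

# Minimal sketch: redact a few common secret patterns before sharing
# text with an AI tool. Real usage would need a much broader pattern set.
SECRET_PATTERNS = [
    (re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)\S+"), r"\1<REDACTED>"),
    (re.compile(r"(?i)(password\s*[:=]\s*)\S+"), r"\1<REDACTED>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<REDACTED-SSN>"),  # US SSN shape
]

def redact(text: str) -> str:
    """Apply each redaction pattern in turn and return the cleaned text."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

For example, `redact("api_key = sk-12345")` yields `"api_key = <REDACTED>"`. Redaction like this complements encryption: encrypt what must stay secret at rest, and strip what never needs to leave your machine at all.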
Choosing the Right Tool and Mission
AI focus has two aspects: knowing where to apply AI and maintaining proper aim. AI is not the best tool for every job, so focus on the things it does best. I am reminded of Star Trek, where Dr. McCoy, when asked to do something outside his area of expertise, often reminds the captain, “I’m a doctor, not a bricklayer!”
Just as you expect better results for a plumbing job when hiring a plumbing contractor than a roofing contractor, you need to ensure you use the best models and tools for the job at hand. Future articles will explore this in detail.
A force multiplier is useful only when it brings you closer to your goal. Think of golf. The goal is to move the ball into the hole with the fewest strokes. Beginner golfers often focus on hitting the ball as far as they can. They quickly learn that hitting the ball twice as far helps only when it moves the ball closer to the hole. Aim is as important as distance.
Understanding AI Behavior
What we observe as hallucinations aren’t defects; they’re features of probabilistic design[2], [3]. They enable creativity but require validation loops.
Test-Driven Development still holds regardless of how the code is written[4]. When interacting with AI, we also need Validation-Driven Prompting (VDP).
Test‑Driven Development ensures code aligns with expectations defined in tests; similarly, Validation-Driven Prompting ensures AI responses or agent actions align with expectations defined in prompts.
I recommend instructing AI agents to develop software using TDD. AI agents benefit from TDD the same way human developers do. Defining tests first helps communicate more precisely what we need. It provides better context.
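To make the test-first idea concrete, here is a sketch of the kind of test you might hand an AI agent before any implementation exists. The `slugify` function and its spec are hypothetical; the point is that the test communicates the requirements more precisely than prose would.

```python
import re

# Written first: the test IS the specification we give the AI agent.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("already-slugged") == "already-slugged"

# One implementation an agent might produce against that spec:
def slugify(text: str) -> str:
    """Lowercase, collapse non-alphanumeric runs to hyphens, trim edges."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")
```

Prompting with the test attached gives the agent an unambiguous success criterion: the code is done when `test_slugify` passes.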
VDP helps the AI know when and how a response meets our needs. Here is a quick example.
While working on a markdown table, I asked AI to insert two new columns. Multiple prompts failed until I instructed the AI to validate each field as a data matrix. After adding that validation instruction, the AI updated the table correctly. This is just one example of many I have experienced personally.
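The validation instruction in that table example amounts to a simple structural check: treat the table as a data matrix and confirm every row has the same number of columns. A sketch of that check, written by hand here rather than taken from any AI tool:

```python
def validate_table(markdown: str) -> bool:
    """Return True if every row of a markdown table has the same column count."""
    rows = [line.strip() for line in markdown.strip().splitlines() if line.strip()]
    # Strip the outer pipes, split on the inner ones, and collect the counts.
    column_counts = {len(row.strip("|").split("|")) for row in rows}
    return len(column_counts) == 1
```

Asking the AI to perform (and report) this kind of check after each edit is the essence of VDP: the prompt defines not just the task but the test the response must pass.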
For this series, when I use the term “AI,” I generally mean Large Language Models (LLMs) — systems trained to predict probable text outputs based on input patterns[5]. Understanding they’re probabilistic, not deterministic, explains both their power and pitfalls.
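A toy sketch makes the probabilistic part tangible. Real LLMs score tens of thousands of tokens with a neural network at every step; the made-up three-token distribution below only illustrates the sampling mechanism, nothing more.

```python
import random

# Made-up next-token distribution for illustration only.
next_token_probs = {"the": 0.5, "a": 0.3, "an": 0.2}

def sample_next(probs: dict, seed=None) -> str:
    """Pick one token by weighted random sampling, as an LLM decoder might."""
    rng = random.Random(seed)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]
```

Two runs with different seeds can legitimately return different tokens from the same input, which is exactly why the same prompt can yield different answers, and why validation matters more than with deterministic software.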
Steve Jobs famously described a computer as a bicycle for the mind. AI is becoming a mind for that bicycle. The difference between beautiful, production-ready code and total garbage comes down to how you interact with that mind. While this is true when working with any contractor, your AI contractor lacks the agency afforded a human, so you must adjust your approach accordingly.
Building Skill on a Moving Frontier
AI skill development requires deliberate practice with feedback loops[1]. But here’s the twist: the frontier moves while you’re learning it.
AI changes so quickly that some things true today are outdated tomorrow. Another challenge is grasping non-linear improvement: AI capabilities don’t grow steadily; they surge, stall, and make dramatic jumps.
This Means Two Things
Focus on transferable skills: Context engineering, validation patterns, and mental models (like the contractor framework) remain valuable even as specific tools change.
Build your practice loop: Nobody knows what will work best tomorrow because those techniques don’t exist yet. You need a system for continuous experimentation and improvement.
The Context Engineering Evolution
Prompt engineering exemplifies how the field evolves. Initially popular, it was declared unnecessary as models gained automated reasoning capabilities. Then practitioners discovered it wasn’t dead—
Response quality still depends on context quality.
The “death” of prompt engineering gave birth to context engineering[6]. The upshot: no matter what you do with AI, context matters. Mastering context is a transferable skill.
What This Series Covers
This series targets developers with basic AI experience who want to move from “fancy autocomplete” to genuine force multiplication[7].
Examples will be based on my macOS setup, but translate easily to other platforms.
Next Steps
AI isn’t magic. It’s a tool that rewards skill. Like learning any technology, you need deliberate practice, feedback loops, and experimentation. The difference? The frontier keeps moving, so your learning system matters more than memorizing specific techniques.
The most important transferable skill right now? Context engineering.
The next articles will focus on context engineering, including some things rarely discussed elsewhere.
Try this now: Take your last frustrating AI interaction. How many prompts did it take until you were satisfied with the response or gave up? If you were finally satisfied, what did your last prompt provide that was missing from the previous prompts? If you were never satisfied with the response, why not?
Questions? Experiments to share? Reach out. We accomplish more through collaboration.