First, Let’s Clarify: These Tools Are Different
Many people treat “AI coding” as one thing, but the tools on the market are positioned very differently.
GitHub Copilot
The first and most widely used AI coding assistant.
Positioned as “autocomplete on steroids.” You type a few characters, it guesses what you want to write, you press Tab to accept.
Pros: Seamlessly integrates into your IDE and barely changes your workflow. Cons: It only sees your current file and a limited amount of context, so it doesn’t understand the whole project.
Cursor
A new product with AI deeply integrated into the IDE.
Positioned as an “AI-native development environment.” Not just autocomplete, but conversation, refactoring, and code explanation.
Pros: Better context understanding than Copilot; it can index your entire project. Cons: Requires switching editors and has a steeper learning curve.
Claude Code
Anthropic’s CLI tool—I’m using it right now to write this article.
Positioned as an “AI Agent.” Not just completion or conversation; it can autonomously execute multi-step tasks: read files, write code, run tests, commit.
Pros: Can handle complex tasks; this is true “delegation,” not just “assistance.” Cons: Requires trusting it to operate on your project and getting comfortable with letting go.
Quick summary:
| Tool | Position | Interaction | Context Understanding |
|---|---|---|---|
| Copilot | Smart autocomplete | Type → suggestion → Tab | Single file + limited |
| Cursor | AI IDE | Conversation + completion | Project-level |
| Claude Code | AI Agent | Delegate tasks | Project-level + executable |
Before choosing a tool, ask yourself: Do you want “faster typing” or “delegating tasks”?
Scenarios Where AI Actually Helps
Regardless of which tool you use, AI genuinely helps in these scenarios:
Scenario 1: Boilerplate Code
The most obvious productivity boost.
CRUD operations, API endpoints, React component skeletons, test setup… AI writes this kind of highly repetitive, pattern-based code quickly and well.
A standard REST controller that used to take 10 minutes now takes 30 seconds.
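As a rough sketch of what I mean (the framework, the Task resource, and every name below are my own illustration, not from any particular project), this is the kind of Express-style endpoint an assistant can produce from a single prompt:

```typescript
// Hypothetical boilerplate: an Express endpoint for a made-up "Task" resource.
import express, { Request, Response } from "express";
import { randomUUID } from "node:crypto";

interface Task {
  id: string;
  title: string;
  done: boolean;
}

// In-memory store standing in for a real persistence layer.
const tasks = new Map<string, Task>();

const app = express();
app.use(express.json());

// Create a task.
app.post("/tasks", (req: Request, res: Response) => {
  const task: Task = { id: randomUUID(), title: req.body.title, done: false };
  tasks.set(task.id, task);
  res.status(201).json(task);
});

// Fetch a task by id.
app.get("/tasks/:id", (req: Request, res: Response) => {
  const task = tasks.get(req.params.id);
  if (!task) return res.status(404).json({ error: "not found" });
  res.json(task);
});

app.listen(3000);
```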
This isn’t “AI is better than you”—it’s “this kind of code shouldn’t take human time anyway.”
Scenario 2: Unfamiliar Frameworks or Languages
You’re a Java developer who suddenly needs to modify some Python. It’s your first time using Next.js and you’re not sure how to write routing. You need a library you’ve never touched.
Before, you’d read the docs, look at examples, and trial-and-error your way through. Now you ask AI and it gives you working code directly.
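For instance, asking how to add an API route in the Next.js App Router might get you something like this (the route path and response are invented for illustration; check it against the Next.js version you’re actually on):

```typescript
// Hypothetical answer: a Next.js App Router route handler.
// File: app/api/hello/route.ts (the file path itself defines the URL /api/hello).
export async function GET(request: Request) {
  // Read a query parameter to show where request data comes from.
  const name = new URL(request.url).searchParams.get("name") ?? "world";
  return Response.json({ message: `hello, ${name}` });
}
```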
Note: What it gives isn’t necessarily “best practice,” but at least it works. You can adjust from a working version.
Scenario 3: Debugging and Error Message Interpretation
Get an incomprehensible error message, paste it into AI, and it usually explains the cause and a solution.
This saves not just time, but frustration. Especially for those problems Stack Overflow doesn’t cover, where the Google results are all outdated.
Scenario 4: Code Refactoring
“Split this class into three.” “Convert this callback to async/await.” “Add TypeScript types.”
AI does this kind of mechanical but error-prone work more reliably than a human, because it won’t miss a change or make a typo.
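A small before/after for the second request (the readConfig function and its callback signature are invented for illustration):

```typescript
// Hypothetical before/after for "convert this callback to async/await".
import { readFile } from "node:fs";
import { readFile as readFileAsync } from "node:fs/promises";

// Before: callback style, with errors threaded through the callback.
export function readConfig(
  path: string,
  done: (err: Error | null, config?: unknown) => void
): void {
  readFile(path, "utf8", (err, data) => {
    if (err) return done(err);
    try {
      done(null, JSON.parse(data));
    } catch (parseErr) {
      done(parseErr as Error);
    }
  });
}

// After: async/await style, errors surface as rejected promises.
export async function readConfigAsync(path: string): Promise<unknown> {
  const data = await readFileAsync(path, "utf8");
  return JSON.parse(data);
}
```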
Scenario 5: Writing Documentation and Comments
Many engineers hate writing docs. AI can generate explanations from code; you just review and adjust them.
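For example, an assistant might draft something like this from an existing function (both the function and the comment wording here are invented):

```typescript
// Hypothetical: a small utility plus the kind of doc comment an assistant might draft.
// You edit the wording instead of starting from nothing.

/**
 * Applies a fractional discount to a price in cents and rounds to the nearest cent.
 *
 * @param priceCents - the original price, in cents
 * @param rate - fractional discount, e.g. 0.1 for 10% off
 * @returns the discounted price, in cents
 */
export function discountedPrice(priceCents: number, rate: number): number {
  return Math.round(priceCents * (1 - rate));
}
```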
Going from “blank page” to “editing a draft” lowers the psychological barrier significantly.
But Here’s the Problem
If AI is so useful, why do some people get slower? Why do some complain “AI writes terrible code”?
Because AI has two fundamental problems:
Problem 1: AI Doesn’t Understand Your Project Conventions
Every project has its own conventions:
- Variable naming style (camelCase? snake_case?)
- File structure (where do services go? how are utils organized?)
- Error handling approach (throw exception? return error object?)
- Test writing style (how to mock? where do fixtures go?)
AI doesn’t know these. It only knows how code is written “in general.”
So the code it generates is technically correct, but stylistically inconsistent with your project.
You either spend time fixing it, or accept that your project’s style becomes increasingly chaotic.
Both choices have costs.
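Here’s a made-up illustration of the mismatch. Suppose your project’s convention is to return result objects instead of throwing (the Result type and parsePort function are hypothetical):

```typescript
// Hypothetical illustration of a convention mismatch; the Result type and
// parsePort function are invented, not from a real codebase.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

// What this project expects: errors come back as values, never as throws.
export function parsePort(raw: string): Result<number> {
  const port = Number(raw);
  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: port };
}

// What an assistant that only knows the "general" style might produce instead:
export function parsePortGeneric(raw: string): number {
  const port = Number(raw);
  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
    // Technically fine, but it violates the project's no-throw convention.
    throw new Error(`invalid port: ${raw}`);
  }
  return port;
}
```

Both versions are correct; only one fits the project.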
Problem 2: AI Doesn’t Understand Your Business Logic
This one is the more fatal problem.
AI can help you write a “standard” shopping cart feature. But it doesn’t know:
- Your company’s discount rules have 17 exception cases
- This field has special meaning in the legacy system
- That last bug was because a certain edge case wasn’t handled
- This API’s naming is wrong, but can’t be changed because clients are already using it
This context lives in senior engineers’ heads, not in any documentation, and certainly not in AI’s training data.
The code AI generates is correct in the “general scenario,” but it might be wrong in “your project.”
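To make this concrete, here’s an entirely invented sketch: the first function is the “standard” shopping-cart discount an assistant can write; the second encodes the kind of exception that lives only in a senior engineer’s head.

```typescript
// Hypothetical illustration; every business rule below is invented.
interface CartItem {
  sku: string;
  priceCents: number;
  quantity: number;
}

// What an assistant writes: a perfectly reasonable "standard" discount.
export function applyDiscountGeneric(items: CartItem[], rate: number): number {
  const total = items.reduce((sum, item) => sum + item.priceCents * item.quantity, 0);
  return Math.round(total * (1 - rate));
}

// What this project actually needs: the same rule plus an exception nobody wrote down.
export function applyDiscountForThisProject(items: CartItem[], rate: number): number {
  return items.reduce((sum, item) => {
    // Invented exception: gift cards (SKU prefix "GC-") are never discounted,
    // because of an incident that only the senior engineers still remember.
    const effectiveRate = item.sku.startsWith("GC-") ? 0 : rate;
    return sum + Math.round(item.priceCents * (1 - effectiveRate)) * item.quantity;
  }, 0);
}
```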
Who Uses It Well, Who Doesn’t
From what I’ve observed, people who use AI well share several traits:
Trait 1: Familiar with the Project
They know project conventions, so they can quickly judge “what needs changing in AI’s output.”
They know business logic edge cases, so they can add necessary handling after AI produces code.
They know where AI commonly makes mistakes, so they specifically check those parts.
Trait 2: They Track and Verify
They don’t use AI’s output directly. Instead, they:
- Read through it, confirm logic is correct
- Run tests, confirm behavior is correct
- Compare with original code, confirm no side effects
Many people skip this “verification” step. The result: bugs get buried and cost even more time to fix later.
Trait 3: They Learn from AI Output
Good users observe how AI writes and pick up new approaches or techniques from it.
“Oh, this library has this method.” “Oh, you can handle errors this way.”
AI becomes a senior engineer you can ask anytime, not just a typing machine.
Conversely, people who use it poorly typically:
Situation 1: Unfamiliar with Project, Can’t Judge
A new hire uses AI to write code. They write fast, but the PR gets rejected with a pile of issues.
They don’t know the project conventions, so they can’t see where the AI went wrong.
Result: the time spent fixing it exceeds what it would have taken to write it themselves.
Situation 2: Over-trust AI, Don’t Verify
“AI wrote it, should be fine, right?” Commit directly, deploy directly.
Then comes the incident.
Situation 3: Only Use AI to Speed Up, Don’t Think
They treat AI as a “typing accelerator,” not a “thinking partner.”
They can type more, but code quality doesn’t improve, and it might even decline.
Conclusion: The Key Is Judgment
AI-assisted development tools—used well, they’re a multiplier; used poorly, they’re a burden.
The difference isn’t the tool itself, but whether the user has judgment:
- Judging what’s right and wrong in AI’s output
- Judging what to let AI write, what to write yourself
- Judging when to trust AI, when to question it
This judgment comes from:
- Familiarity with the project: You need to know conventions to judge violations
- Understanding of the business: You need to know requirements to judge correctness
- Mastery of the technology: You need to understand principles to judge reasonableness
AI won’t give you these. They take time to accumulate: real experience and lessons learned from mistakes.
So the conclusion is:
AI tools make strong developers stronger, but won’t make weak developers strong.
If you already have judgment, AI doubles your efficiency. If you lack judgment, AI lets you produce more problematic code faster.
Tools are amplifiers. They amplify capabilities you already have.
How do you cultivate judgment? That’s another big topic. I’ll write a separate article to discuss it.
If you’re interested in AI-assisted testing, check out this more technical article:
👉 AI Won’t Replace QA, But It Will Replace QAs Who Only Execute