Something happened on my team recently
A junior engineer used AI to build a feature.
About 500 lines, took two hours. Pretty efficient. He was proud of it.
Our senior engineer spent ten minutes reviewing it, then frowned: “There’s a logic issue here.”
If that bug had shipped, it would have affected 100,000 user records.
Later, I kept thinking: what were those ten minutes worth?
I’m starting to think the definition of “senior” is changing
We used to call someone senior because they wrote great code, had lots of experience, and knew all the pitfalls.
But now AI can write a lot of code.
Fast. And sometimes, pretty good.
So where’s the value of a senior engineer?
My observation: it’s in judgment.
It’s not about out-writing AI. It’s about seeing what AI got wrong.
That senior engineer wasn’t valuable because he could write 500 lines.
He was valuable because in 10 minutes, he spotted the problem.
AI writes 500 lines, you judge for 10 minutes. Those 10 minutes are your value.
I call this the “10-Minute Value.”
Definition: The 10-Minute Value refers to a senior engineer’s core competency in the AI era—the ability to make critical judgments in minimal time and prevent major issues. This judgment comes from accumulated experience, fundamental knowledge, and holistic system understanding.
Key Insight: The definition of “senior” is changing in the AI era. Value lies not in writing more code, but in spotting what AI got wrong.
What’s AI like?
I think of AI as an eager intern with no experience.
It can do a lot: write code, look things up, generate reports, organize documents.
It doesn’t get tired, doesn’t complain, works 24/7.
But it needs you to check its work, guide it, and make the final call.
It doesn’t know if this code will become technical debt in three months.
It doesn’t know if this API design will make the frontend team want to quit.
It doesn’t know that this requirement will probably change next week.
You know these things.
Your value isn’t “doing more than the intern.” It’s “knowing whether the intern got it right.”
If you compete with AI on output, you’ll lose.
If you compete with AI on judgment—that’s something it can’t do.
Key Insight: AI is like an eager intern with no experience. It doesn’t know if code will become technical debt, if API designs are sensible, or if requirements will change—these are where your value lies.
Let’s be honest: AI isn’t perfect
Before we go further, I want to acknowledge something: AI has its problems.
Research shows that developers using Copilot actually have higher bug rates.
Microsoft’s data says it takes 11 weeks to fully realize the benefits of AI tools.
If your manager has concerns about AI, they’re not ignorant. Their worries are grounded in evidence.
But here’s my take: the question isn’t “whether to use AI,” it’s “how to use it.”
If you treat AI as “a tool to write more code,” you might create more bugs.
If you treat AI as “a tool to save time so you can focus on judgment,” you’ll level up.
The difference is mindset.
So what is “judgment”?
Many people hear “judgment” and think it’s abstract.
Like something only senior architects do.
But it’s not.
You make judgments every day. You just don’t realize it.
- Should this API have pagination? Judgment.
- What error message should we return? Judgment.
- Should we add an index to this field? Judgment.
- Should this logic be extracted into a function? Judgment.
- Does this requirement make sense? Should we push back? Judgment.
You’re not lacking judgment. You’re just not aware of it.
Start noticing, and you’ve already started growing.
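Even the first item on that list hides a real trade-off. Here’s a hypothetical sketch (function and field names are mine, not from any real codebase) of the two common pagination shapes — each is a defensible answer, and picking between them is exactly the kind of everyday judgment the list describes:

```python
# Two hypothetical pagination styles over an in-memory list,
# standing in for the same query against a database.

def fetch_page_offset(rows, page, size):
    """OFFSET-style: simple and supports jumping to any page,
    but page N means skipping N*size rows, and items shift
    if new rows are inserted between requests."""
    start = (page - 1) * size
    return rows[start:start + size]

def fetch_page_keyset(rows, after_id, size):
    """Keyset-style: stable under inserts and cheap at any depth,
    but you can only step forward from a known cursor."""
    return [r for r in rows if r["id"] > after_id][:size]

rows = [{"id": i} for i in range(1, 11)]
print(fetch_page_offset(rows, page=2, size=3))      # ids 4, 5, 6
print(fetch_page_keyset(rows, after_id=3, size=3))  # also ids 4, 5, 6
```

AI will happily generate either one. Knowing which fits your access pattern — deep scrolling vs. page jumps, hot inserts vs. static data — is the judgment part.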
Three common misconceptions
Let me share three mistakes I keep seeing.
Misconception 1: More AI-generated code is better
Some people think the AI era is about “producing more.”
Using AI to write 500 lines is better than writing 100 yourself.
But here’s reality: your value isn’t output volume—it’s knowing whether the output is correct.
That junior wrote 500 lines with AI.
That senior reviewed for just 10 minutes.
But the senior’s salary is twice as high.
Why? Because those 10 minutes prevented a bug that could have cost the company millions.
Don’t compete with AI on output. You’ll lose.
Misconception 2: You don’t need fundamentals anymore
Some people think: AI can write code, so why learn data structures, algorithms, or system design?
But think about it: how do you judge whether AI’s output is correct?
AI writes SQL that runs, but you need to understand indexes to know if it’ll be slow.
AI writes architecture that looks reasonable, but you need experience to know if it’ll scale.
Fundamentals aren’t for writing code. They’re for knowing whether AI’s code is right.
Learning fundamentals is investing in your judgment.
Key Insight: Fundamentals (data structures, algorithms, system design) aren’t for writing code in the AI era—they’re for judging whether AI’s code is correct. Learning fundamentals is investing in your judgment.
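To make the SQL point concrete, here’s a minimal sketch using Python’s built-in sqlite3 (the table and index names are illustrative). The query “runs” either way — only reading the query plan tells you whether it scales:

```python
import sqlite3

# Hypothetical table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

query = "SELECT id FROM users WHERE email = ?"

# Without an index, SQLite falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query,
                           ("user500@example.com",)).fetchall()
print(plan_before[0][3])  # e.g. "SCAN users"

# Add the index a reviewer would ask about, and the plan changes.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query,
                          ("user500@example.com",)).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH users USING ... idx_users_email"
```

Both versions return the same correct rows. The difference only shows up at scale — which is why it takes fundamentals, not a passing test, to catch it.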
Misconception 3: Just knowing prompts is enough
Many people are learning “how to talk to AI” and “how to write good prompts.”
That’s great, but that’s just the tool.
The real value is: AI gives you three options—which do you pick? Why?
Lots of people know how to ask questions.
Fewer people know how to judge the answers.
A simple framework: The Execution-Judgment Spectrum
When thinking about this, I drew a simple diagram:
```mermaid
flowchart LR
    subgraph Execution["⚙️ Execution Zone — AI Can Do This"]
        direction TB
        A1["Write CRUD"]
        A2["Build UI layouts"]
        A3["Write basic SQL"]
        A4["Apply templates"]
        A5["Generate reports"]
    end
    subgraph Judgment["🎯 Judgment Zone — Your Value"]
        direction TB
        B1["Code Review"]
        B2["Architecture decisions"]
        B3["Technology selection"]
        B4["Evaluate requirements"]
        B5["Prioritization decisions"]
    end
    Execution -- "Level up = Move right" --> Judgment
    style Execution fill:#ffe6e6,stroke:#ff9999,stroke-width:2px
    style Judgment fill:#e6ffe6,stroke:#99cc99,stroke-width:2px
```
Think about the past week. Where did you spend your time?
If mostly on the left, you’re competing with AI.
If partly on the right, you’re doing what AI can’t.
Leveling up means moving right.
What to do at different stages
If you’re a mid-level engineer (2-5 years)
You might be thinking: I know I need to “move up,” but I just write CRUD all day. Where’s the opportunity?
Some suggestions:
1. In your next code review, ask one more question
Beyond “is it correct,” ask: “Could this design cause problems in the future?”
That’s practicing judgment.
2. After writing code with AI, spend 5 minutes asking yourself
“How would I have written this without AI? What’s different? Is there a problem with AI’s approach?”
That’s also practicing judgment.
3. Start recording your decisions
Spend 10 minutes each week writing down one judgment you made:
```
Date: 2025/12/15
Situation: PM requested a new API
Options: A. Modify existing API  B. Create new API
My judgment: Chose B
Reasoning: Existing API is used in 10 places; modifying it is too risky
Result: (add later) Shipped smoothly, no impact on other features
```
After three months, you’ll have 12+ “judgment cases.”
That’s your story bank for interviews and evidence for promotions.
4. Don’t wait for permission
Want to become senior but haven’t been given architecture work?
Don’t wait.
- In code reviews, suggest architectural improvements
- In technical discussions, proactively draw diagrams and explain your thinking
- Write a doc: “If I were to refactor this module, here’s how I’d design it”
A senior isn’t “someone assigned to do architecture.”
It’s “someone who proactively thinks about architecture.”
If you’re a senior engineer / Tech Lead
You’re probably already using AI. So what can this article tell you?
I think there’s one thing worth considering: what’s your new value?
AI makes juniors more productive. But code review burden has increased too.
You might see that as a burden. But flip it around:
Your judgment is your team’s quality assurance.
Those reviews, those “there’s a problem here” ten-minute moments—that’s your value.
Some things you can do:
1. Create an “AI Output Code Review Checklist”
AI makes predictable mistakes. Document them. Share with the team.
2. In 1:1s, ask your reports: “What judgment did you make this week?”
This helps them start recognizing the importance of judgment.
3. Share your thinking process
Don’t just say “there’s a problem here.” Say “here’s how I spotted it.”
Help juniors learn “how to think,” not just “what to do.”
4. In interviews, try asking
- “What have you built with AI? What problems did you discover? How did you handle them?”
- “Give me an example where AI suggested A, but you chose B. Why?”
These questions reveal whether someone has “judgment,” not just “knows how to use AI.”
If your manager is skeptical about AI
This is a common situation.
You want to use AI, but your manager says: “AI output quality is unstable. Let’s hold off.”
My suggestion: don’t rush to convince them.
Their concerns usually aren’t unfounded.
Bug rates might actually increase. Learning curves do exist. Security risks are real.
So instead of saying “AI is great, you should let me use it,” try this:
“I’d like to try it on this small feature. If something goes wrong, I’ll take responsibility. I’ll report back with data in two weeks.”
This works because:
- It reduces the manager’s perceived risk (small scope, someone accountable)
- It gives you a chance to prove yourself with data
- Even if it fails, it’s a learning opportunity
After testing, report concrete data:
“This feature saved me X hours using AI. Bug count was Y. Compared to similar features before, quality was about the same / better / needs improvement.”
Let your manager see data, not your subjective feelings.
Also, this might help:
“Every line AI writes, I’ll review it like code from a junior. If there’s a problem, it’s on me, not AI.”
This tells your manager: you’re not trying to slack off. You’re trying to work smarter.
If you’re a manager
Maybe you’re reading this thinking: “Should my team use AI? How do I manage it?”
My view: complete prohibition and complete freedom are probably both wrong.
Your concerns are valid
- Bug rates might actually increase
- Learning curves do exist
- Security risks need consideration
This isn’t ignorance. It’s caution.
But complete prohibition has risks too
- Your competitors might be using it
- Your team members might be using it secretly (and you don’t know)
- Talented people might not want to join your team because of it
Suggested approach
1. Manage by scenario
Not “can we use it,” but “when can we use it.”
For example:
- Internal tools, prototypes: allowed
- Core transaction logic, sensitive data: requires additional review
2. Establish review standards
AI-generated code must be human-reviewed.
Review focus: logical correctness, security, maintainability.
This isn’t extra burden. It’s quality assurance.
3. Track data
Projects using AI vs. not using AI:
- Development time difference?
- Bug rate difference?
- Maintenance cost difference?
Decide with data, not feelings.
4. Build judgment, not dependency
Teach your team to “judge whether AI is right,” not “blindly accept AI.”
That’s the real job of a manager in the AI era: not managing AI, but developing people who can wield AI.
The question isn’t “will AI replace you”
I often hear people ask: “Will engineers be replaced by AI?”
I think that’s the wrong question.
A better question: “What am I doing with the time AI saves me?”
If you use AI to save time, then write more CRUD—you’re competing with AI on output. Long-term, that’s not a good strategy.
If you use AI to save time, then spend it on architecture, reviews, learning—you’re leveling up.
AI won’t replace you. But “people who use AI + have judgment” will replace “people who don’t use AI” or “people who only use AI but lack judgment.”
Key Insight: The question isn’t “will AI replace you”—it’s “what are you doing with the time AI saves you?” Using saved time to write more CRUD means competing with AI; using it for architecture, code reviews, and learning is true leveling up.
Back to those 10 minutes
What happened after that story at the beginning?
The junior talked with the senior later.
He asked: “How did you spot the problem in just 10 minutes?”
The senior said: “I’ve made this kind of logic error before. Three years ago, we shipped it, only found out afterward. Took two weeks to fix.”
“So it’s from experience?”
“Not entirely. I pay special attention to certain things: edge cases, null handling, concurrency. These are where AI slips up most.”
The junior said: “I thought writing fast with AI was progress.”
The senior said: “Writing fast is progress. But knowing where things will break—that’s a different kind of progress.”
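A hypothetical illustration of that last line — not the actual bug from the story: a happy-path draft of the kind AI often produces, next to the version a ten-minute review asks for.

```python
def apply_discount(price_cents, percent):
    """AI-style first draft: correct on the happy path,
    silent about everything else."""
    return price_cents - price_cents * percent // 100

# The ten-minute review asks the questions the draft didn't:
# what if percent is None? Over 100? Negative?
def apply_discount_reviewed(price_cents, percent):
    if percent is None:
        percent = 0                       # null handling: missing means no discount
    percent = max(0, min(percent, 100))   # edge case: clamp out-of-range values
    return price_cents - price_cents * percent // 100

print(apply_discount_reviewed(1000, None))  # 1000, instead of a TypeError
print(apply_discount_reviewed(1000, 150))   # 0, instead of a negative price
```

Nothing here is hard to write. The value is in knowing, before shipping, which inputs will actually arrive.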
Finally
I don’t know what engineers will look like in five years.
Nobody does.
But I’d guess that looking back, we’ll find:
The ones who stayed weren’t the ones who wrote the most code.
They were the ones who made the right calls at critical moments.
Those calls might have taken just 10 minutes each time.
But those 10 minutes made all the difference.
Where are you spending your 10 minutes today?
Related Articles
- What’s the Value of a PM in the AI Era?
- AI Won’t Replace QA, But It Will Replace QAs Who Only Execute
Sources
- Figma 2025 AI Report — Survey on AI’s impact on design and development work
- GitHub Copilot Productivity Research — GitHub’s official quantitative study on Copilot effectiveness
- Uplevel: Copilot and Bug Rate Study — Independent research on bug rate changes with AI tools
- ISC2 2024 Cybersecurity Workforce Study — Global cybersecurity talent trends and skill requirements
- Microsoft AI Tools Effectiveness Research — Source for the 11-week learning curve data