Judgment in the AI Era: What You Should Learn Isn’t Prompting

Prompt Skills Are Overrated

“Learn to prompt and you’ll master AI.”

This statement has misled many people.

Prompt Engineering is useful. Good prompts make AI produce more precise, relevant results.

But this only solves half the problem.

Prompting makes AI produce “more” and “faster.” But do you know if the output is “correct” or “good”?

AI wrote some code—can you tell if it has bugs? AI suggested an architecture—can you judge if it fits your project? AI said “this is the best approach”—can you evaluate if its reasoning is sound?

If you can’t, it doesn’t matter how good your prompts are.

You’re just producing “things you’re not sure are correct” faster.

The real bottleneck isn’t AI’s capability. It’s your judgment.

What Is Judgment?

“Judgment” sounds abstract. Let me break it into four concrete levels.

Level 1: Judging “Is It Correct?”

The most basic level: Is AI’s output correct?

  • Does this code have bugs?
  • Is the logic correct?
  • Are there unhandled edge cases?

This sounds simple, but many people can’t do it.

Example: AI wrote a sorting function. Can you tell if it’s O(n²) or O(n log n)? With large datasets, this difference could mean “works fine” versus “system crashes.”
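The gap is easy to see in code. Here is a hypothetical sketch (function names and data are illustrative, not from any real AI output): both functions return the same result, but they scale very differently.

```javascript
// O(n^2): nested loops that compare adjacent pairs repeatedly.
// Fine for ten elements, painful for a million.
function bubbleSort(arr) {
  const a = [...arr];
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < a.length - i - 1; j++) {
      if (a[j] > a[j + 1]) [a[j], a[j + 1]] = [a[j + 1], a[j]];
    }
  }
  return a;
}

// O(n log n): delegate to the engine's built-in sort
// with a numeric comparator.
function fastSort(arr) {
  return [...arr].sort((x, y) => x - y);
}

console.log(bubbleSort([5, 1, 4, 2, 3])); // [1, 2, 3, 4, 5]
console.log(fastSort([5, 1, 4, 2, 3]));   // [1, 2, 3, 4, 5]
```

Same output, same tests passing. Only complexity analysis tells you which one survives a large dataset.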

Level 2: Judging “Is It Good?”

Correct, but is it the best solution?

  • Is there a cleaner way to write this?
  • Is there a more maintainable structure?
  • Is there a more efficient algorithm?

Example: AI used 5 if-else statements to handle state transitions. Technically correct, but do you know you could solve this in one line with a map or state machine—and it’d be easier to maintain?
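A minimal sketch of that rewrite, using a made-up document workflow (the state names are assumptions for illustration):

```javascript
// The if-else chain an AI might produce:
function nextStateVerbose(state) {
  if (state === 'draft') return 'review';
  else if (state === 'review') return 'approved';
  else if (state === 'approved') return 'published';
  else return state;
}

// The table-driven version: the whole transition logic is one literal,
// and adding a new state is a one-line diff.
const TRANSITIONS = { draft: 'review', review: 'approved', approved: 'published' };
const nextState = (state) => TRANSITIONS[state] ?? state;

console.log(nextStateVerbose('draft')); // 'review'
console.log(nextState('draft'));        // 'review'
```

Both are correct. The second one is easier to read, extend, and audit at a glance.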

“Correct” and “good” are two different things. Many people can only judge the former.

Level 3: Judging “Is It Appropriate?”

Best solution, but is it appropriate for this project?

  • Does it follow team coding conventions?
  • Is it consistent with the existing codebase style?
  • Can other team members understand it?

Example: AI used the latest ES2024 syntax—looks elegant. But your project needs to support IE11. This code won’t even run.
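For instance, take a hypothetical grouping task: `Object.groupBy` really is ES2024 and reads beautifully, but on a legacy engine it does not exist, and the arrow function passed to it won't even parse. The ES5 loop below is the boring equivalent that runs everywhere.

```javascript
// Hypothetical data; the point is the syntax gap, not the logic.
var users = [
  { role: 'admin', name: 'Ann' },
  { role: 'user', name: 'Bo' },
  { role: 'user', name: 'Cy' },
];

// Elegant ES2024 one-liner -- unavailable on legacy runtimes:
//   const byRole = Object.groupBy(users, (u) => u.role);

// ES5-compatible equivalent:
var byRole = {};
for (var i = 0; i < users.length; i++) {
  var key = users[i].role;
  (byRole[key] = byRole[key] || []).push(users[i]);
}

console.log(byRole.user.length); // 2
```

Knowing which version your project needs is a Level 3 call, and only you can make it.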

“Best solution” is relative. Best in one context might be worst in another.

Level 4: Judging “Is It Worth It?”

Appropriate, but is it worth the time?

  • What’s the ROI of this optimization?
  • Do it now or later?
  • What’s the business impact?

Example: AI suggests refactoring the entire module to make the code cleaner. Great suggestion, but the deadline is tomorrow. Do you ship first or refactor first?

This level requires business judgment, not just technical judgment.

Why Do Most People Get Stuck at Levels 1-2?

Because Levels 3-4 require “project context” and “business understanding.”

AI doesn’t have these. It doesn’t know your team conventions, your deadlines, what your customers care about.

So AI can only give you “generically correct answers.”

Turning that into “the correct answer for your project” is your job.

That’s judgment.

Where Does Judgment Come From?

Judgment isn’t talent. It’s accumulation.

Specifically, there are five sources.

Source 1: Mistakes You’ve Made (Experience)

How many lessons have production bugs taught you?

Every incident is nourishment for judgment. Because you remember: how this error happened, what you missed, what to watch for next time.

But the question is: Did you actually learn? Or did you just fix it and forget?

Many people hit a bug, fix it, and move on. No review, no documentation, no synthesis.

This way, even after 100 bugs, judgment doesn’t improve.

Source 2: Code You’ve Read (Exposure)

How much “good code” have you seen?

To judge “is it good,” you first need to know what good looks like.

If you’ve only seen your own code, your standard is yourself. That’s dangerous.

Where is good code? Open source projects, colleagues’ PRs, technical books, internal best practice docs.

The more you see, the higher your standards rise. The higher your standards, the better your judgment.

Source 3: Problems You’ve Solved (Pattern Recognition)

Encounter the same problem three times, and the fourth time you’ll have intuition.

“This bug looks like a race condition.” “This architecture smells like it won’t scale.” “This requirement sounds like it’ll scope creep.”

This intuition is part of judgment. It comes from pattern recognition—your brain unconsciously matching against past experiences.

But the prerequisite is “consciously synthesizing.” Otherwise experience stays experience and never becomes intuition.

Source 4: PRs You’ve Been Reviewed On (Others’ Perspectives)

Others point out problems you didn’t see.

“There’s null pointer risk here.” “This naming isn’t clear enough.” “This design will be hard to change later.”

This feedback is the fastest way to build judgment. Because you see your “blind spots.”

Everyone has blind spots. The review process lets others’ eyes fill in what you can’t see.

Prerequisite: Actually read the review comments carefully. Don’t just seek approval.

Source 5: Independent Thinking (Most Critical)

The above four are all “inputs.”

But without thinking, inputs don’t become yours.

What is independent thinking? Asking “why” and forming “your own opinion.”

Specifically:

See a best practice, ask “why is this better?”

Don’t just follow it. Understand the reasoning. What problem does this practice solve? When doesn’t it apply?

PR gets rejected, ask “what did the reviewer see that I didn’t?”

Don’t just make changes and resubmit. Understand the reviewer’s thinking. Could you have spotted this issue yourself next time?

AI gives an answer, ask “if I were AI, how would I answer? Same or different?”

This question is interesting. It forces you to form your own judgment first, then compare with AI. Where’s the difference? Who’s right? Why?

This thinking process is slow. But this is how judgment truly “grows into you.”

Experience without thinking is just a log. Experience with thinking becomes judgment.

How to Develop It?

Knowing the sources isn’t enough. You need methods.

Here are five concrete things you can do.

Method 1: Ask One More Question Every Time You Use AI

Deliberate practice for judgment: Every time AI gives you output, ask yourself four questions.

  • “Is this correct?” (Level 1)
  • “Is there a better way?” (Level 2)
  • “Is this appropriate for my project?” (Level 3)
  • “Is this worth doing now?” (Level 4)

It’ll be slow at first. You might spend more time judging than AI spent generating.

That’s normal. That’s deliberate practice.

Once proficient, these four questions become instinct—seconds to judge.

Method 2: Build a Personal Checklist

Turn mistakes you’ve made into a checklist.

After every AI output, run through the checklist.

Example:

  • Does it handle null/undefined?
  • Does it consider concurrency/race conditions?
  • Does it handle errors/exceptions?
  • Does it follow team naming conventions?
  • Are there tests?
  • Is performance acceptable?
Your checklist will grow longer with experience.

This checklist is your judgment made concrete. Anyone who reads it can tell exactly which mistakes you've lived through.
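One way to make the checklist operational (a sketch; the function and data shapes are assumptions for illustration) is to keep it as data and run every piece of AI output through it:

```javascript
// Hypothetical checklist runner; the items mirror the list above.
const CHECKLIST = [
  'Handles null/undefined',
  'Considers concurrency/race conditions',
  'Handles errors/exceptions',
  'Follows team naming conventions',
  'Has tests',
  'Performance acceptable',
];

// answers: one boolean per checklist item, in order.
function review(answers) {
  const failed = CHECKLIST.filter((_, i) => !answers[i]);
  return { passed: failed.length === 0, failed };
}

const result = review([true, true, false, true, true, true]);
console.log(result.passed); // false
console.log(result.failed); // ['Handles errors/exceptions']
```

Keeping the list as data means growing it is a one-line edit, and the failures it reports are exactly the blind spots worth writing down.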

Method 3: Proactively Do Code Reviews

Don’t just wait for others to review you. Proactively review others’ code.

Why does this help?

Because reviewing others’ code forces you to “actively use judgment.”

You have to judge: Is this code correct? Good? Appropriate?

You also have to articulate: Why is there a problem here? How should it be fixed?

This builds judgment more than passively receiving reviews.

Method 4: Document Failures

Every time AI makes a mistake, every time you miss something, write it down.

  • What did AI miss?
  • What did you miss?
  • How to avoid it next time?

Spend 10 minutes weekly reviewing these records.

These records become your judgment database. Over time, you’ll notice certain errors recurring—those are your blind spots, requiring special attention.

Method 5: Practice, Then Practice More

No shortcuts.

Judgment is a muscle. You have to train it.

Use AI 100 times, consciously judge 100 times, and you’ll improve.

The key word is “consciously.” If you use AI 100 times but accept output directly every time without judging, 100 times equals 1 time.

Practice + consciousness = improvement.

Common Misconceptions

Misconception 1: Technical Ability = Judgment

You might be great at writing code, but poor at judging “should I write this.”

Technical ability is “how to do it.” Judgment is “whether to do it,” “what to do,” “when to do it.”

Many technically strong engineers have poor judgment. They’ll spend three days optimizing a feature nobody uses because “it’s technically interesting.”

Technical skill is a tool. Judgment is knowing when to use which tool.

Misconception 2: Years of Experience = Judgment

10 years of experience without reflection = 1 year of experience repeated 10 times.

Judgment comes from “conscious accumulation,” not time.

I’ve seen people with 2 years of experience who have strong judgment—because they reflect, synthesize, and learn every day.

I’ve also seen people with 10 years of experience who have mediocre judgment—because they just repeat the same work without growing.

Years don’t equal ability. Conscious practice equals ability.

Misconception 3: Good Prompts = No Need for Judgment

Good prompts → Better AI output.

This is true.

But “better output” still needs your judgment.

It's like a strong employee: a manager still has to judge the direction. No matter how strong the execution, the wrong direction is wasted effort.

AI is your employee. Its execution is strong. But whether the direction is right—that’s still your judgment call.

Prompting makes the employee better at doing things. Judgment is knowing what things should be done.

Both are needed, but judgment is scarcer.

Conclusion

Judgment is the scarcest ability in the AI era.

Because AI can do more and more, “execution ability” is no longer scarce. Anyone can have AI write code, write docs, do analysis.

What’s scarce: Knowing what should be done, what’s correct, when to do it.

That’s judgment.

It’s not talent. It’s accumulation.

Make mistakes, read code, solve problems, get reviewed, and then—think.

No shortcuts. Just practice, then practice more.

Starting today: Every time you use AI, ask one more question—“Is this really correct? Why?”

This habit will make you increasingly valuable in the AI era.
