Clone Won’t Run? Let AI Fix Your Dev Environment

🌏 Read this article in Chinese


READMEs Are Lies

You inherit a new project.

Open the README:

npm install
npm run dev

Looks simple.

Run npm install. Error. Wrong Node version.

Fine. Install nvm, switch versions, try again.

This time install works, but npm run dev fails. Missing environment variables.

README doesn’t say which ones.

Check .env.example. 20 variables. Which are required? Which can be empty? No idea.

Fill in a few randomly. Try again.

It starts, but can’t connect to the database.

Ask a colleague: “How do you run this project?”

Colleague says: “Works on my machine. Did you forget to install something?”

Half a day later, it finally runs.

How many times have you lived this?


Why Claude Code CLI

There are many AI tools out there. Why did I choose Claude Code CLI for dev environments?

1. It can read the entire project

It’s not limited to snippets you paste. It can directly read package.json, docker-compose.yml, .env.example, and understand the whole project structure.

2. It can execute commands directly

No copy-paste needed. It runs npm install, chmod 600, docker-compose up right in your terminal.

3. Continuous conversation, remembers context

Unlike ChatGPT, where you keep re-pasting background info, it remembers what errors you hit, what you tried, and what the project structure looks like.

4. It can modify files

Not just suggestions. It directly edits package.json, writes .env, updates README.md.

These features combined make it perfect for “get this project running” tasks that require back-and-forth debugging.


Decision Framework: When to Let AI Take Over

Not everything should go to AI. Here’s my framework:

✅ Fully Delegate to AI

Environment setup
  • Node version switching
  • Dependency installation
  • Environment variable configuration
  • Docker setup

Dependency upgrades
  • Upgrading package versions
  • Resolving dependency conflicts
  • Handling deprecation warnings

Security fixes
  • Running npm audit fix
  • Upgrading vulnerable dependencies
  • Fixing security issues

These are rule-based tasks with clear answers. AI is faster than humans and won’t miss anything.

🤝 Human-AI Collaboration

Code changes after upgrades
  • API changed, need to update calls
  • Syntax changed, need to rewrite
  • Behavior changed, need to adjust logic

AI can do these, but you need to verify results. Let AI make changes, you review.

❌ Human Decides

Whether to upgrade
  • Risk vs. benefit of upgrading
  • Timeline pressure
  • Team capacity

Business logic
  • Should this feature stay or go
  • Is this behavior a bug or a feature
  • Should we refactor while we're here

These require business context that AI can’t see. You decide.


Real Case: A Three-Year-Old React Project

Our team recently inherited a project.

Built three years ago. Original author left the company.

Tech stack:
  • React 16 (now we're at 18)
  • Node 14 (now we're at 20)
  • Webpack 4 (tons of deprecation warnings)
  • Environment variables scattered everywhere
  • npm audit shows 47 vulnerabilities

README says:

npm install
npm start

What actually happens:

npm install
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR! peer dep missing: react@^15.0.0

Classic.


Step 1: Let AI Diagnose + Plan the Upgrade Path

I open Claude Code and navigate to the project:

cd legacy-react-project
claude

Then I say:

“This is a three-year-old React project. I need to get it running locally. Read package.json, package-lock.json, and README. Diagnose the current state and give me an upgrade plan.”

It reads and reports:

Project Diagnosis:
- React 16.8.0 → recommend upgrading to 16.14.0 (same major version, low risk)
- Requires Node 14.x, you're on 20.x
- 47 security vulnerabilities (12 high, 3 critical)
- package-lock.json is three years old, recommend deleting and rebuilding

Recommended upgrade path:
1. First get it running with Node 14 (confirm functionality)
2. Fix security vulnerabilities
3. Then consider whether to upgrade React major version

Should I start?

This diagnosis and plan would take me at least 30 minutes to compile. AI did it in 2 minutes.
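
If you want to spot-check that diagnosis before handing over the keyboard, the same facts come from a handful of standard commands (a minimal sketch; the .nvmrc and engines checks only apply if the project declares them):

node -v                                                   # Node version you're running locally
cat .nvmrc 2>/dev/null                                    # version the project pins, if an .nvmrc exists
node -p "require('./package.json').engines"               # engines field, if declared
node -p "require('./package.json').dependencies.react"    # declared React version
npm audit                                                 # vulnerability summary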


Step 2: Let AI Execute Environment Setup

I say: “Yes, let’s get it running with Node 14 first.”

AI starts executing:

1. Switching Node version...
   Executing: nvm install 14 && nvm use 14
   ✓ Now using node v14.21.3

2. Deleting old lock file...
   Executing: rm -rf node_modules package-lock.json
   ✓ Cleared

3. Installing dependencies...
   Executing: npm install --legacy-peer-deps
   ✓ Installation complete (warnings can be ignored)

4. Scanning for required environment variables...
   Found these environment variables needed:
   - REACT_APP_API_URL (required)
   - REACT_APP_ENV (suggested: development)

   Should I create the .env file?

I say: “Create it. Use http://localhost:3001 for API_URL.”

Executing: Creating .env file
✓ .env created

5. Starting dev server...
   Executing: npm start
   ✓ Project running at http://localhost:3000

From clone to running: 10 minutes.
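
For reference, the resulting .env is just two lines. The values below are the local-development placeholders from this session, not secrets:

REACT_APP_API_URL=http://localhost:3001
REACT_APP_ENV=development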


Step 3: Let AI Handle Security Vulnerabilities

Once it’s running, I say:

“Now handle the security vulnerabilities. Run npm audit first, then fix them.”

Executing: npm audit

Found 47 vulnerabilities:
- 3 critical (need immediate attention)
- 12 high
- 32 moderate

Critical vulnerabilities:
1. nth-check <2.0.1 - Regular expression DoS
2. postcss <8.4.31 - Parser vulnerability
3. webpack-dev-server <4.0.0 - Path traversal

Attempting automatic fix...
Executing: npm audit fix

After fix: 8 vulnerabilities remaining (need manual handling)

Remaining vulnerabilities require major version upgrades that may break existing code:
- react-scripts 3.x → 5.x
- webpack 4.x → 5.x

Should I upgrade and fix the related code?

I say: “Upgrade react-scripts. Fix any code that breaks from the upgrade.”

Executing: npm install react-scripts@5 --legacy-peer-deps

Upgrade complete. Checking for code changes needed...

Found 3 places to modify:

1. src/setupProxy.js
   - Old: module.exports = function(app) { ... }
   - New: needs ES module format

   Modifying...
   ✓ Fixed

2. src/index.js
   - ReactDOM.render is deprecated in React 18
   - But we're still on React 16, this can stay

   Skipping (no impact)

3. webpack config
   - react-scripts 5.x handles this internally

   Skipping

Executing: npm start
✓ Project starts normally

Executing: npm audit
✓ 0 vulnerabilities

From 47 vulnerabilities to 0. AI handled it all, including code changes.
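
If you prefer to drive or verify this step by hand, the underlying npm commands are worth knowing. Note that the --force variant applies semver-major upgrades and can break the build, which is exactly why the AI asked before touching react-scripts:

npm audit                         # full vulnerability report
npm audit --audit-level=high      # exit non-zero only for high or critical issues
npm audit fix                     # apply non-breaking, semver-compatible fixes
npm audit fix --force             # also apply breaking major-version fixes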


Step 4: After It Runs—What If Features Are Broken?

Many people think “it runs” means we’re done.

Not quite. Running is just the beginning.

After the project started, I clicked around and found the login page was blank.

I said:

“Login page is blank. Check the Console for errors.”

Reading browser Console...

Error message:
TypeError: Cannot read property 'map' of undefined
    at UserList.js:23

Analyzing...

Problem: API response format changed.
- Old format: returns array directly [user1, user2, ...]
- New format: wrapped in object { data: [user1, user2, ...] }

Code expects array but receives object, so .map() fails.

Should I fix it?

I said: “Fix it. Also check if other places have the same issue.”

Modifying src/components/UserList.js...
- Old: users.map(user => ...)
- New: users.data.map(user => ...)
✓ Fixed

Scanning other files...
Found 3 more places with same pattern:
- src/components/OrderList.js:45
- src/components/ProductGrid.js:78
- src/pages/Dashboard.js:123

Modifying...
✓ All fixed

Executing: npm start
✓ Login page displays normally

One problem led to finding 4. AI fixed them all at once.
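
The fix above assumes every endpoint now wraps results in { data: [...] }. If you're not sure all of them were migrated, a slightly more defensive version of the same change (a sketch, not what the AI actually wrote; user.id and user.name are illustrative fields) tolerates both shapes:

// Accept both the old bare-array response and the new { data: [...] } wrapper
const list = Array.isArray(users) ? users : (users && users.data) || [];
return (
  <ul>
    {list.map(user => (
      <li key={user.id}>{user.name}</li>
    ))}
  </ul>
);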


Step 5: What If All Tests Fail?

After fixing the login page, I wanted to check for other issues.

“Run the tests.”

Executing: npm test

Result: 47 tests, 39 failed ❌

Analyzing failure reasons...

Categories:
- 28: snapshots outdated (UI changed, need snapshot update)
- 8: mocked API format wrong (same issue as before)
- 3: appear to be real bugs

Recommended order:
1. Update snapshots first (automatic, low risk)
2. Fix mock API format (I'll handle this)
3. The last 3 need you to confirm if they're expected behavior

Should I start?

I said: “Handle the first two. List the third category for me to review.”

1. Updating snapshots...
   Executing: npm test -- -u
   ✓ 28 snapshots updated

2. Fixing mock API format...
   Modifying src/__mocks__/api.js
   - Old: return [mockUser1, mockUser2]
   - New: return { data: [mockUser1, mockUser2] }
   ✓ 8 tests passing

3. Remaining 3 failed tests:

   a) test/auth.test.js:45 - "should auto-logout when token expires"
      Expected: redirect to login page
      Actual: stays on current page
      → Possibly token expiration handling logic issue

   b) test/cart.test.js:89 - "should disable buy button when stock is 0"
      Expected: button disabled
      Actual: button clickable
      → Possibly stock check timing issue

   c) test/checkout.test.js:156 - "should show error when discount code is invalid"
      Expected: shows "Invalid discount code"
      Actual: shows "Network error"
      → Possibly error handling logic issue

Should I fix these, or do you want to look first?

I said: “Leave these 3 for now. I’ll check if they’re pre-existing bugs. Continue.”

From 39 failures to 3. AI handled 36. The remaining ones need business judgment.
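
When you circle back to those three, you don't need the whole suite. react-scripts runs Jest underneath, so anything after -- is passed straight through, letting you target a single file or a single test name (paths here mirror the ones in the report above):

npm test -- test/auth.test.js            # just the auth suite
npm test -- -t "auto-logout"             # just tests whose names match the pattern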


Step 6: Let AI Update Documentation

Finally, I said:

“Based on everything we just did, update the README so the next person can get running in 10 minutes.”

It produced:

## Requirements

- Node.js 14.x (recommend using nvm)
- npm 6.x

## Quick Start

# Switch Node version
nvm use 14

# Install dependencies
npm install --legacy-peer-deps

# Set up environment variables
cp .env.example .env
# Edit .env and fill in required values

# Start dev server
npm start

## Environment Variables

| Variable | Required | Description | Example |
|----------|----------|-------------|---------|
| REACT_APP_API_URL | Yes | Backend API URL | http://localhost:3001 |
| REACT_APP_ENV | No | Environment identifier | development |

## Known Issues

1. If npm install reports peer dependency errors, add --legacy-peer-deps
2. Must use Node 14, Node 16+ has compatibility issues

## Changelog

- 2025-12-08: Fixed security vulnerabilities, upgraded react-scripts to 5.x

Complete, clear, ready for the next person.
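
One small extra worth asking for while the AI is editing docs: an .nvmrc file in the project root, so nvm use picks the right version without anyone reading the README first. Its entire contents would be:

14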


From Firefighting to Paying Down Tech Debt

At this point, you might think the story is over.

Project runs, tests fixed, docs updated. Done.

But I didn’t stop there.

I said:

“Now that the project is stable, evaluate this: if we want to upgrade React from 16 to 18, how much work is it? What are the risks?”

Scanning project...

Assessment Report:

Upgrade scope:
- React 16.14.0 → 18.2.0
- ReactDOM 16.14.0 → 18.2.0
- 12 related dependencies need updating

Code changes required:
- src/index.js: ReactDOM.render → createRoot (required)
- 17 components use componentWillMount (deprecated)
- 3 places use findDOMNode (not supported in React 18)
- Test files need updated render approach

Estimated effort:
- Auto upgrade + modifications: ~30 minutes
- Manual verification + testing: ~2 hours
- Total: half a day

Risk assessment:
- Low risk: mostly syntax updates, behavior unchanged
- Medium risk: those 3 findDOMNode places need logic rewrite
- High risk: none

Recommendation: Safe to upgrade. Should I start?

I said: “Start.”

30 minutes later:

Upgrade complete ✓

Change summary:
- React 16 → 18
- Updated index.js to createRoot API
- Rewrote 17 componentWillMount → useEffect
- Refactored 3 findDOMNode to use useRef
- Updated all test render methods
- Fixed 2 lint errors from upgrade

Executing: npm test
✓ All 47 tests passing

Executing: npm run build
✓ Production build successful

Should I update the README version info?
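
For context, the index.js change in that summary is the one mandatory piece of any React 18 migration. The new entry point looks roughly like this (a sketch; App and the 'root' element id are whatever the project already uses):

// src/index.js after the upgrade
import { createRoot } from 'react-dom/client';
import App from './App';

const root = createRoot(document.getElementById('root'));
root.render(<App />);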

This Is AI’s Real Value

Looking back at this entire process:

Original goal: Get the project running.

Actually completed:
  1. Environment setup ✓
  2. Dependency fixes ✓
  3. Security vulnerabilities eliminated ✓
  4. Feature fixes ✓
  5. Test fixes ✓
  6. React upgrade (16 → 18) ✓
  7. Documentation update ✓

These tasks, done traditionally, might take a week.

With AI, half a day.

And it’s not “barely running”—it’s better than before.

That’s the difference between firefighting and paying down debt.


More Examples of AI Taking Over Completely

The case above shows AI can do a lot. Here are a few more common scenarios:

Scenario 1: Docker Environment Setup

You: "This project has a docker-compose.yml. Get the whole environment running."

AI:
1. Reading docker-compose.yml...
2. Found .env file needed, creating...
3. Executing docker-compose up -d
4. Waiting for services to start...
5. Checking service status...
   - postgres: running
   - redis: running
   - app: running
6. ✓ All services up, app at http://localhost:8080
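
If you want to confirm that state yourself, the same checks are two commands (app is the service name from the compose file above):

docker-compose ps              # status of every service
docker-compose logs -f app     # follow the app container's logs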

Scenario 2: Python Project Environment

You: "This is a Python project. Set up virtualenv and install dependencies."

AI:
1. Checking Python version requirement... needs 3.9+
2. Creating virtualenv...
   Executing: python3 -m venv venv
3. Activating virtualenv...
   Executing: source venv/bin/activate
4. Installing dependencies...
   Executing: pip install -r requirements.txt
5. Found missing system dependency libpq-dev...
   Executing: brew install postgresql (macOS)
6. Reinstalling...
   ✓ Dependencies installed

Scenario 3: Fixing CI/CD Failures

You: "GitHub Actions is failing. Check the log and fix it."

AI:
1. Reading .github/workflows/ci.yml...
2. Reading failure log...
   Error: Node version mismatch, CI uses 18, project needs 14
3. Modifying workflow file...
   Changing node-version: 18 to node-version: 14
4. ✓ Fixed. Should I commit and push?

When NOT to Let AI Take Over

Now that we’ve covered what AI can do, let’s talk about what it can’t.

1. Things requiring permissions

  • Company VPN
  • Internal npm registry
  • API keys or database accounts that need approval

AI doesn’t know your company’s processes. Handle these yourself.

2. Zero documentation

If the project has no README and package.json is a mess, AI can only guess.

You’ll need to ask a human.

3. Whether to upgrade decisions

AI can upgrade for you, but “should we upgrade” is your call.

  • How long will this project be maintained?
  • Is the upgrade ROI worth it?
  • Does the team have time to handle upgrade issues?

These are business decisions, not technical problems.

4. Business logic judgments

After it runs, you notice weird behavior. Is it a bug or a feature?

AI doesn’t know. It hasn’t seen the PRD, heard the PM’s explanation, or absorbed the requirements context from three years ago.

These questions need a human.


Extension: Local Machine Setup Works Too

If you haven’t even set up your local dev environment, AI can help with that too.

When I get a new computer, I say:

“I’m a full-stack engineer, mainly using Node.js and Python. List the tools I need on macOS, then install them one by one.”

It will:
  1. Install Homebrew
  2. Install nvm, pyenv
  3. Configure Git (name, email, SSH key)
  4. Install VS Code with recommended extensions
  5. Set up shell (zsh, oh-my-zsh)

Throughout the whole process, you just confirm prompts and enter passwords.
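
The underlying commands are mundane. A condensed sketch of what steps 2 through 4 boil down to, assuming Homebrew is already installed and with placeholder identity values:

brew install nvm pyenv
brew install --cask visual-studio-code
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
ssh-keygen -t ed25519 -C "you@example.com"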


Extension: Deployment Environments Too

After the project runs locally, deployment environments work the same way.

You: "This project needs to deploy to AWS. Write a GitHub Actions CI/CD pipeline."

You: "Optimize this Dockerfile. Build takes 5 minutes, way too long."

You: "Set up nginx reverse proxy. Route /api to the backend service."

AI excels at these standardized DevOps tasks.


It’s Not Just About Saving Time

Using AI for dev environment setup looks like it saves time.

But the real value is:

1. Consistency

AI follows standard procedures. No more “my machine is special” variations.

2. Documentation

Every time you set up, have AI update the docs. Knowledge stops living in one person’s head.

3. Lower barriers

New hire onboarding goes from “ask colleagues + stumble for half a day” to “chat with AI for 10 minutes.”

4. Security

AI proactively handles security vulnerabilities. No more “let’s just get it running first” and forgetting about them.

This is AI’s real value—not replacing you, but freeing you from repetitive grunt work.


Next Time You Clone a Project

Try this flow:

  1. Open Claude Code: claude
  2. Say: “Read this project and tell me how to run it”
  3. When you hit errors, just paste them
  4. Let it handle security vulnerabilities
  5. Finally, ask it to update the README

10 minutes. Done.

Then you have time for what actually matters—writing code, not debugging environments.
