🔥 PCIP Framework: How I Fixed AI Coding Assistants with Parenting Psychology
TL;DR
Got tired of AI coding assistants being absolute garbage at understanding context, so I built a framework based on child psychology that actually works.
3-Line Summary:
- SOTA AI coding tools suck balls when given vague instructions - they ignore project context and write partial, broken, or structurally insane code
- PCIP Framework analyzes project architecture and domain context during task execution, selects appropriate experts (parents), and guides optimal code implementation
- This connects even abstract user requests to stable, consistent coding results, with real-time integration of external knowledge (RAG/MCP tools when supported)
🚨 Part 1: Why I Built This (Story Time)
Real Disasters I Actually Experienced
The Great Caching Catastrophe
- Me: “Optimize performance, this is slow as hell”
- Claude: Spits out Redis caching code
- Me: “Holy shit, this is actually fast now! Awesome!”
- 1 week later: Users flooding support with “I updated my profile but it still shows old info”
- Root cause: Zero cache invalidation logic
- Result: Public apology to users and nuking the entire cache 🤡
That’s when it hit me: “These AIs can code, but they completely forget about CONTEXT”
The Manual Labor I Had To Do
Me: "Build a payment system"
AI: "Sure! I'll make a basic payment form!"
Me: "Wait wait wait... you need to consider security..."
Me: "PCI DSS compliance too..."
Me: "And refund handling..."
Me: "And rollback on failures..."
Me: "And fraud prevention..."
Me: "And proper logging..."
Me: "And monitoring..."
Me: "And error handling..."
AI: "Oh, let me rebuild this from scratch ^^;"
Me: "FML..."
THIS. HAPPENED. EVERY. FUCKING. TIME.
🧠 Part 2: Lightbulb Moment from Parenting Psychology
One night while working late, I had YouTube on and this parenting show with Dr. Oh Eun-young came on…
The Scene That Blew My Mind
Situation:
- Mom: “Doctor, my kid only plays games all day. Games games games…”
- Mom: “He won’t listen when I tell him to stop. What should I do?”
Typical Parent Response:
- “Stop playing games!” (commanding)
- “I’m confiscating your console!” (force)
- “No games = no allowance!” (threats)
Dr. Oh’s Approach:
- “You can’t just tell kids ‘stop playing games’”
- “First, you need to understand WHY the child plays games”
- “Understand what need they’re trying to fulfill through gaming”
- “Then provide alternatives and guide them with long-term perspective”
That moment in my brain:
Oh shit... this is it!
It's the exact same with AI!
- Just saying "code this" (X)
- Understand context, provide guidance (O)
AI needs a 'good parent' role too!
Dr. Oh’s Good Parenting Principles
- Situation Analysis: “First understand why the child behaves this way”
- Risk Assessment: “Think about what consequences this behavior might lead to”
- Context Consideration: “Consider our family situation, child’s personality, long-term goals”
- Provide Guidance: “Not just commands, but direction with reasoning”
- Continuous Monitoring: “Observe results and continuously adjust”
Apply this to AI coding:
- Situation Analysis: “What type of request is this and how complex?”
- Risk Assessment: “What’s the risk level of this task?”
- Context Consideration: “Consider project characteristics, tech stack, user base”
- Provide Guidance: “Which expert should approach this from what perspective?”
- Continuous Monitoring: “Verify if the output meets standards”
💡 Part 3: The PCIP System I Designed
Old Way’s Limitations
User → AI → Output
"Build payment system" → "Sure!" → Dangerously half-assed code
PCIP Way
User → PM(Context Analysis) → Expert(Specialist Mentoring) → Child(Precise Execution)
"Build payment system" → "High risk, deploying Payment Expert + Security Expert" → "Build with PCI compliance like this" → Safe, high-quality code
🏗️ Part 4: Detailed Design Intent Analysis
4-1. Why 3 Layers Specifically
Why did I split it into 3 layers?
Trial and Error:
- 1 layer (the status quo): User → AI → Output = context ignored, quality garbage
- 2-layer attempt: User → Manager → Execution = the manager does too much, lacks expertise
- 4-layer attempt: too complex, impractical, slow
3 Layers is optimal because:
- PM (Meta): Big picture situation analysis (CEO role)
- Expert (Parent): Domain expertise for specific guidance (Team Lead role)
- Child (Execution): Precise execution following guidance (Developer role)
Same structure as real company org charts = proven system
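As a rough sketch of how the responsibilities split (illustrative wording, not a quote from the actual prompt):

```python
# Illustrative split of responsibilities across the three layers.
# The real wording lives in SystemPromptEN.md; these strings are paraphrased.

LAYERS = {
    "PM (Meta)": [
        "analyze the conversation and project context",
        "assess the risk level of the task",
        "select and coordinate experts",
    ],
    "Expert (Parent)": [
        "translate the task into domain-specific requirements",
        "set quality standards (security, performance, UX, ...)",
    ],
    "Child (Execution)": [
        "implement exactly as guided",
        "report back for verification",
    ],
}

for layer, duties in LAYERS.items():
    print(layer, *[f"  - {d}" for d in duties], sep="\n")
```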
4-2. Why “15-year experienced PM” Specifically
Initial failure:
PM role: "manager"
Result: AI acted vaguely, no judgment criteria, inconsistent as hell
After improvement:
PM role: "15-year experienced project manager"
Specific role: "conversation analysis, expert selection, resource coordination"
Result: Clear judgment criteria, consistent decision-making
Analogy:
- “Act” (vague) vs “Act like a divorced father in his mid-40s” (specific)
- More specific = AI performs role more accurately
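To show the difference in concrete terms, here are the two role definitions side by side. These are paraphrased for illustration, not the exact text from SystemPromptEN.md:

```python
# Paraphrased role definitions -- not quoted from the actual prompt.

VAGUE_ROLE = "You are a manager."

SPECIFIC_ROLE = (
    "You are a project manager with 15 years of experience. "
    "Your responsibilities: analyze the conversation, select the appropriate "
    "experts, coordinate resources, and verify the final output against the plan."
)
```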
4-3. The Science(?) Behind the 5-Level Risk System
Level 5 (Very Low) - Changing CSS colors:
- Real experience: AI starts with a security review just to change a color 🤦‍♂️
- Solution: Immediate execution (Silent Mode)
Level 1 (Critical) - Payment systems:
- Real experience: AI builds it quickly and calls it done → security disaster
- Solution: Mandatory detailed planning then approval (Explicit Mode)
Why 5 levels? Why an odd number?
- 3 levels: Too rough, inaccurate distinctions
- 7 levels: Too complex, impractical
- 5 levels: Appropriately granular yet practical
- Odd number: Middle value (Level 3) exists → handles ambiguous cases
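Here's roughly what the scale looks like written down as code. The level names are mine, and the examples for levels 2 and 4 are guesses based on my own projects:

```python
from enum import IntEnum

# Hypothetical sketch of the 5-level risk scale. Lower number = higher risk.

class Risk(IntEnum):
    CRITICAL = 1    # payments, auth, data migrations -- mistakes are expensive
    HIGH = 2        # core business logic, public APIs
    MEDIUM = 3      # the middle value an odd-numbered scale gives you for ambiguous cases
    LOW = 4         # internal refactors, non-critical features
    VERY_LOW = 5    # CSS colors, copy changes

print(Risk.CRITICAL < Risk.VERY_LOW)   # True: 1 outranks 5 in urgency
```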
4-4. Dynamic Expert Selection System
Why not pre-define, but choose in real-time?
Fixed approach limitations:
User: "Build an e-commerce site"
System: "Deploying web developer"
User: "Oh and it needs payments too"
System: "Uh... need payment expert too... start over?"
Dynamic approach advantages:
User: "Build an e-commerce site"
PM: "Deploying web developer"
User: "Need payments too"
PM: "Got it. Adding Payment Expert + Security Expert"
4-5. Why Split Silent Mode vs Explicit Mode
Real usage pattern analysis:
Silent Mode (Level 5, 4):
User: "Change button color to red"
My mind: "This is simple, just do it quickly..."
Explicit Mode (Level 1, 2):
User: "Build payment system"
My mind: "This is big, I want to see the plan first..."
Matches natural human expectations:
- Simple things: quickly
- Complex things: carefully
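In code form, the split might look like this. I'm assuming, as above, that levels 4-5 map to Silent Mode and levels 1-2 to Explicit Mode; how level 3 is handled below is my own assumption for the sketch:

```python
# Hypothetical sketch of the two modes. Level-3 handling is an assumption.

def handle(request: str, risk_level: int) -> str:
    if risk_level >= 4:
        # Silent Mode: just do it, brief note afterwards
        return f"Done: {request} (one-line summary of what changed)"
    if risk_level <= 2:
        # Explicit Mode: plan first, wait for approval before touching code
        return (f"Plan for '{request}': steps, affected files, risks, rollback.\n"
                "Proceed? (waiting for approval)")
    return f"'{request}' is ambiguous (level 3): asking a clarifying question first"

print(handle("Change button color to red", risk_level=5))
print(handle("Build payment system", risk_level=1))
```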
4-6. Natural Language Learning System
Old problem:
Me: "Build login functionality"
Me: "Oh and since this is security-related, consider security expert perspective"
Me: "UI should be pretty too, so consider UX"
Me: "Need mobile support too..."
PCIP solution:
Me: "Build login functionality. I'm worried about security but UI should be pretty"
AI: *analyzes conversation flow* → automatically deploys Backend + Security + UX experts
Just like a real team leader:
- Team lead listens to project discussion and assigns appropriate team members
- Natural conversation reveals intent
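The difference from 4-4 is that here a single natural sentence carries several assignments at once. A toy version of the extraction (real PCIP does this with the model's own language understanding, not a lookup table):

```python
# Toy sketch: map concerns expressed in one sentence to expert perspectives.

CONCERN_HINTS = {
    "login": "Backend",
    "security": "Security",
    "worried": "Security",
    "ui": "UX",
    "pretty": "UX",
    "mobile": "Mobile",
}

def experts_from(sentence: str) -> list[str]:
    words = set(sentence.lower().replace(".", " ").replace(",", " ").split())
    return sorted({expert for hint, expert in CONCERN_HINTS.items() if hint in words})

print(experts_from("Build login functionality. I'm worried about security but UI should be pretty"))
# ['Backend', 'Security', 'UX']
```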
4-7. External Knowledge Integration
Why did I add this feature?
Real problems I faced:
- Asked about the latest Next.js 15 features; the AI answered based on Next.js 13
- Asked about PCI DSS compliance and got inaccurate information
- Asked about a specific library's latest API and got wrong answers
Solution:
AI's 3-level confidence check:
- High: My knowledge is sufficient → direct answer
- Medium: Reference external sources for verification → verified answer
- Low: "Let me look up specialized resources" → accurate information
“If you don’t know, look it up” system:
- “Let me check that for you” is way better than wrong information
- Can verify latest info and accurate regulations
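Sketched as a decision function, where the `lookup` callback stands in for whatever external source is actually wired up (web search, docs via RAG, an MCP tool, etc.):

```python
# Hypothetical sketch of the 3-level confidence check.

def answer(question: str, confidence: str, lookup=None) -> str:
    if confidence == "high":
        return f"Answering '{question}' from existing knowledge."
    if confidence == "medium" and lookup is not None:
        source = lookup(question)
        return f"Answering '{question}', cross-checked against {source}."
    # Low confidence (or no external source): say so instead of guessing.
    return f"Not confident about '{question}'. Checking specialized resources before answering."

print(answer("What changed in Next.js 15 caching defaults?", confidence="low"))
```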
4-8. Why the Template System Is Necessary
Problems without templates:
Same question, but:
- Sometimes it just throws code
- Sometimes only a long explanation
- Sometimes a completely different format
→ Unpredictable, inconsistent
After applying templates:
Silent Mode: Always [Analysis] → [Execution] → [Brief explanation] order
Explicit Mode: Always [Detailed analysis] → [Plan] → [Approval request] → [Execution] order
Same as real company work:
- Same reason companies have report templates and meeting-minute templates
- Unified format lets you focus on content
- Ensures quality consistency
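For illustration, here are the two output skeletons as strings. The real templates are written as instructions inside the prompt, and the section names below are approximate:

```python
# Approximate output skeletons; the real templates live in the prompt as prose.

SILENT_TEMPLATE = """\
[Analysis] {analysis}
[Execution] {execution}
[Note] {brief_note}
"""

EXPLICIT_TEMPLATE = """\
[Detailed analysis] {analysis}
[Plan] {plan}
[Approval request] Proceed? (yes/no)
[Execution] (after approval)
"""

print(SILENT_TEMPLATE.format(
    analysis="simple style change, risk level 5",
    execution="updated Button.css",
    brief_note="primary button is now red",
))
```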
4-9. Triple Quality Verification System
Without verification system:
AI makes code → I just trust and use → find problems later
"Why did it build it like this?" → fix again → time wasted
Triple verification system:
1. Child: Syntactic correctness, basic functionality check
2. Parent: Domain-specific quality standards (security, performance, UX, etc.)
3. PM: Overall project consistency, architecture alignment check
Real effect:
- Proactive problem detection and fixing
- Dramatically improved final output reliability
- Eliminated “can I trust this?” anxiety
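As a sketch, the three passes can be chained like this. The individual checks are trivial stand-ins for the actual domain standards each layer applies:

```python
# Hypothetical sketch of the three verification passes. Empty list = pass.

def child_check(code: str) -> list[str]:
    # syntactic correctness / basic functionality (trivially simplified here)
    return [] if code.strip() else ["empty output"]

def parent_check(code: str, experts: list[str]) -> list[str]:
    # each expert applies its own domain standards
    issues = []
    if "Security" in experts and "password" in code and "hash" not in code:
        issues.append("Security: password handled without hashing")
    return issues

def pm_check(code: str, conventions: list[str]) -> list[str]:
    # overall consistency with the rest of the project
    return [f"PM: does not follow project convention '{c}'" for c in conventions if c not in code]

def verify(code: str, experts: list[str], conventions: list[str]) -> list[str]:
    return child_check(code) + parent_check(code, experts) + pm_check(code, conventions)

print(verify("save(password)", experts=["Security"], conventions=["logging"]))
```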
📊 Part 5: Before & After Comparison (Honest Assessment)
Development Process Comparison
Before (Traditional AI):
Request → Quick implementation → Find problems → Fix → More problems → More fixes → ...
1-hour task eventually becomes 1 week
After (PCIP):
Request → Analysis → Planning → Proper implementation in one go → Done
A bit slower at first, but much faster overall
Real Usage Experience
Pros:
- ✅ Don’t need detailed explanations every time
- ✅ Expert perspectives automatically applied
- ✅ Code quality significantly improved
- ✅ Rarely have problems later
- ✅ Project consistency maintained
Cons:
- ❌ Even simple tasks are slightly slower (minimized with Silent Mode)
- ❌ The long prompt looks complex at first
- ❌ Not perfect for every situation (still AI after all)
🎯 Part 6: Practical Application Guide
Recommended for:
- Developers working on complex projects
- People who care about code quality
- Anyone tired of explaining every detail to AI
- Projects that need long-term maintenance
Not recommended for:
- “Hello World” level stuff only
- One-time prototypes
- Quick tests for disposable code
Real usage tips:
- Start with simple tasks → Experience Silent Mode
- Gradually expand to complex tasks → Experience Explicit Mode
- Talk naturally → “I’m worried about security”, “Performance is important”, etc.
- Review the output → AI isn’t perfect, final check is essential
💡 Conclusion: Overall Design Philosophy
“Make AI not just a tool, but a skilled team member”
- Dr. Oh’s parenting method: Situation analysis → Context understanding → Provide guidance
- Real dev team organization: PM → Team Lead → Developer structure
- My painful experience: Learned what to consider the hard way
- Human-friendly design: Natural conversation, predictable results
= PCIP Framework was born
P.S.
- Thanks to Dr. Oh Eun-young, my dev life got easier lol
- If any developers have similar experiences, please share your thoughts
- If you actually try this and give feedback, I’d be grateful!
Usage
Apply the SystemPromptEN.md prompt as the system prompt (instructions) and start coding.
In Cursor, use User Rules or Project Rules. If it seems like the AI doesn't get it, just paste the prompt directly and tell it to use it as the system prompt.
Obviously I use Cursor, so the example is for Cursor too.
Built by a developer who got tired of AI assistants that code like junior developers with ADHD