DevTestudinidae


🔥 PCIP Framework: How I Fixed AI Coding Assistants with Parenting Psychology

TL;DR

Got tired of AI coding assistants being absolute garbage at understanding context, so I built a framework based on child psychology that actually works.

3-Line Summary:


🚨 Part 1: Why I Built This (Story Time)

Real Disasters I Actually Experienced

The Great Caching Catastrophe


That’s when it hit me: “These AIs can code, but they completely forget about CONTEXT”

The Manual Labor I Had To Do

Me: "Build a payment system"
AI: "Sure! I'll make a basic payment form!"

Me: "Wait wait wait... you need to consider security..."
Me: "PCI DSS compliance too..."  
Me: "And refund handling..."
Me: "And rollback on failures..."
Me: "And fraud prevention..."
Me: "And proper logging..."
Me: "And monitoring..."
Me: "And error handling..."

AI: "Oh, let me rebuild this from scratch ^^;"

Me: "FML..."

THIS. HAPPENED. EVERY. FUCKING. TIME.


🧠 Part 2: Lightbulb Moment from Parenting Psychology

One night while working late, I had YouTube playing in the background when a parenting show with Dr. Oh Eun-young came on…

The Scene That Blew My Mind

Situation:

Typical Parent Response:

Dr. Oh’s Approach:

That moment in my brain:

Oh shit... this is it!

It's the exact same with AI!
- Just saying "code this" → wrong
- Understanding the context and providing guidance → right

AI needs a 'good parent' role too!

Dr. Oh’s Good Parenting Principles

  1. Situation Analysis: “First understand why the child behaves this way”
  2. Risk Assessment: “Think about what consequences this behavior might lead to”
  3. Context Consideration: “Consider our family situation, child’s personality, long-term goals”
  4. Provide Guidance: “Not just commands, but direction with reasoning”
  5. Continuous Monitoring: “Observe results and continuously adjust”

Apply this to AI coding:

  1. Situation Analysis: “What type of request is this and how complex?”
  2. Risk Assessment: “What’s the risk level of this task?”
  3. Context Consideration: “Consider project characteristics, tech stack, user base”
  4. Provide Guidance: “Which expert should approach this from what perspective?”
  5. Continuous Monitoring: “Verify if the output meets standards”

💡 Part 3: The PCIP System I Designed

Old Way’s Limitations

User → AI → Output
"Build payment system" → "Sure!" → Dangerously half-assed code

PCIP Way

User → PM(Context Analysis) → Expert(Specialist Mentoring) → Child(Precise Execution)
"Build payment system" → "High risk, deploying Payment Expert + Security Expert" → "Build with PCI compliance like this" → Safe, high-quality code
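A rough sketch of that three-layer flow (purely illustrative: `pm_analyze`, `expert_mentor`, and `child_execute` are hypothetical names, and keyword matching stands in for the real LLM-driven analysis):

```python
# Toy sketch of the PCIP flow: PM (context analysis) -> Expert (mentoring)
# -> Child (precise execution).
def pm_analyze(request: str) -> dict:
    """PM layer: assess risk and pick experts."""
    risky = any(w in request.lower() for w in ("payment", "auth", "billing"))
    return {
        "risk_level": 1 if risky else 5,  # 1 = Critical, 5 = Very Low
        "experts": ["Payment Expert", "Security Expert"] if risky else ["Web Developer"],
    }

def expert_mentor(plan: dict) -> list:
    """Expert layer: turn the PM's plan into concrete domain guidance."""
    return [f"{e}: apply your domain standards (e.g. PCI DSS)" for e in plan["experts"]]

def child_execute(guidance: list) -> str:
    """Child layer: execute precisely under that guidance."""
    return "implemented under: " + "; ".join(guidance)

plan = pm_analyze("Build payment system")
result = child_execute(expert_mentor(plan))
```

The point of the shape: the request never goes straight to the code-writing layer; it always passes through context analysis and mentoring first.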

🏗️ Part 4: Detailed Design Intent Analysis

4-1. Why 3 Layers Specifically

Why did I split it into 3 layers?

Trial and Error:

3 layers are optimal because:

It's the same structure as a real company org chart, i.e. a system that's already proven.

4-2. Why “15-year experienced PM” Specifically

Initial failure:

PM role: "manager"
Result: AI acted vaguely, no judgment criteria, inconsistent as hell

After improvement:

PM role: "15-year experienced project manager"
Specific role: "conversation analysis, expert selection, resource coordination"
Result: Clear judgment criteria, consistent decision-making
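The difference is easy to see side by side (hypothetical prompt strings that paraphrase the idea, not the actual contents of the framework's prompt file):

```python
# A vague role vs. a specific role with enumerated responsibilities.
VAGUE_PM_ROLE = "You are a manager."

SPECIFIC_PM_ROLE = (
    "You are a project manager with 15 years of experience. "
    "Your responsibilities: (1) analyze the conversation flow, "
    "(2) select the right experts for the task, "
    "(3) coordinate resources between them. "
    "State your risk assessment before delegating."
)
```

Concrete responsibilities give the model judgment criteria to anchor on; "manager" alone gives it nothing.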

Analogy:

4-3. The Science(?) Behind the 5-Level Risk System

Level 5 (Very Low) - Changing CSS colors:

Level 1 (Critical) - Payment systems:

Why 5 levels? Why an odd number?
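The scale can be written down as a simple enum (an illustrative sketch; the keyword heuristic below stands in for the PM's actual judgment):

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    CRITICAL = 1   # payments, auth, data migrations
    HIGH = 2
    MEDIUM = 3     # the odd count leaves a genuine middle value
    LOW = 4
    VERY_LOW = 5   # cosmetic changes like CSS colors

def classify(request: str) -> RiskLevel:
    """Toy classifier: real risk assessment happens in the prompt, not here."""
    text = request.lower()
    if any(w in text for w in ("payment", "auth", "migration")):
        return RiskLevel.CRITICAL
    if any(w in text for w in ("css", "color", "typo")):
        return RiskLevel.VERY_LOW
    return RiskLevel.MEDIUM  # default to the middle of the scale
```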

4-4. Dynamic Expert Selection System

Why not pre-define, but choose in real-time?

Fixed approach limitations:

User: "Build an e-commerce site"
System: "Deploying web developer"
User: "Oh and it needs payments too"
System: "Uh... need payment expert too... start over?"

Dynamic approach advantages:

User: "Build an e-commerce site" 
PM: "Deploying web developer"
User: "Need payments too"
PM: "Got it. Adding Payment Expert + Security Expert"
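The mechanism behind that exchange is just a mutable roster (hypothetical sketch; `PM.handle` and the keyword cues are mine, not the framework's):

```python
# The PM keeps a mutable expert roster and extends it as new requirements
# surface mid-conversation, instead of starting over.
class PM:
    def __init__(self):
        self.experts = []

    def handle(self, message):
        text = message.lower()
        if "e-commerce" in text or "site" in text:
            self._add("Web Developer")
        if "payment" in text:
            self._add("Payment Expert")
            self._add("Security Expert")
        return self.experts

    def _add(self, expert):
        # keep deployment order, avoid duplicates
        if expert not in self.experts:
            self.experts.append(expert)

pm = PM()
pm.handle("Build an e-commerce site")  # -> ['Web Developer']
pm.handle("Need payments too")         # roster grows, nothing is rebuilt
```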

4-5. Why Split Silent Mode vs Explicit Mode

Real usage pattern analysis:

Silent Mode (Level 5, 4):

User: "Change button color to red"
My mind: "This is simple, just do it quickly..."

Explicit Mode (Level 1, 2):

User: "Build payment system"  
My mind: "This is big, I want to see the plan first..."

Matches natural human expectations:
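As a routing rule this is essentially one line (illustrative; the text above only specifies levels 1-2 and 4-5, so sending level 3 to Explicit Mode is my assumption, made to err on the safe side):

```python
def choose_mode(risk_level: int) -> str:
    """risk_level: 1 (Critical) .. 5 (Very Low).
    Levels 4-5 just run; levels 1-2 show a plan and wait for approval.
    Level 3 isn't specified above, so it's routed to explicit here."""
    return "silent" if risk_level >= 4 else "explicit"
```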

4-6. Natural Language Learning System

Old problem:

Me: "Build login functionality"
Me: "Oh and since this is security-related, consider security expert perspective"
Me: "UI should be pretty too, so consider UX"
Me: "Need mobile support too..."

PCIP solution:

Me: "Build login functionality. I'm worried about security but UI should be pretty"
AI: *analyzes conversation flow* → automatically deploys Backend + Security + UX experts

Just like a real team leader:
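A toy version of that cue-mining (hypothetical: the cue table and `infer_experts` are mine, and a real implementation would use the model's language understanding rather than substring matching):

```python
# Map conversational cues to the experts they imply.
CUES = {
    "security": "Security Expert",
    "login": "Backend Expert",
    "pretty": "UX Expert",
    "ui": "UX Expert",
    "mobile": "Mobile Expert",
}

def infer_experts(message: str) -> set:
    """Scan one natural sentence and deploy every implied expert at once."""
    text = message.lower()
    return {expert for cue, expert in CUES.items() if cue in text}

infer_experts("Build login functionality. I'm worried about security but UI should be pretty")
# -> {'Backend Expert', 'Security Expert', 'UX Expert'} (a set, order varies)
```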

4-7. External Knowledge Integration

Why did I add this feature?

Real problems I faced:

Solution:

AI's 3-level confidence check:
- High: My knowledge is sufficient → direct answer
- Medium: Reference external sources for verification → verified answer  
- Low: "Let me look up specialized resources" → accurate information

“If you don’t know, look it up” system:
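Sketched as a gate (hypothetical: real confidence would come from the model's self-assessment, and the 0.8/0.5 thresholds are made-up numbers for illustration):

```python
def route_answer(confidence: float) -> str:
    """Three-level confidence gate: answer directly, verify, or look it up."""
    if confidence >= 0.8:   # High: internal knowledge is sufficient
        return "direct answer"
    if confidence >= 0.5:   # Medium: cross-check external sources first
        return "verified answer"
    return "look up specialized resources"  # Low: don't guess
```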

4-8. Why a Template System Is Necessary

Problems without templates:

Same question but:
- Sometimes just throws code
- Sometimes only long explanations
- Sometimes completely different format
→ Unpredictable, inconsistent

After applying templates:

Silent Mode: Always [Analysis] → [Execution] → [Brief explanation] order
Explicit Mode: Always [Detailed analysis] → [Plan] → [Approval request] → [Execution] order

Same as real company work:
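In code, a template is nothing fancier than a fixed format string (illustrative names; the framework's templates live in the prompt itself):

```python
SILENT_TEMPLATE = (
    "[Analysis] {analysis}\n"
    "[Execution] {execution}\n"
    "[Note] {note}"
)

EXPLICIT_TEMPLATE = (
    "[Detailed analysis] {analysis}\n"
    "[Plan] {plan}\n"
    "[Approval request] Proceed? (y/n)\n"
    "[Execution] runs only after approval"
)

reply = SILENT_TEMPLATE.format(
    analysis="trivial style change",
    execution="button color set to red",
    note="no side effects",
)
```

Whatever the task, the output always arrives in the same order, which is exactly what makes it predictable.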

4-9. Triple Quality Verification System

Without verification system:

AI makes code → I just trust and use → find problems later
"Why did it build it like this?" → fix again → time wasted

Triple verification system:

1. Child: Syntactic correctness, basic functionality check
2. Parent: Domain-specific quality standards (security, performance, UX, etc.)
3. PM: Overall project consistency, architecture alignment check
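A minimal sketch of chaining the three checks (toy rules only: the real quality standards are expressed in the prompt, not hard-coded like this):

```python
def child_check(code: str) -> bool:
    """Stage 1 (Child): is the code even syntactically valid?"""
    try:
        compile(code, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

def parent_check(code: str) -> bool:
    """Stage 2 (Parent): toy domain rule, e.g. no hard-coded secrets."""
    return "password =" not in code

def pm_check(code: str) -> bool:
    """Stage 3 (PM): toy architecture rule, e.g. the module stays small."""
    return code.count("\n") < 500

def verify(code: str) -> bool:
    """Output ships only if all three layers sign off, in order."""
    return child_check(code) and parent_check(code) and pm_check(code)

verify("def pay(amount): return amount > 0")  # passes all three stages
verify("def pay(amount) return amount")       # fails at the Child stage
```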

Real effect:


📊 Part 5: Before & After Comparison (Honest Assessment)

Development Process Comparison

Before (Traditional AI):

Request → Quick implementation → Find problems → Fix → More problems → More fixes → ...
1-hour task eventually becomes 1 week

After (PCIP):

Request → Analysis → Planning → Proper implementation in one go → Done
A bit slower at first, but much faster overall

Real Usage Experience

Pros:

Cons:


🎯 Part 6: Practical Application Guide

Real usage tips:

  1. Start with simple tasks → Experience Silent Mode
  2. Gradually expand to complex tasks → Experience Explicit Mode
  3. Talk naturally → “I’m worried about security”, “Performance is important”, etc.
  4. Review the output → AI isn’t perfect, final check is essential

💡 Conclusion: Overall Design Philosophy

“Make AI not just a tool, but a skilled team member”

= PCIP Framework was born


P.S.

Usage

Apply the prompt in SystemPromptEN.md as the system prompt (instructions) and start coding.

In Cursor, put it under User Rules or Project Rules. If the AI doesn't seem to get it, just paste the prompt directly into the chat and tell it to treat it as the system prompt.

Obviously I use Cursor, so the example is for Cursor too.


Built by a developer who got tired of AI assistants that code like junior developers with ADHD

GitHub Repository