The Twilight of Low-Code Platforms: Why Claude Agent SDK Will Make Dify History
"The best tool is the one that aligns with human thinking." — A developer who escaped from Dify
I. Opening Scene: How Would You Assign a Task?
Imagine you're a restaurant owner who needs to teach a new employee how to handle customer complaints. What would you do?
Approach A: Draw a flowchart
[Start] → [Listen to customer] → [Is the food cold?]
    ↓ Yes: [Reheat it]
    ↓ No: [Is the service slow?]
        ↓ Yes: [Free dish + apologize]
        ↓ No: [Is it an attitude problem?]
            ↓ Yes: [Change the server]
            ↓ No: [...]
Then tell the employee: "Follow this flowchart exactly. Don't miss any node."
Approach B: Just tell them
"Hey, handling complaints is simple: First listen to what the problem is. If the food is cold, reheat it. If service is slow, offer a free dish and apologize. If it's an attitude problem, have someone else handle it. Bottom line: make the customer happy. Use your judgment for the specifics."
Which would you choose?
I bet 99% of people would choose Approach B.
Because this is how humans have transferred knowledge for thousands of years: describe intent and principles in language, let the other person understand and execute flexibly.
Nobody draws a 100-node flowchart to teach an apprentice.
But ironically, when we started using AI, the entire industry chose Approach A.
This is the fundamental problem with Dify and other low-code platforms.
II. First Principles of LLMs: Language Intelligence, Not Flowchart Executor
Let's think from first principles: What is a large language model?
The essence of an LLM is: an intelligent agent that understands and generates natural language.
- It's trained on trillions of natural language texts
- It understands human intent, context, and implicit logic
- Its core capability is language comprehension and reasoning
What does this mean?
It means LLMs naturally understand "human language".
When you say "handle customer complaints," it understands what that means. When you say "if the food is cold, reheat it," it understands the logic. When you say "make the customer happy," it understands the goal.
It doesn't need you to convert these into a flowchart.
Just like you wouldn't assign a task to a Chinese speaker by first translating to English, then to a flowchart, then have them execute it.
You just speak Chinese directly.
III. The Primitive Human Behavior Pattern: Natural Language for Process Transfer
How have humans transferred work processes for thousands of years?
Master Teaching Apprentice
"Apprentice, here's how to do woodwork: First measure, then draw lines, cut along the lines with a saw, then sand smooth. Remember, cut steadily, don't go crooked."
No flowchart, no nodes, just natural language description.
Teacher Teaching Student
"An essay needs an opening, body, and conclusion. The opening should hook readers, the body should have logic, the conclusion should tie back to the theme. How you write it depends on your topic."
No fixed template, just principles and examples.
Boss Assigning Work
"Go handle that customer complaint. First understand the situation, pay if you need to pay, apologize if you need to apologize, just don't let it blow up."
No SOP document (okay, modern corporations have them, but that's another topic), just goals and direction.
This is the primitive human behavior pattern: describe intent, principles, and processes in natural language, and let the executor understand and handle them flexibly.
Why is Natural Language More Natural?
Because natural language has several irreplaceable advantages:
- Preserves context: "Pay if you need to pay" implies "within reason"
- Allows flexibility: "Use your judgment" gives execution space
- Conveys intent: Not just steps, but "why do it this way"
- Easy to understand: Anyone can understand, no special language to learn
When did flowcharts appear?
After the Industrial Revolution.
Why? Because machines don't understand human language, they can only execute fixed instructions.
So we invented flowcharts, SOPs, standardized operations—stripping away human flexibility, making processes mechanical.
But LLMs aren't machines, they're intelligent agents.
They understand human language, so why constrain them with Industrial Age flowcharts?
IV. Dify's Fundamental Problem: Using Graphics to Fight Language Intelligence
Now we understand:
- LLMs are language intelligence
- Humans use natural language to transfer processes
So what is Dify doing?
Dify requires you to translate natural language intent into graphical nodes and connections.
For example:
Your original intent (natural language):
"Handle order complaints: First check order info, analyze complaint type, if it's a delay issue calculate compensation and refund, if it's a quality issue contact the supplier, finally log the resolution."
In Dify, you need to:
- Drag a "Database Query" node, configure query parameters
- Drag an "AI Analysis" node, configure prompt
- Drag a "Conditional" node, set judgment rules
- Drag two branches to handle delay and quality issues separately
- Drag 3-5 more nodes in each branch
- Configure connections and variable passing between all nodes
- Test, debug, modify, test again...
You end up with a 20-node flowchart.
What's the Problem?
- Added translation cost: Natural language → Graphical nodes (completely unnecessary)
- Lost context: Nodes can only express "what to do," hard to express "why"
- Limited flexibility: Must pre-design all branches, AI can't decide flexibly
- Maintenance nightmare: Once nodes multiply, changing one thing means checking ten connections
More critically: This completely fights against the LLM's core capability.
LLMs naturally understand processes described in natural language. Yet you force them to understand graphical abstractions.
It's like assigning a task to a Chinese speaker, but first translating Chinese to Martian, then having them execute.
Why the extra step?
V. The Essence of Skills: Describing Processes in Natural Language
Now let's see how Claude Agent SDK Skills work.
Skill = A .md File
That's right, just a regular Markdown file.
handle-order-complaint.skill.md:
# Handle Order Complaint Process
## Goal
Properly handle customer order complaints, ensure customer satisfaction, and log the process.
## Available Tools (MCP)
- database.query: Query order information
- ai.analyze_sentiment: Analyze complaint type
- payment.calculate_compensation: Calculate compensation amount
- payment.apply_refund: Execute refund
- supplier.contact: Contact supplier
- logging.log_event: Log event
## Process
When receiving an order complaint:
1. Use database.query to get detailed order information
2. Use ai.analyze_sentiment to analyze complaint type (delay, quality, service, etc.)
3. Take appropriate action based on complaint type:
   - If it's a delay issue:
     - Use payment.calculate_compensation to calculate the compensation
     - Use payment.apply_refund to execute the refund
     - Apologize to the customer and explain the compensation plan
   - If it's a quality issue:
     - Use payment.calculate_compensation to determine the refund amount
     - Use payment.apply_refund to execute the refund
     - Use supplier.contact to report the issue to the supplier
   - If it's a service attitude issue:
     - Apologize to the customer
     - If the customer is still unsatisfied, transfer to human support
4. Use logging.log_event to record the entire process (for compliance)
5. Summarize the resolution and notify the customer
## Notes
- Always maintain courtesy and empathy
- Compensation must be reasonable, not exceeding 50% of order value
- Major complaints (over $1000) must be escalated to senior support
- All operations must be logged for compliance
That's it.
This is what Dify needs 20 nodes to implement.
But here there's only natural language description.
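If you're wondering how that file actually reaches the agent: here's a minimal sketch using the Python claude-agent-sdk. The query() entry point and ClaudeAgentOptions come from the SDK; the file path, the system-prompt wiring, and the (empty) MCP configuration are placeholders for illustration, not the one true way to load a Skill.

```python
import asyncio
from pathlib import Path

from claude_agent_sdk import ClaudeAgentOptions, query


async def main() -> None:
    # The Skill is just a Markdown file on disk (path is hypothetical).
    skill = Path("skills/handle-order-complaint.skill.md").read_text(encoding="utf-8")

    options = ClaudeAgentOptions(
        # The natural-language process simply becomes part of the agent's instructions.
        system_prompt=f"Follow this process when handling complaints:\n\n{skill}",
        # MCP servers providing database.query, payment.apply_refund, etc.
        # would be configured here; omitted for brevity.
        mcp_servers={},
    )

    # The complaint arrives as plain language; the agent decides which tools
    # to call and in what order, guided by the Skill text.
    async for message in query(
        prompt="My order #1234 hasn't shipped after three days.",
        options=options,
    ):
        print(message)


asyncio.run(main())
```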
Skills Can Call Sub-Skills and MCP Tools
Skills aren't isolated; they can:
Call MCP tools (atomic capabilities)
- database.query: Database query
- payment.apply_refund: Payment interface
- logging.log_event: Logging
Call other Skills (sub-processes)
# Handle Order Complaint Process (Main Process)

If the complaint amount exceeds $1000:
- Call escalate-to-senior.skill (escalate to senior support)

If compensation needs to be calculated:
- Call calculate-compensation.skill (calculate the compensation amount)
It's as natural as function calls.
Main Skill describes overall process, sub-Skills handle specific steps, MCP tools provide atomic capabilities.
Clear hierarchy, flexible composition.
AI Understands Natural Language, Executes Intelligently
When a customer complains "My order hasn't shipped after three days":
AI reads the Skill.md file and understands:
- This is an order complaint scenario, so it should use handle-order-complaint.skill
- First check the order info (call database.query)
- Analyze the complaint type (call ai.analyze_sentiment); it determines this is a "delay issue"
- Follow the delay-issue process: calculate compensation, execute the refund
- Log the event (call logging.log_event)
- Notify the customer
AI makes decisions itself, executes flexibly, no flowchart needed.
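You can actually watch those decisions stream back. A small sketch, assuming the message and block types exported by claude-agent-sdk (AssistantMessage, ToolUseBlock, TextBlock, ResultMessage); the skill/options from the previous sketch are omitted here to keep it short.

```python
import asyncio

from claude_agent_sdk import (
    AssistantMessage,
    ResultMessage,
    TextBlock,
    ToolUseBlock,
    query,
)


async def main() -> None:
    # In practice you'd pass the same options (skill + MCP servers) as above.
    async for message in query(prompt="My order hasn't shipped after three days."):
        if isinstance(message, AssistantMessage):
            for block in message.content:
                if isinstance(block, ToolUseBlock):
                    # e.g. database.query, then ai.analyze_sentiment, then payment.*
                    print(f"tool call: {block.name} -> {block.input}")
                elif isinstance(block, TextBlock):
                    print(f"assistant: {block.text}")
        elif isinstance(message, ResultMessage):
            print(f"done: {message.result}")


asyncio.run(main())
```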
Comparison: Dify vs Skill
| Dimension | Dify Workflow | Skill.md |
|---|---|---|
| Description Method | Graphical nodes + connections | Natural language Markdown |
| Creation Method | Drag, configure, connect | Write a .md file |
| Maintenance Difficulty | Hard to understand with many nodes | Clear at a glance |
| Flexibility | Fixed paths | AI flexible decisions |
| Learning Curve | Learn platform-specific concepts | Natural language, zero barrier |
| Portability | Platform-specific | Plain text, works anywhere |
| Aligns with Human Thinking | ❌ | ✅ |
| Aligns with AI Capability | ❌ | ✅ |
VI. Why Is This a Fundamental Difference and a Paradigm Shift?
1. A Return to the Primitive Human Behavior Pattern
For thousands of years, humans have used natural language to transfer processes.
Dify requires you to describe processes graphically (an Industrial Age artifact).
Skills let you return to natural language (the primitive human pattern).
This isn't technological regression, it's conceptual progress.
2. Unleash the LLM's Core Capability
LLMs are language intelligence, not flowchart executors.
Dify limits the LLM's capability (can only execute fixed paths).
Skills unleash the LLM's capability (understand intent, decide flexibly).
This is the right way to use AI.
3. Lower the Real Barrier
Dify says it lowers the programming barrier, but it creates a new barrier: learning graphical orchestration.
Skills truly lower the barrier: you just describe what you want in human language.
No need to learn programming, learn platforms, learn new concepts.
If you can speak, you can use Skills.
4. Aligns with First Principles
Dify's logic:
- Programming is hard → Use graphics to lower barrier → Create new complexity → Steep learning curve
Skill's logic:
- Programming is hard → Let AI understand human language → Describe processes in natural language → Zero learning curve
Which aligns more with first principles?
5. Paradigm Shift
Old Paradigm (Dify): Humans adapt to machines
- Translate natural language intent to graphical processes
- Solidify flexible logic into node connections
- Shape human thinking into machine-understandable form
New Paradigm (Skills): Machines adapt to humans
- AI directly understands natural language descriptions
- AI decides flexibly based on context
- Humans express intent in the most natural way
This is the essence of the AI era: not making humans more like machines, but making machines understand humans better.
VII. The Complexity Trap of Workflows
Some still say: "My business is complex, I need Workflows."
Let's see what "complex business" looks like in Dify:
A real e-commerce customer service Workflow:
- 147 nodes
- 89 connection lines
- Handles 20+ customer intents
- 3-5 branches per intent
- Plus error handling, logging, compliance, A/B testing...
The experience of maintaining this Workflow:
- Change one node, check 10 connections
- Add one feature, test for 2 days
- New employees still can't make sense of it after 3 days
- Like playing Minesweeper
What if using Skills?
Main Skill (Overall Process):
# E-commerce Customer Service Process
When customer inquires:
1. Understand customer intent
2. Call corresponding sub-Skill based on intent:
- Price inquiry → price-inquiry.skill
- Order status → order-status.skill
- Complaint handling → handle-complaint.skill
- Refund request → process-refund.skill
- ...
3. Log conversation
4. If customer issue unresolved, escalate to human
Sub-Skills (Specific Processes):
# handle-complaint.skill
Detailed steps for handling complaints...
# order-status.skill
Detailed steps for checking order status...
Each Skill is simple and clear, combined to handle complex business.
This is functional programming thinking: decompose complex problems into simple modules.
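To make the decomposition concrete: the Skills are just files, so composing them can be as boring as reading them from a directory. The layout and loader below are hypothetical, not something the SDK requires.

```python
from pathlib import Path

# Hypothetical layout: one .skill.md file per sub-process.
SKILLS_DIR = Path("skills")


def build_instructions(main_skill: str) -> str:
    """Concatenate the main Skill with the sub-Skills it may delegate to."""
    parts = [(SKILLS_DIR / main_skill).read_text(encoding="utf-8")]
    for sub in sorted(SKILLS_DIR.glob("*.skill.md")):
        if sub.name != main_skill:
            parts.append(sub.read_text(encoding="utf-8"))
    # The result is still just natural language: the agent reads the main
    # process first and consults a sub-Skill when the process says to.
    return "\n\n---\n\n".join(parts)


print(build_instructions("ecommerce-support.skill.md"))
```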
VIII. MCP Protocol: Standardized Atomic Capabilities
One more key point: MCP (Model Context Protocol).
MCP Tools = Standardized Atomic Capabilities
# MCP Tools that Skills Can Call
## Database Operations
- database.query: Query data
- database.insert: Insert data
## File Operations
- filesystem.read: Read file
- filesystem.write: Write file
## Third-party Integrations
- slack.send_message: Send message
- email.send: Send email
- payment.process: Process payment
Skills describe in natural language how to combine these MCP tools.
AI understands these descriptions and intelligently calls the appropriate tools.
It's like:
- MCP tools are "LEGO bricks" (standardized components)
- Skills are "building instructions" (natural language describing how to combine)
- AI is the "smart builder" (understands instructions, builds flexibly)
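And here's roughly what one of those bricks looks like in code. A minimal sketch of an in-process MCP tool, assuming claude-agent-sdk's tool decorator and create_sdk_mcp_server helper; the tool name, input schema, and the 10%-per-day compensation rule are made up for the example (only the 50% cap comes from the Skill's notes earlier).

```python
import asyncio
from typing import Any

from claude_agent_sdk import ClaudeAgentOptions, create_sdk_mcp_server, query, tool


# A made-up "payment" brick: compensation for a delayed order.
@tool(
    "calculate_compensation",
    "Calculate compensation for a delayed order",
    {"order_value": float, "days_late": int},
)
async def calculate_compensation(args: dict[str, Any]) -> dict[str, Any]:
    # Illustrative rule: 10% of order value per day late, capped at 50%.
    amount = min(args["order_value"] * 0.1 * args["days_late"],
                 args["order_value"] * 0.5)
    return {"content": [{"type": "text", "text": f"Compensation: ${amount:.2f}"}]}


payment_server = create_sdk_mcp_server(
    name="payment", version="1.0.0", tools=[calculate_compensation]
)


async def main() -> None:
    options = ClaudeAgentOptions(
        mcp_servers={"payment": payment_server},
        allowed_tools=["mcp__payment__calculate_compensation"],
    )
    async for message in query(
        prompt="Order #1234 ($80) is 3 days late. What compensation should we offer?",
        options=options,
    ):
        print(message)


asyncio.run(main())
```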
IX. Advice for Dify Users
If you're still using Dify, I'm not saying you should abandon it immediately.
But I suggest you think:
Are you describing processes, or drawing flowcharts?
- If the former, Skills might suit you better
- If the latter, ask yourself why you're drawing it in the first place
How many nodes does your Workflow have?
- Less than 10: Still maintainable
- 10-50: Starting to hurt
- 50+: Consider refactoring to Skills
Do you spend more time developing or debugging?
- If debugging, it might be the tool's problem, not yours
Try describing your process in natural language
- If you can describe it clearly, you don't need graphics
- If you can't describe it clearly, drawing won't help either
X. Conclusion: Return to Nature, That's the Future
Humans have used natural language to transfer knowledge and processes for thousands of years.
The Industrial Revolution invented flowcharts because machines don't understand human language.
In the AI era, LLMs understand human language.
Why do we still use flowcharts?
Claude Agent SDK's Skill mechanism returns us to the most natural way:
Describe what you want in natural language, AI understands and executes intelligently.
No need to learn programming, learn platforms, draw flowcharts.
If you can speak, you can build Agents.
This isn't technological regression, it's a return to the primitive human behavior pattern.
This isn't a tool upgrade, it's a paradigm shift.
From "humans adapt to machines" to "machines understand humans".
This is what the AI era should look like.
What do you think? Share your thoughts in the comments.
If you're still struggling with whether to learn Dify's 147 node configurations, my advice is:
Don't struggle. Write a .md file, describe what you want in human language, let AI handle the rest.