Why Smart People Still Struggle with AI

Recently, I was chatting with a friend outside the tech circle who wanted to learn about Clawbot/OpenClaw. The constant buzz on social media had given him FOMO, and he was eager to understand what this new tool was all about.

This reminded me of a year ago, when I had recommended that he use an AI IDE for his daily document and project management tasks, but he still hadn't acted on it.

When I asked him about the reason in the context of Clawbot, his answer was quite interesting: "I feel like I’m not yet capable of ‘training’ the AI properly, and I don’t know how to give it clear instructions, so I haven’t used it."

His response suddenly made me realize something: perhaps many people struggle to use AI effectively not because they can’t learn clever and sophisticated prompts, but because they fundamentally misunderstand AI tools.


All the tools we’ve invented in the past, whether physical tools like wrenches, screwdrivers, and engines, or digital tools like Office, Photoshop, and browsers, fall under the category of deterministic tools. Their common feature is that the process is visible and the results are predictable. You input a formula, apply a filter, or search for a specific keyword, and the system strictly follows preset logic to return a fixed result.

But AI tools are the complete opposite. Their working process is a black box, and the results are unpredictable—it’s a probabilistic "game."

Everyone using AI tools has experienced frustration: when you try to apply the habits of deterministic tools to control a probabilistic one, expecting a deterministic result under the halo of "AI can do anything," you are easily discouraged by the randomly fluctuating outputs.

Using AI tools is like playing a navigation planning game.

If we use currently popular AI Agent tools like Claude Code, Antigravity, or Cursor, we’ll find they act like experienced guides. You tell them the destination, and they try to autonomously plan the route, choose transportation, and even book the itinerary. Even so, they might still lead you astray due to outdated information or logical leaps.

However, most users are only dealing with a basic Chatbot. Asking a Chatbot for directions is like asking a complete stranger who knows nothing about you. For a black-box model with no spatial awareness, no knowledge of your current location, and no understanding of your budget or time constraints, such instructions are disastrous. It can only rely on probability to "guess" how you want to get there, or even hallucinate and "fabricate" a shortcut that doesn’t exist.

This is why many people feel that AI often talks nonsense. Users throw a "vague wish" into a "probabilistic black box" but expect to get a "deterministic solution."


Besides misjudging the nature of the tool, my friend’s use of the word "training" revealed another cognitive misconception: anthropomorphism.

He treated AI as a "person" or "assistant" that requires communication, a breaking-in period, and even nurturing. Under this mindset, he believed that using AI requires advanced communication skills, or the ability to write complex prompts like magic spells. This expectation sets an extremely high psychological barrier, leading to reluctance and hesitation.

Web-based Chatbots are difficult to use well because the pure dialogue mode lacks factual anchors. You cannot "train" a model that predicts the next token by probability into becoming better at remembering or understanding you. On the contrary, the longer the context grows and the more dialogue rounds accumulate, the worse the model's hallucination problem gets.

A clear limitation of large models is that they cannot perceive time or prioritize needs. In other words, their working space is an undifferentiated heap: every turn in the context carries equal weight for the model. So the more the user "trains" it, the more likely it is to get lost.

Since the essence of AI is an unpredictable black box, the key to obtaining usable results lies not in improving "communication skills" but in applying physical constraints.

Those AI tools that generate high productivity (such as AI IDEs) are effective not because the underlying model is smarter, but mainly because they introduce engineering structures. They use the file system to provide physical boundary constraints; retrieval to offer more accurate context; toolchain CoT to limit output paths; Tool Use to restrict output formats; and multi-agent systems to enhance attention.

These objective structures act like scaffolding, supporting the stability of AI. Users don’t need to "persuade" or "train" it as they would with a person; they just need to throw relevant files into it, and the AI can work within the defined boundaries.
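A minimal sketch of what such a "physical constraint" can look like in code: instead of persuading the model, the caller declares a boundary (valid JSON, a fixed set of allowed actions) and mechanically rejects anything outside it. The `call_model` function here is a hypothetical stand-in for a real LLM API call, and the action names are invented for illustration.

```python
import json

# Hypothetical stand-in for an LLM API call; in a real system this
# would be a network request returning probabilistic text.
def call_model(prompt: str) -> str:
    return '{"action": "rename_file", "target": "notes.txt"}'

# The boundary: the only actions the system will ever execute.
ALLOWED_ACTIONS = {"rename_file", "move_file", "delete_file"}

def constrained_call(prompt: str, retries: int = 3) -> dict:
    """Accept only output inside the declared boundary; retry otherwise."""
    for _ in range(retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)  # structural constraint: must be valid JSON
        except json.JSONDecodeError:
            continue                # don't argue with the model, just retry
        if data.get("action") in ALLOWED_ACTIONS:  # semantic constraint
            return data
    raise ValueError("model never produced output inside the boundary")

result = constrained_call("Tidy up my notes folder")
```

The point is that reliability comes from the validation loop, not from a cleverer prompt: a malformed or out-of-bounds answer is simply discarded before it can do anything.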

Thus, the reliability of AI Agents stems from external constraints and engineered context management, with little relation to the user’s linguistic rhetorical skills.


From social media shares about Clawbot/OpenClaw, we can see one impressive "Wow" moment after another. But on the other hand, there are many more "unspoken failures"—frustrating moments caused by complicated configurations or decision-making errors.

Many people even purchase Mac minis or invest significant effort in setting up complex Sandbox environments just to get these tools to work.

Compared with the majority who dive in blindly out of FOMO, those who actually get positive results from such software are relatively few. At this stage, when AI Agents are still quite rudimentary, blind hardware purchases and trend-chasing are not rational, and Agent malfunctions may even expose users to security risks.

Even if there were truly useful Agent tools, could you really use them well? After all, the core reason you can’t use AI effectively is that you’re still stuck in the old-era mindset of tools.

When AI starts taking on more "navigation planning" and "automated execution," your work focus will inevitably shift—from being a hands-on executor swinging a sledgehammer to becoming a decision-maker responsible for value judgment and goal management.

Change your mindset, let go of the obsession with "training," and stop treating it like a "person." When you learn to "configure" it rather than "command" it, you’ve truly crossed the cognitive threshold of the AI era.