
During the Spring Festival I successfully deployed and launched my personal website, thinkitlab.org; a year ago, my first vibe-coding project, Realtime-Caption, took a full four months to complete.
Although the two projects differ in scale (the former a website, the latter a Windows desktop application), the actual vibe-coding experience was worlds apart.
When working on Realtime-Caption, I first forked an open-source ASR backend from GitHub and vibe-coded on top of it. Back then I was like a porter, constantly switching between the browser and the editor, teetering between a burst blood vessel and a heart attack, with 80% of my time spent playing whack-a-mole with bugs.
This year, by contrast, it took me just three weeks to build my personal website, deploy it to Cloudflare, and launch automatic LLM translation and SEO keyword generation.
Many people talk about how smart AI has become, but in my view, what truly makes us faster are the advancements in engineering.
AI's flaws still exist, but as tools advance they constrain AI more and more tightly, making those flaws less fatal.
The Certainty of Flaws

During the development of thinkitlab.org, I confirmed a fact: AI's flaws will persist for a long time.
It will still confidently spout nonsense, still lack a sense of time, and still be unable to judge the priority level of requirements.
It will keep producing bugs in specific niche areas, get stuck in the mud of old problems within a single conversation, and, of course, overhaul the entire framework just to tweak a minor UI detail. This is a reality we need to accept; it will not disappear anytime soon.
But at the same time, tools have begun to constrain the model through its environment. Since it hallucinates, the IDE provides an automatically running test environment that validates code the moment it is generated. Since it lacks a sense of time, give it a file system and Git history so it knows where it currently stands. Since it cannot judge the priority of requirements, force it to plan and maintain to-do lists.
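The first of those constraints, validating code the moment it is generated, can be sketched as a tiny validate-on-generate loop. This is purely illustrative (the function name and the 10-second timeout are my own assumptions, not any particular IDE's implementation):

```python
import os
import subprocess
import sys
import tempfile

def validate_generated_code(code: str) -> tuple[bool, str]:
    """Run model-generated code in an isolated process immediately,
    so a hallucinated API fails here instead of landing in the codebase."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=10
        )
        return result.returncode == 0, result.stderr
    finally:
        os.unlink(path)

# A hallucinated import is caught before it ever reaches the project:
ok, err = validate_generated_code("import nonexistent_module_abc")
print(ok)  # prints False
```

A real IDE would run the project's own test suite rather than the snippet in isolation, but the principle is the same: the environment, not the model, supplies the ground truth.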
In the past we were always waiting for a more perfect model. Now I understand that, for a long time to come, interaction design should be built around these flaws. This reinforcement work is where the real application value lies.
The Certainty of Components
Through this short three-week development cycle, I found that the core components of a useful AI application have become very well-defined.
First is information organization. The problems AI can handle depend heavily on the quality of the information it receives. We need to feed the model information on demand: capture the multi-dimensional, scattered pieces, structure and organize them, and only then hand them over.
During development, the IDE does not send the entire project to the LLM at once. Instead, it retrieves the directory tree, locates the relevant files, reads their code, and extracts snippets according to the requirement.
Only by automating information orchestration for specific scenarios and needs can we truly solve everyone's personalized problems.
Second is memory management. The IDE perceives the global architecture of the whole project: by maintaining a complete code index and association map, it remembers previous modification logic and the dependencies between files, and makes the model respect them.
And like Google's Antigravity, it produces a complete walkthrough, combined with to-do lists, to form end-to-end project management, keeping the AI constrained within a bounded requirement framework.
This "external memory" and framework constraint provided by tools ensure that AI doesn't go astray in complex projects due to forgetting the context.
Finally, frontend rendering. This is what struck me most this time. We cannot keep communicating with AI solely through text; that interaction burden is too heavy.
During development, the IDE uses GenUI technology to directly render a preview interface. Going a step further, it integrates browser-use capabilities to simulate actual user operations in the background for testing. This ability hides obscure code logic behind the scenes, delivering definite visual results and operational feedback, allowing us to validate AI's logic in reverse by observing the real running state.
This instant rendering and automated verification are the core guarantees of efficiency improvement.
The Certainty of Unconscious Automatic Execution
Projects like OpenClaw have become popular recently because they validate another certainty: unconscious automatic execution.
We do not need to, and should not have to, become expert question-askers. After all, the ability to describe a fact completely is scarce. For the vast majority of people, the burden of formulating questions is too heavy.
The success of OpenClaw lies in its demonstration of a possibility for "bootstrapping." It possesses a real-time feedback mechanism based on environmental perception. Once deviations, errors, or changes in information are detected during execution, it can immediately adjust its strategy automatically, achieving self-correction and response within a closed loop (even if it does so by continuously looping queries and stacking markdown).
This automatic execution interaction experience is the truly unconscious experience users expect. We don't need to continuously monitor AI's working status, write lengthy, complex instructions thousands of words long, or learn various "magic operations." We just need to set a task objective, such as "help me operate my social media account," and AI can proactively observe the environment, make autonomous decisions, and complete the work (even if it might delete all your email records).
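Stripped to its skeleton, that closed loop is just observe, decide, act, and re-plan on failure. The sketch below is purely illustrative (the callback signatures and the step budget are my own assumptions, standing in for real perception and tooling):

```python
from typing import Callable

def run_agent_loop(
    goal: str,
    observe: Callable[[], str],
    decide: Callable[[str, str], str],
    act: Callable[[str], bool],
    max_steps: int = 20,
) -> bool:
    """Observe the environment, decide the next action toward the goal,
    act, and self-correct on failure until done or the budget runs out."""
    for _ in range(max_steps):
        state = observe()             # perceive the current environment
        action = decide(goal, state)  # plan the next step from goal + state
        if action == "done":
            return True               # the agent judges the goal reached
        if not act(action):
            continue                  # on failure, loop back and re-plan
    return False                      # budget exhausted without success
```

Everything that makes systems like OpenClaw feel "unconscious" lives inside `observe` and `decide`; the loop itself, and the step budget that keeps it from running forever, is the engineering constraint.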
The popularity of this shift from "user questioning" to "AI automatic closed loop" proves that it is a definite trend in AI applications.
The changes in AI over the past year have convinced me that, in the short term, gains in raw intelligence will struggle to match the leap from GPT-3.5 to GPT-4o. The foreseeable future holds mostly engineering progress.
And how much AI can improve depends on how well we can leverage these certain tool constructions to weaken its known flaws.
We should amplify AI's strengths by focusing on information organization, memory management, rendering capabilities, and automatic execution, creating a tightly constrained operating environment for it.
Leveraging these certain capabilities well is the definite direction in which product people can create value.