
Recently, I helped a friend build an automated tool for gathering bidding leads. His requirement was actually quite simple: based on the company's business scope, search for relevant bidding information in search engines every day and filter out potentially valuable leads.
He had been doing this manually, spending a significant amount of time each day. So, he hoped for an automated tool to replace this repetitive labor.
Initially, I too thought this was a very simple requirement. Web scraping is a mature technique, and an AI IDE should have no trouble writing the project. And since we were already using AI, why not add some LLM capabilities: automatically parsing company materials to generate keywords, and folding LLM evaluation and analysis into the process.
So, I quickly got to work.
First, I discussed the requirements with Gemini, wrote a requirements document, and created an interaction specification. Then I placed both documents in the project root so that Codex could generate a development plan from them and implement it step by step. Since this tool is a standalone application that uses Playwright for browser automation rather than a large-scale web scraper, it was relatively easy to implement and raised no compliance concerns.
After a couple of days of troubleshooting, I finished the first version. Users could upload company materials, the LLM would parse them and generate keywords, and the tool would then drive Playwright to run the searches. Users could also upload company qualifications, and the system would use the LLM to evaluate and match every piece of bidding information it captured.
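The first version's flow can be sketched roughly as below. Every function and name here is a hypothetical illustration, not the actual code; the LLM and Playwright steps are stubbed with trivial stand-ins so the shape of the pipeline is visible without network access.

```python
def generate_keywords(company_profile: str) -> list[str]:
    """Stand-in for the LLM call that extracts search keywords from
    uploaded company materials. Here we fake it by picking
    capitalized terms from the profile text."""
    return [w.strip(".,") for w in company_profile.split() if w[:1].isupper()]


def search(keyword: str) -> list[dict]:
    """Stand-in for the Playwright step that runs a search-engine
    query and scrapes the result list."""
    return [{"title": f"{keyword} procurement notice", "url": "https://example.com"}]


def evaluate(result: dict, qualifications: str) -> float:
    """Stand-in for the LLM relevance check against the company's
    qualifications. Returns a 0-1 score."""
    return 1.0 if "procurement" in result["title"] else 0.0


def run_pipeline(profile: str, qualifications: str, threshold: float = 0.5) -> list[dict]:
    """Orchestrate: keywords -> searches -> LLM-scored filtering."""
    leads = []
    for kw in generate_keywords(profile):
        for result in search(kw):
            if evaluate(result, qualifications) >= threshold:
                leads.append(result)
    return leads


leads = run_pipeline("Acme builds Bridges", "civil engineering grade A")
print(len(leads))  # prints 2: one lead per capitalized keyword
```

Note that in this design the LLM sits at the edges (keyword generation, scoring) while the crawl itself is still entirely rule-driven, which is exactly the weakness described next.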
But the results were not ideal.
Many portal sites' results couldn't be drilled into any further, yielding nothing but a title. The result list was mixed with a large amount of news content, and some of the bidding information was outdated. After repeated discussions with the AI, I tried tightening the filtering rules and switching from portal list pages to targeted scraping, but the results were still mediocre.
Staring at the hideously ugly interface again, I quickly realized where the problem lay.

I asked AI to help me implement a tool, but what I ended up with was a tool from the old era.
Although I had never done web scraping before, I knew that writing rules, cleaning data, and maintaining per-site adaptations were inherently complex tasks. Yet here I was with AI at hand, still solving the problem in the traditional way.
So, my problem wasn't technical; it was in my own way of thinking.
My product mindset was still stuck in the past, treating AI merely as a "faster programmer." But AI isn't just for writing tools; it is part of the tool itself.
So, in the second phase, I began adjusting my approach, reducing rules and structured processes, and shifting toward semantic processing.
Solutions like browser-use are more intelligent, but their token consumption is too high. Instead, I chose to add a thin layer of intelligent decision-making on top of Playwright, giving the program some ability to understand what it was browsing and clicking.
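The idea of that thin semantic layer can be sketched as follows: before navigating, the crawler hands the visible link texts on a results page to an LLM and asks which ones look like live bidding notices rather than news. The LLM judgment is stubbed with a keyword heuristic here so the example runs offline, and the Playwright side (scraping the links in the first place, e.g. via `page.get_by_role("link")`) is only mentioned in comments; all names are illustrative assumptions.

```python
# Stand-in vocabularies for the stubbed "LLM" judgment below.
BID_HINTS = ("tender", "bidding", "procurement", "rfp")
NOISE_HINTS = ("news", "analysis", "opinion")


def looks_like_bid_notice(link_text: str) -> bool:
    """Stand-in for an LLM yes/no judgment on a single link text.
    In the real tool this would be a short model prompt."""
    text = link_text.lower()
    if any(n in text for n in NOISE_HINTS):
        return False
    return any(h in text for h in BID_HINTS)


def pick_links(link_texts: list[str]) -> list[str]:
    """Decide which links are worth a click before navigating.
    link_texts would come from Playwright, e.g. the inner texts of
    page.get_by_role("link") matches on a search-results page."""
    return [t for t in link_texts if looks_like_bid_notice(t)]


links = [
    "2024 bridge construction tender announcement",
    "Industry news: steel prices fall",
    "Municipal road procurement RFP",
]
print(pick_links(links))  # keeps the two notices, drops the news item
```

Because the judgment happens per page on short snippets of text, token cost stays far below a full agentic-browsing approach while still replacing the brittle per-site rules.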
At the same time, I also rethought the true goal of this project. Its goal wasn't to be "better" but to "replace." It only needed to replace my friend's daily manual search process, freeing him from spending time on repetitive tasks and allowing him to focus on more critical matters. As long as this was achieved, it would already save him 90% of the time he spent on this task.
Looking back, many of the things I did earlier were actually unnecessary.
For example, the more elaborate keyword generation and the more detailed data evaluation. These tasks didn't need AI to do for him: he had been searching with his own keywords all along and could quickly judge which information was valuable, so he already had that capability. What truly needed to be replaced was the search process itself.
Overdeveloping these analytical capabilities actually led me away from the core of the problem.
People are always the main actors, and tools are merely extensions of people.
This experience also made me rethink a bigger issue.
Some scenarios genuinely require AI to take part in value judgments, such as when I use AI to write code, since many of the technical issues are beyond my own ability. But most of the time, people don't need to outsource their value judgment to AI entirely.
The tool for my friend is still under development, but this story of making mistakes can end here.
It's just like the many companies that deploy AI customer service to cut costs, or even to deflect responsibility. The original purpose of the customer service role was to soothe consumers' emotions, collect genuine feedback, and improve products and services. It was meant to be a channel connecting businesses and consumers, but many companies later turned it into a "firewall."
Now, with AI, there's an opportunity to redivide the labor: repetitive, mechanical, process-driven communication can be handed to AI, while human agents focus on understanding consumers, solving problems, and improving products, letting the profession return to its real value.
Technology should not diminish human value. What it truly needs to take over are the rigid, dead-end processes that make people more machine-like.