When using large model tools, we often focus on what they output, but overlook what they "see." As more and more large models gain internet search capabilities, their answers increasingly rely on the quality and coverage of online information. What we can do is use the right prompts to help them "see better."
This article introduces two simple but practical methods:
- Multilingual questioning (to allow the model to gain more perspectives)
- Defining information sources (to reduce noise and focus on credible content)
The Blind Spot of Default Internet Search: Chinese Search = Chinese Answers
When you ask a large model a question in Chinese with "internet search" enabled, it typically mimics the behavior of an ordinary user: searching for Chinese content in Chinese.
This may sound fine, but it actually hides a huge waste of information.
We know that large models are trained on multiple languages; they understand data in English, Japanese, German, and even Arabic. But if you always ask questions only in Chinese, it will only retrieve Chinese web pages. You're paying for the model, yet it's essentially just acting as a "Chinese Baidu" for you. Isn't that a loss?
Multilingual Questioning: One Question, Multiple Language Solutions
Each language has its own media ecosystem and expression style:
- Chinese information tends to focus on market trends and news updates;
- English information is more concentrated on cutting-edge technology, research papers, and industry investments;
- Japanese may emphasize detailed execution and product refinement.
When you want a comprehensive, multi-perspective answer, you might try this method:
- Ask the question in Chinese and let the model search online to answer once;
- Then ask the same question in English and let the model search English web pages;
- Finally, ask it to integrate the findings and output the final answer in Chinese.
For example:
Prompt (English):
Please explain the recent development of humanoid robots in the past 6 months. Use English sources only. Then, combine Chinese and English findings, and write a final summary in Chinese.
This questioning method greatly enriches the model's reference sources, letting it synthesize multilingual information before responding, which yields answers with more depth and credibility.
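The three-step flow above can be sketched as plain prompt templates. This is a minimal sketch: `build_steps` and the example question strings are hypothetical, and sending each prompt to a search-enabled model is left to whatever API you actually use.

```python
# Sketch of the three-step multilingual questioning workflow.
# Sending the prompts to a search-enabled model is out of scope here;
# this only builds the prompts, in order.

QUESTION_ZH = "请介绍近六个月人形机器人的最新进展。"
QUESTION_EN = ("Please explain the recent development of humanoid robots "
               "in the past 6 months.")

def build_steps(question_zh: str, question_en: str) -> list[str]:
    """Return the three prompts to send, in order."""
    return [
        # Step 1: ask in Chinese, search Chinese web pages.
        f"{question_zh} 请联网搜索中文资料后回答。",
        # Step 2: ask in English, restrict to English sources.
        f"{question_en} Use English sources only.",
        # Step 3: integrate both rounds into a final Chinese summary.
        "Combine the Chinese and English findings above and "
        "write a final summary in Chinese.",
    ]

for prompt in build_steps(QUESTION_ZH, QUESTION_EN):
    print(prompt)
```

Each prompt is meant to be sent in a single conversation, so the third step can reference the earlier findings.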
Defining Information Sources: Teaching the Model to "Be Picky"
If you ask questions in Chinese, you'll find that large models often cite less reliable information sources, such as CSDN (which frequently features spliced articles, ad content, or low-quality reposts).
The solution is simple: add "information source filtering" requirements to the prompt.
Example:
Prompt:
Explain the technological developments in humanoid robots over the past six months. #When generating the answer, do not use information from CSDN.
The effect of this is: although the model may still "see" these sources during online searches, it will actively avoid them in its answers, choosing higher-quality sources (such as official reports, tech media, international data, etc.) to generate the answer.
Similarly, you can also reverse the approach:
Prompt:
Please explain the recent developments in humanoid robots over the past six months based on sources such as IEEE Spectrum and MIT Technology Review.
This method helps you pull information from specific sources to get answers that better meet your needs.
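Both directions of source control can be captured as small template helpers. A minimal sketch, assuming the "#instruction" style used in the examples above; the function names and site lists are illustrative, not part of any real API.

```python
# Sketch of source-filtering prompt templates, following the
# "#instruction" convention from the examples above.

def with_excluded_sources(question: str, excluded: list[str]) -> str:
    """Append a 'do not use' instruction for noisy sources."""
    sites = ", ".join(excluded)
    return (f"{question} #When generating the answer, "
            f"do not use information from {sites}.")

def with_preferred_sources(question: str, preferred: list[str]) -> str:
    """Ask the model to rely on named, trusted sources."""
    sites = " and ".join(preferred)
    return f"{question} Base your answer on sources such as {sites}."

q = ("Explain the technological developments in humanoid robots "
     "over the past six months.")
print(with_excluded_sources(q, ["CSDN"]))
print(with_preferred_sources(q, ["IEEE Spectrum", "MIT Technology Review"]))
```

The two helpers are deliberately symmetric: one subtracts noise, the other pulls the model toward sources you trust, and both can be combined in a single prompt.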
Recommended Usage (Practical Workflow)
- Before asking: consider whether the question merits a multilingual answer, and whether it is prone to information pollution.
- Set a search strategy:
  - Worried about one-sided information ➜ use bilingual Chinese-English questioning + integration;
  - Worried about information pollution ➜ add a #filter sources instruction;
  - Trust certain media ➜ add a #specify sources instruction.
- Integrate the output ➜ produce a Chinese summary as the final conclusion.
The "Search Power" of Large Models is Controllable
Large models are not black boxes. As long as you understand how they "see" (what they look at, how they select, and how they express themselves), you can intervene in their input path, rather than just waiting for them to hand you an answer.
From a product manager's perspective, this is essentially optimization at the data-input end, with a simple goal: to make the model "see accurately, see more, and see cleanly."
You don't need to understand technology; you just need to master the right way of asking questions to unleash the true value of large models.