According to a study by the University of Washington, nearly one in two private investors uses AI tools such as ChatGPT to research stocks or prepare investment decisions. But can artificial intelligence be trusted? Researchers at HHL Leipzig Graduate School of Management, together with colleagues from Austria, have investigated where ChatGPT gets its information. The results are clear.
The researchers analyzed over 2,500 queries about 20 listed companies between September and November 2025. A team of 20 people asked questions about annual financial statements, executive pay and sustainability issues, evaluating more than 24,000 sources. In 85 percent of cases, ChatGPT drew directly on the companies' own content, such as annual reports. The key finding: the file format is decisive. Annual reports in HTML format appear three times as often in ChatGPT's responses as those in PDF format.
Quality depends on the format
A fact check of 200 ChatGPT responses reveals the weaknesses. Only 63 percent of the AI's statements are completely correct; a quarter are incomplete and 20 percent contain incorrect information. The difference between the formats is considerable: HTML reports lead to correct results in 71 percent of cases, PDF reports in only 54 percent.
"This makes it all the more important for companies not to put technical hurdles in the way of AI," warns Eloy Barrantes of the Vienna-based consulting agency nexxar. Where machine-readable data is lacking, ChatGPT falls back on external sources, which increases the risk of misinformation. Prof. Dr. Henning Zülch of HHL Leipzig summarizes: "Companies that prepare their data in an AI-friendly way help determine what potential investors read about them."
Further information is available in the study report.
Information on the University of Washington study.