Marwala, Tshilidzi. "Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth," United Nations University, UNU Centre, July 18, 2024

"Real-world consequence"

This disparity is more than a theoretical issue; it has real-world consequences. AI systems are increasingly used in hiring decisions, performance evaluations and promotions. If these systems rely solely on accurate but incomplete data, they risk reinforcing biases and overlooking critical human factors, resulting in unfair or ineffective decisions.
The mean square error (MSE), a standard metric widely used in AI training to assess prediction accuracy, is at the root of why AI confuses accuracy with truth. While MSE is adequate for evaluating continuous numerical predictions, it is poorly suited to assessing discrete or abstract concepts such as truth.
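
To make the point concrete, here is a minimal, invented Python sketch (not from the article): MSE measures numerical closeness, so a prediction can earn a low error score while committing to nothing true at all.

```python
# Minimal, invented sketch (not from the article): MSE rewards numerical
# closeness, MSE = (1/n) * sum((p_i - t_i)^2), so it has no notion of a
# claim being true or false.

def mse(predictions, targets):
    """Mean square error between parallel lists of numbers."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

# Continuous prediction: MSE is a useful measure of closeness.
print(mse([2.9, 4.1], [3.0, 4.0]))  # 0.01 -- small error, meaningful signal

# Discrete truth: a model that hedges at 0.5 on every true/false claim
# (1 = true, 0 = false) earns a modest-looking MSE of 0.25 while never
# asserting any truth at all.
truths = [1, 0, 1, 1, 0]
print(mse([0.5] * len(truths), truths))  # 0.25 -- low error, zero truth
```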

Forman, Craig. "The Rise of Artificial History," May 15, 2024

" AI is creating fake history and its a problem"

When misused, these tools can dramatically undermine reality. Technologists and scholars are already using the phrase “Liar’s Dividend” for the inverse ability of political actors to deny real content as a deepfake and so avoid accountability. Now an even deeper and more dangerous erosion of the truth is emerging. We call this “artificial history”: the ability of generative AI to create highly credible but entirely fictional synthetic history.

Marwala, Tshilidzi. "Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth," United Nations University, UNU Centre, July 18, 2024

" Critical for a true assessment "

Consider Amazon’s AI-powered recruitment tool, created to streamline hiring by evaluating CVs and selecting candidates. The tool analysed historical data to identify patterns among successful hires at the company. However, it was later discovered that the system was biased against female candidates. This bias arose because the historical data used to train the AI primarily comprised male applicants’ CVs, reflecting gender disparities in the tech industry. As a result, the AI incorrectly downgraded CVs containing terms more commonly found on women’s résumés, such as “women’s chess club captain”, even though these terms did not indicate a lack of qualification. This example shows how AI accuracy, based on historical data, does not reflect the truth that women and men are equally talented.
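
To illustrate the mechanism, here is a hypothetical toy model in Python (invented for this note, not Amazon's actual system): per-term weights are learned from skewed historical hiring labels, so a term that merely correlates with women's CVs ends up penalized.

```python
# Hypothetical toy model (not Amazon's system): learn per-term weights from
# skewed historical hiring labels, then score a new CV. All data here is
# invented to show how bias in the labels becomes bias in the model.

from collections import defaultdict

# Historical records: (terms on CV, hired?). Past hires skew male, so terms
# correlated with women's CVs co-occur with "not hired" labels.
history = [
    ({"python", "leadership"}, True),
    ({"python", "chess club"}, True),
    ({"java", "leadership"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's chess club"}, False),
]

def term_weights(records):
    """Weight each term by the hire rate among CVs containing it."""
    hits, totals = defaultdict(int), defaultdict(int)
    for terms, hired in records:
        for t in terms:
            totals[t] += 1
            hits[t] += hired
    return {t: hits[t] / totals[t] for t in totals}

def score(cv_terms, weights):
    """Average the learned weights of known terms; higher = more favored."""
    known = [weights[t] for t in cv_terms if t in weights]
    return sum(known) / len(known) if known else 0.5

weights = term_weights(history)

# Two equally qualified CVs diverge solely on the gendered term: the model
# is "accurate" to its biased history, not to the candidates' actual merit.
print(score({"python", "chess club"}, weights))          # ~0.83
print(score({"python", "women's chess club"}, weights))  # ~0.33
```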

University of Maryland, Jan 13, 2026

" It cannot accurately produce its sources"

Currently, if you ask an AI to cite its sources, the sources it gives you are very unlikely to be where it actually pulled the information from. In fact, neither the AI nor its programmers can truly say where in its enormous training dataset the information comes from. As of summer 2023, even an AI that provides real footnotes is not citing the places the information came from, just an assortment of webpages and articles roughly related to the topic of the prompt. Prompted again, the AI may provide the exact same answer but footnote different sources.
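
A minimal, invented sketch of why such footnotes are not provenance: when answer generation and source retrieval are independent steps, the same answer can ship with different citations on each run. Every name and document below is hypothetical.

```python
# Illustrative sketch (invented, not any real system's code): the answer
# comes from opaque model weights; the footnotes come from a separate search
# over topically related pages. The two steps never touch each other.

import random

def generate_answer(prompt):
    """Stand-in for an LLM: the answer is baked into the weights; no record
    survives of which training documents produced it."""
    return "The Peace of Westphalia was signed in 1648."

def retrieve_footnotes(prompt, corpus, k=2):
    """Stand-in for footnote retrieval: pick pages that merely overlap the
    prompt's topic, independent of how the answer was produced."""
    related = [doc for doc in corpus if "westphalia" in doc.lower()]
    return random.sample(related, k)

corpus = [
    "Wikipedia: Peace of Westphalia",
    "Blog post: Westphalia and modern sovereignty",
    "Textbook chapter: Westphalia, 1648",
    "Unrelated page: sourdough recipes",
]

prompt = "When was the Peace of Westphalia signed?"
for run in range(2):
    # Same answer each time; footnotes vary because they are attached after
    # the fact rather than traced from the model's training data.
    print(generate_answer(prompt), retrieve_footnotes(prompt, corpus))
```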
