AI Eats Its Own Tail? How ChatGPT Is Draining Wikipedia While Relying on It
Nov 2, 2025
The rise of ChatGPT is reshaping how people seek knowledge online, drawing readers away from Wikipedia for quick, digestible summaries. Yet, Wikipedia remains a cornerstone of factual data that fuels AI models themselves. This growing irony raises urgent questions about the sustainability of human-driven knowledge in an AI-dominated era.
Selective Shift: Wikipedia traffic dips most in topics where ChatGPT offers similar short-form answers, while niche or complex entries remain resilient.
Feedback Risk: Declining contributions threaten the richness of Wikipedia, potentially degrading future AI models that rely on it for training.
Human-AI Balance: Without stronger digital literacy and incentives for human editors, both Wikipedia and the AIs trained on it risk falling into a cycle of homogenization and lower-quality knowledge.
In a rather peculiar turn of events, the rise of ChatGPT and other LLMs has led to a decline in the usage of Wikipedia, at least for some topics, even though Wikipedia is one of the major sources of training data for those very models. This raises intriguing questions about the future of online knowledge-sharing platforms. As more individuals turn to AI for quick answers, the collaborative and curated nature of Wikipedia may face challenges in retaining its user base. A new study from Columbia Business School, MIT, and Dartmouth reveals that the launch of ChatGPT in late 2022 has begun diverting readers away from Wikipedia, particularly for articles that closely resemble the AI's conversational summaries.
A Paradox of LLMs Degrading Their Source
Wikipedia is the backbone of much of the internet’s factual infrastructure. It’s also a primary training source for large language models like ChatGPT. If usage and contributions decline, the quality and breadth of Wikipedia may suffer. And that could, paradoxically, degrade future generations of the LLMs that are drawing people away from it.
Additionally, reliance on LLMs could lead to a homogenization of information, where diverse perspectives may become overshadowed by the dominant narratives present in the training data. Furthermore, the implications for fact-checking and the accuracy of information become even more critical, as users might inadvertently accept AI-generated responses as definitive answers without further verification. This shift might prompt a need for educational initiatives that emphasize digital literacy and critical thinking in an increasingly AI-driven information landscape.
While overall Wikipedia traffic has held steady or even grown in some areas, the research highlights a selective shift: users are increasingly turning to large language models (LLMs) like ChatGPT for quick, digestible explanations on straightforward topics, potentially threatening the long-term health of the crowdsourced encyclopedia.
The study analyzed Wikipedia articles by comparing their content to outputs generated by ChatGPT on the same subjects. Articles deemed "similar" – those offering factual overviews or summaries that align with ChatGPT's responses – experienced a noticeable drop in page views post-launch. For instance, topics like basic science concepts or historical facts, which ChatGPT can handle efficiently, saw reduced readership, while niche or complex entries remained largely unaffected. This "heterogeneous effect" suggests users are strategically choosing tools based on the task: opting for ChatGPT's interactive style for simple queries but sticking with Wikipedia for deeper, less AI-replicable content.
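To make the "similar vs. distinct" classification concrete, here is a minimal sketch of one way such a comparison could be operationalized: score the overlap between an article's lead section and an LLM-generated answer on the same topic, then apply a cutoff. The bag-of-words cosine measure, the example texts, and the threshold are all illustrative assumptions; the study's actual methodology is more sophisticated than this.

```python
import math
import re
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity between two texts, in [0.0, 1.0]."""
    tokens_a = Counter(re.findall(r"[a-z']+", text_a.lower()))
    tokens_b = Counter(re.findall(r"[a-z']+", text_b.lower()))
    shared = set(tokens_a) & set(tokens_b)
    dot = sum(tokens_a[t] * tokens_b[t] for t in shared)
    norm_a = math.sqrt(sum(c * c for c in tokens_a.values()))
    norm_b = math.sqrt(sum(c * c for c in tokens_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical article lead vs. an LLM-style answer on the same topic.
wiki_lead = ("Photosynthesis is the process by which plants convert "
             "light energy into chemical energy.")
llm_answer = ("Photosynthesis is the process plants use to turn "
              "light energy into chemical energy.")

SIMILARITY_THRESHOLD = 0.5  # illustrative cutoff, not the study's actual value
score = cosine_similarity(wiki_lead, llm_answer)
label = "similar" if score >= SIMILARITY_THRESHOLD else "distinct"
print(f"{score:.2f} -> {label}")
```

Under a scheme like this, short factual overviews score high against an LLM's answer and land in the "similar" bucket, while long, niche, or heavily sourced entries score low and stay "distinct".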
Editing activity showed a milder decline in these similar articles, though the data was less conclusive. Researchers noted that while contributions didn't plummet across the board, the trend raises alarms about Wikipedia's sustainability as a volunteer-driven platform. "If people stop engaging with and contributing to Wikipedia, its knowledge base could weaken over time," said Hannah Li, a lead author from Columbia Business School. Broader analyses add nuance: some studies find no overall drop in Wikipedia metrics like page views or unique visitors after ChatGPT's debut, but a slower growth rate in languages where the AI is available.
Interestingly, ChatGPT's web traffic has surged dramatically, overtaking Wikipedia's monthly visits by mid-2025, reaching 5.5 billion in June compared to Wikipedia's declining figures. This shift underscores how AI is reshaping online information habits, with users favoring conversational interfaces over traditional encyclopedic formats.
Broader Implications for Human-AI Interaction
The research extends beyond Wikipedia, spotlighting emerging patterns in hybrid human-AI decision-making. In fields like education, healthcare, and business, people are blending AI suggestions with human judgment, but the study warns of risks if reliance on tools like ChatGPT erodes traditional knowledge sources. For example, AI can flag risks or provide insights, yet humans must calibrate their trust to avoid overdependence. Experts emphasize the need for explainable AI systems that integrate diverse knowledge bases to foster transparent collaboration.
Long-Term Effects on Building Future LLM Models
If this trend persists, it could have profound consequences for developing future LLMs, which heavily rely on Wikipedia as a training data source. A decline in human contributions – driven by reduced visibility and engagement – might lead to outdated or less diverse Wikipedia content, degrading the quality of data used to train AI models. This creates a feedback loop: as users bypass Wikipedia for AI, fewer edits occur, potentially causing "model collapse," where LLMs trained on increasingly AI-influenced or stagnant data produce lower-quality outputs over generations.
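The feedback loop described above can be illustrated with a toy simulation: each generation, a growing share of new text is model-generated, and the model's quality tracks the corpus it was trained on, with a small penalty on synthetic text. All of the numbers (initial synthetic share, growth rate, penalty) are invented for illustration and carry no empirical weight; the point is only the qualitative shape of the loop.

```python
def simulate(generations: int,
             initial_fraction: float = 0.3,
             growth: float = 0.1,
             penalty: float = 0.9) -> list[float]:
    """Toy model of the Wikipedia-LLM feedback loop.

    Corpus quality starts at the human baseline of 1.0. Each generation:
      - the model inherits the quality of the current corpus,
      - its synthetic output is slightly degraded (penalty < 1),
      - the next corpus blends remaining human text with synthetic text,
      - the synthetic fraction grows as human contributions decline.
    """
    quality = 1.0
    fraction = initial_fraction
    history = []
    for _ in range(generations):
        synthetic_quality = quality * penalty
        quality = (1 - fraction) * 1.0 + fraction * synthetic_quality
        fraction = min(1.0, fraction + growth)
        history.append(round(quality, 4))
    return history

print(simulate(5))
```

With these assumptions, corpus quality declines generation over generation, and once the synthetic fraction saturates, quality decays geometrically, a crude caricature of the "model collapse" dynamic rather than a prediction of it.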
Simulations in related research show that if Wikipedia incorporates more LLM-generated content, it could inflate benchmarks for tasks like machine translation while reducing effectiveness in applications like retrieval-augmented generation (RAG). Experts warn that without sustained human input, Wikipedia's role as a "backbone of the internet's factual infrastructure" could erode, indirectly harming the very AIs that draw from it. To mitigate this, some suggest Wikipedia should adapt by positioning itself as essential for AI training, encouraging contributions through targeted campaigns and AI-assisted tools that support rather than replace editors.