Text, Truth, & Turing: AI’s Creative Revolution & Questions on Reasoning

A hand reaching towards glowing “AI” letters, symbolizing technology and innovation.

AI models can churn out dazzling poetry and news stories, but is this real creativity—or just remixing the past? As AI-generated text and art flood our media landscape, the boundaries of creativity, journalism, and truth face unprecedented challenges. What happens when machines “imitate” so well that the line between authentic journalism and automated output begins to blur?

  • AI creativity is a remix, not reinvention: Large language models generate content by piecing together linguistic patterns from massive datasets. While impressive, this is not true human creativity—it lacks intentionality and self-reflection, sparking debates over whether AI “creates” or simply “extrudes” text.
  • AI transforms, but does not replace, journalism: AI tools are revolutionizing newsrooms by boosting efficiency in tasks like reporting, fact-checking, and data analysis. However, these tools raise profound concerns about job displacement, ethical standards, bias, and the erosion of investigative depth—core pillars for holding power accountable.
  • The Turing Test is showing its age: Although conversational AIs are more convincing than ever, readers increasingly spot AI-generated output, highlighting the limitations of Turing’s decades-old “Imitation Game.” The test equates surface-level coherence with thinking, exposing anthropocentric biases and missing the deeper dimensions of human consciousness and creativity.
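The “remix, not reinvention” claim can be made concrete with a toy sketch. Production LLMs use transformer networks, not the simple Markov chain below, but the sketch illustrates the underlying point: every word the generator emits was observed, in context, somewhere in its training text. The corpus and function names here are hypothetical, chosen for illustration only.

```python
import random
from collections import defaultdict

def build_model(text, n=2):
    """Map each n-word prefix to the words that followed it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n):
        model[tuple(words[i:i + n])].append(words[i + n])
    return model

def generate(model, length=12, seed=0):
    """Emit text by repeatedly sampling a continuation for the current prefix.
    Note: every emitted word comes straight from the training corpus."""
    rng = random.Random(seed)
    prefix = rng.choice(list(model))
    out = list(prefix)
    for _ in range(length):
        choices = model.get(tuple(out[-len(prefix):]))
        if not choices:
            break  # dead end: this prefix never continued in the corpus
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("the machine imitates the human and the human imitates the machine "
          "until the reader cannot tell the machine from the human")
model = build_model(corpus)
print(generate(model))
```

However fluent the output, the generator cannot produce a word its corpus never contained, which is the intuition behind calling such systems “remixers” rather than inventors.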

Large Language Models (LLMs) have dazzled us with their ability to produce poetry, write polished emails, analyze scientific research papers, and create paintings mimicking the style of every known artist. Yet questions keep surfacing: are these tools actually thinking and reasoning, or merely ‘extruding’ text, remixing the past works of creative professionals and passing the result off as art?

As we progress along the path of Artificial Intelligence (AI) generated creative output, these questions become highly relevant and are germane to the future trajectory of human creativity. What does it mean for journalistic inquiry, which is supposed to hold power accountable, when these text-extruding tools infiltrate journalism?

Skeptics now question whether stringing together word patterns calculated from massive amounts of training data can be described as journalism, or whether remixing past artistic works can be passed off as creativity. In their latest book ‘The AI Con’, Emily Bender and Alex Hanna label these AI-generated works a ‘kind of pollution’ when synthetic text and art spill over into the creative ecosystem.

Is AI Failing the Turing Test?

The other intriguing aspect is that readers and viewers can now, to a very large extent, identify content that has been generated by AI. Is AI, therefore, failing the Turing test? In the October 1950 issue of the British quarterly Mind, Alan Turing published a 28-page paper titled “Computing Machinery and Intelligence.” It was recognized almost instantly as a landmark. What captured our imagination in that paper was Turing’s proposed test for determining whether a computer is thinking: an experiment he called the Imitation Game, but which is now known as the Turing Test.

The test calls for an interrogator to question a hidden entity, which is either a computer or a human being. The questioner must then decide, based solely on the hidden entity’s answers, whether the respondent is a person or a machine. If the interrogator cannot distinguish computers from humans any better than he can distinguish, say, men from women by the same means of interrogation, then we have no good reason to deny that the computer that deceived him was thinking. And the only way a computer could imitate a human being that successfully, Turing implies, would be to actually think like one.
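The protocol Turing describes can be sketched schematically. The respondents and the judge below are hypothetical stand-ins (canned functions for illustration only), not real interlocutors; the point is the structure of the game: the judge sees only answers, never the answerer.

```python
import random

def imitation_game(interrogator, respondents, rounds=5, seed=42):
    """Schematic Turing-test protocol: each round, a hidden respondent is
    chosen at random; the interrogator must guess, from answers alone,
    whether it is a human or a machine. Returns the judge's accuracy."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        kind, answer = rng.choice(respondents)  # identity stays hidden
        transcript = [answer(q) for q in ("What is poetry?", "Do you dream?")]
        guess = interrogator(transcript)        # must be 'human' or 'machine'
        correct += (guess == kind)
    return correct / rounds  # near 0.5 means indistinguishable

# Hypothetical stand-ins: both give the same canned reply, so no transcript
# can separate them and the judge is reduced to guessing.
human = ("human", lambda q: "It depends on how you feel about it.")
machine = ("machine", lambda q: "It depends on how you feel about it.")
judge = lambda transcript: "human"  # a judge with a fixed guess

print(imitation_game(judge, [human, machine]))
```

The sketch makes Turing’s operational move visible: “thinking” is judged purely by the interrogator’s accuracy over transcripts, which is exactly the premise the critiques below take aim at.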

However, what we find today is that people are able to identify AI-generated responses even when they are camouflaged as human-created. Turing’s test is now being critiqued on the grounds that the Imitation Game rests on an unproven premise, equating conversational competence with thinking, while ignoring humanity’s inherent anthropocentric bias.

Unlike machines, humans are granted “thinking” status by default, based on biological membership (Homo sapiens) rather than performance: even incoherent humans retain this attribution, explained as impairment rather than non-thinking. This exposes the test’s circular logic: it demands that machines mimic human conversational patterns to earn recognition as thinkers, reinforcing the very bias that ties cognition to human form and interaction. By prioritizing surface-level imitation over deeper philosophical rigor, the test inadvertently upholds the superficial criteria it claims to transcend.

When the question of whether AI can really think is put to Perplexity, a popular LLM tool, the response is rather striking: ‘AI’s abilities are rooted in pattern recognition, statistical analysis, and vast language data, rather than conscious awareness or subjective understanding. AI follows logic chains, makes inferences, and adapts to context at scale, but it lacks true self-awareness, intentionality, or understanding of meaning as humans experience it.’

AI’s Elusive Productivity Gains

The AI-driven productivity gains touted by techno-optimists are also being critiqued today. The Silicon Valley ethos of building machines that “think” and outcompete humans shapes most AI research. Economist Daron Acemoglu calls this the “Turing vision”: an obsessive chase for autonomy and cost-cutting. But the empirical record shows that technologies designed to replace people sap job growth and depress wages outside the elite tech echelons. In U.S. manufacturing, every robot introduced wiped out an average of 6.2 regional jobs, per MIT research. More worryingly, the World Bank estimates that 85% of jobs in Africa and 77% in China are highly susceptible to automation’s axe.

Nevertheless, according to McKinsey, the long-term potential of AI is great but the short-term returns are unclear. Over the next three years, 92 percent of companies plan to increase their AI investments. Yet while nearly all companies are investing in AI, only 1 percent of leaders call their companies “mature” on the deployment spectrum, meaning that AI is fully integrated into workflows and drives substantial business outcomes. In the meantime, the promise of AI will be tested by a rigorous dose of skepticism and pragmatism.
