Plato Was Right: Why AI Still Can’t Think Like a Human

AI promised a revolution—but CEOs are discovering a harsh truth: After billions spent, robots still can’t read a clock. Here’s why the ROI is missing… and what Plato knew 2,400 years ago.
Amid claims about the transformative impact of artificial intelligence (AI) on business, dark forecasts of impending job losses, the erasure of innumerable roles, and a Singularity said to be just months away, a harsh reality check is hitting the CXOs of major organizations. Companies that have spent billions on AI projects and infrastructure, and that have been shedding staff in anticipation of productivity and efficiency gains from the technology, are now facing restive stakeholders who demand tangible returns. The money isn't materializing.
AI to Increase Annual Productivity by Only 0.05%
Realizing AI's full value remains elusive. Even with the widespread implementation of AI programs across industries, only 26% of companies have developed the set of capabilities necessary to move beyond proofs of concept and generate tangible value, according to new research by Boston Consulting Group (BCG). The Nobel Prize-winning MIT economist Daron Acemoglu, in seminal research published in late 2024, sounded an alarm: AI will produce a "modest increase" in GDP of between 1.1 and 1.6 percent over the next 10 years, with roughly a 0.05 percent annual gain in productivity. This is also an opportunity to identify the critical tasks that AI can be taught in order to deliver real-world business transformation. Humans continue to excel at navigating complex challenges that AI stumbles over.
How Do We Know What We Know?
The answer to this productivity paradox, in which returns in productivity and efficiency fail to rise in proportion to investments in technology, can be traced back to the Greek philosopher Plato, who around 369 BC asked the profound question: "How do we know what we know?" In one of his famous dialogues, Plato theorized about the relationship between knowledge and experience and explained how it is possible to know something that one has never been explicitly taught. He believed that we possess innate ideas that precede any knowledge we gain through experience. AI, on the other hand, does not have this tacit knowledge, and therein lies the challenge. AI's knowledge is like a library without a librarian: full of information, but with no intuition to navigate it.
Machines are still struggling to learn the way humans learn, despite mastering the game of Go (via AlphaGo) several years ago. Thousands of years ago, philosophers grasped a fundamental truth that bears on this gap. Plato's theory is that we are born with knowledge of perfect and eternal ideas (he called these "Forms"), so learning is simply "recollecting" what our souls already knew.
AI Still Can’t Read Clocks
New research from the University of Edinburgh has revealed that AI stumbles over tasks most humans can do with ease, such as reading an analogue clock or figuring out the day of the week on which a date will fall. AI may be able to write code, generate lifelike images, create human-sounding text and even pass exams (with varying degrees of success), yet it routinely misinterprets the position of hands on everyday clocks and fails at the basic arithmetic needed for calendar dates.
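The irony is that the calendar arithmetic these models fumble is fully deterministic and trivially mechanizable. As a minimal illustrative sketch (the function name is our own, not from the Edinburgh study), Python's standard library resolves a date to a weekday in one call:

```python
from datetime import date

def weekday_name(year: int, month: int, day: int) -> str:
    """Return the English weekday name for a calendar date.

    date.weekday() encodes Monday as 0 through Sunday as 6,
    so a simple lookup table completes the conversion.
    """
    names = ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]
    return names[date(year, month, day).weekday()]

print(weekday_name(2024, 12, 25))  # Christmas 2024 fell on a Wednesday
print(weekday_name(2000, 1, 1))    # 1 January 2000 was a Saturday
```

The point is not that AI lacks access to such routines, but that a language model reasoning over text, rather than executing a rule, can still get this wrong.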
After all the hype over AI, the value is hard to find. CEOs have authorized investments, hired talent, and launched pilots—but only 22% of companies have advanced beyond the proof-of-concept stage to generate some value, and only 4% are creating substantial value, according to new BCG research.
Humans possess a vast repository of tacit knowledge of how the world works that often exceeds our explicit understanding. This gap accounts for many of the challenges of computerization and automation over the past five decades.
Automation requires exact, explicit instructions for computers, yet tacit knowledge cannot be conveyed propositionally; this is why machines so often fail.
Machines fall short in many cases: they hold explicit knowledge (raw data) but do not know how to use it to understand the task as a whole. This discrepancy between human reasoning and AI learning algorithms makes it difficult to automate tasks that demand common sense, flexibility, adaptability and judgment: in short, human intuitive knowledge.
Humans Excel at Navigating Complex Challenges
1. AI’s ROI Remains Elusive Despite Massive Investments
Despite billions spent on AI initiatives, only 4% of companies see substantial returns, with most stuck in pilot stages. The gap between hype and real-world value exposes a productivity paradox—where tech investments aren’t translating to measurable efficiency gains.
2. AI Fails at “Human” Tasks—Because It Lacks Tacit Knowledge
AI excels in narrow domains (coding, data crunching) but stumbles on basic human tasks (reading clocks, calendar math) due to its inability to replicate innate, experiential knowledge. Plato’s ancient insight—that true understanding requires more than data—explains why automation struggles with judgment and adaptability.
3. Deployment Challenges Are Systemic, Not Just Technical
Real-world AI adoption faces recurring roadblocks: poor data quality, regulatory hurdles, and misaligned organizational processes. Fixes require both tools and cultural shifts—proving that success depends on rethinking how businesses integrate AI, not just the tech itself.
In a research paper titled "Challenges in Deploying Machine Learning: a Survey of Case Studies", Andrei Paleyes, Raoul-Gabriel Urma, and Neil D. Lawrence of the University of Cambridge, United Kingdom, underscore that deploying ML in real-world settings is not merely a matter of choosing a learning algorithm and hitting "train"; it requires navigating complex challenges at every step of the pipeline, from sourcing and curating messy, distributed data to ensuring models remain reliable, fair, and secure once embedded in production systems.
By systematically curating case studies across domains, the authors highlight that many obstacles—poor data discoverability, lack of high-variance labeled data, the high economic and environmental costs of training, regulatory compliance, and evolving user expectations—are not isolated to niche scenarios but recur across industries.
Crucially, the paper distinguishes between "tool-based" fixes (e.g., managed ML platforms, automated data-drift detectors) and "holistic" shifts in architecture and process (e.g., Data-Oriented Architectures, integrated versioning for datasets and models), arguing that lasting improvements demand both focused tooling and a fundamental realignment of how organizations treat data and model artifacts.

One of the major reasons ROI from AI investments remains a challenge is AI's inability to learn the way humans do. Despite AI's prowess in narrow domains (coding, translation, high-level reasoning), it still falters on "everyday" tasks that humans perform almost reflexively: closing a pop-up, reading a clock, resolving simple text ambiguities.
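To make the "tool-based" end of that spectrum concrete: an automated data-drift detector need not be exotic. At its core it compares live feature values against a training-time baseline. The sketch below is a deliberately simple stand-in (the function name and threshold are our own assumptions, not from the Cambridge paper); production systems use richer statistical tests.

```python
import statistics

def mean_shift_drift(reference: list[float], live: list[float],
                     threshold: float = 3.0) -> bool:
    """Flag drift when the live mean departs from the reference mean
    by more than `threshold` reference standard deviations."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    if ref_std == 0:
        return statistics.mean(live) != ref_mean
    z = abs(statistics.mean(live) - ref_mean) / ref_std
    return z > threshold

reference = [10.0, 11.0, 9.5, 10.5, 10.0]   # feature values at training time
stable    = [10.2, 9.8, 10.1, 10.4, 9.9]    # similar distribution: no drift
shifted   = [25.0, 26.5, 24.8, 25.3, 26.0]  # distribution has moved: drift
print(mean_shift_drift(reference, stable))   # False
print(mean_shift_drift(reference, shifted))  # True
```

Even a check this crude illustrates the paper's point: the tooling is easy, while the holistic part, deciding who owns the baseline, what happens when drift fires, and how the model is retrained, is where organizations struggle.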