Perplexity CEO Sounds a Wake-Up Call: Why Human Judgment Still Shapes the Future
Perplexity CEO Aravind Srinivas warns that cloud-based AI and costly data centres may lose relevance as intelligence shifts to on-device systems.
In a broad discussion with tech writer and entrepreneur Prakhar Gupta, Srinivas laid out a vision of AI that diverges sharply from today’s expectations. While many in the industry are racing to build ever-larger cloud-based systems backed by multi-billion-dollar data centres, he believes that future breakthroughs will come from on-device AI: small, energy-efficient, personalised models that run locally on phones and laptops, without requiring constant server connections.
At the same time, Srinivas emphasised a boundary that, in his view, AI still cannot cross. Machines may solve problems, but they do not decide what is worth solving. That human spark of curiosity and judgement, he argues, remains uniquely human — and central to guiding AI’s real-world impact.
AI Can Help, But Humans Decide What Matters
Srinivas opened with a sharp distinction: “AI could help humans solve an existing problem, but it is very different from AI solving it autonomously,” he said. For now, AI systems excel at optimisation and pattern matching, but they do not initiate questions or determine priorities on their own. The reason, he argued, is that machines lack curiosity; humans have it. People decide which questions are worth asking, whether in science, philosophy, or business. AI can help answer those questions, but it does not decide what matters in the first place.
Srinivas told Gupta that this human edge will persist even as AI becomes more capable. “Did AI pose a question and try to go solve it? No,” he noted. “It’s the human who identifies the problem in the first place.”
At a time when many organisations are turning to AI to automate workflows and analyse complex systems, this distinction matters. Machines may reshape how work gets done — but humans will still set the agenda.
How AI Is Moving Closer to the User
Perhaps the most striking part of Srinivas’s remarks was his take on infrastructure. Today’s AI landscape is dominated by massive data centres: high-performance, energy-intensive facilities run by tech giants that host powerful models and process massive workloads. But Srinivas warns those centres may not be the long-term future of AI.

“The biggest threat to a data centre,” he said, “is if the intelligence can be packed locally on a chip that’s running on the device.” That would eliminate the need to perform inference — the step where a model interprets input and generates output — on remote servers.
In practical terms, this means powerful language models and other AI systems could run directly on smartphones, laptops, wearables, and even Internet-of-Things devices. Not only could this reduce latency and energy use, it would also shift control of data and intelligence back to users themselves.
Much of today’s AI relies on centralised computing power partly because local hardware has lacked the necessary performance. But advances in specialised AI chips — notably from companies like Nvidia, Qualcomm, Apple and other silicon designers — are quickly narrowing that gap. When models can operate efficiently “on-device,” companies won’t need to funnel every query through costly cloud systems.
Privacy, Speed and Personalised AI
According to Srinivas, the benefits of on-device AI extend beyond cost savings for tech companies. Because data stays on the user’s own hardware, privacy could be significantly improved. Users wouldn’t have to send their personal data to faraway servers anymore. Things like messages, files, or search history would stay on the phone or laptop itself, cutting down the chances of that data being leaked, hacked, or used without permission.

Srinivas also talked about a future where your own device carries what he calls a “digital brain”. In simple terms, it would be an assistant that learns how you work — the apps you use, the tasks you repeat, and the way you like things done — without uploading that information to the internet. Over time, it would feel less like a generic tool and more like something built around you.
Think of it as an assistant that knows your daily routine. It remembers what you usually ask, spots patterns in your work, and helps without needing constant instructions. This isn’t just a quicker version of Siri or Google Assistant. It’s software that lives on your device, adjusts as you do, and keeps your data with you instead of on someone else’s servers.
Running AI directly on the device would also make it much faster. There’s no waiting for a request to travel to a remote data centre and back. The response happens instantly on the same machine you’re using. That speed matters for things like live translation, voice commands, or analysing data on the fly, where even small delays can break the experience.
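A back-of-the-envelope sketch makes the latency point concrete. All of the figures below are illustrative assumptions, not measurements from the interview:

```python
# Illustrative latency comparison for cloud vs on-device inference.
# The millisecond figures are assumptions chosen for the sketch,
# not benchmarks: a remote request pays a network round trip that
# a local model never incurs.

def response_time_ms(network_rtt_ms: float, inference_ms: float) -> float:
    """Total time the user waits: network round trip plus model inference."""
    return network_rtt_ms + inference_ms

# Assumed figures: ~100 ms mobile-network round trip to a remote data
# centre running fast server hardware, versus zero network travel for a
# somewhat slower model running on the device itself.
cloud = response_time_ms(network_rtt_ms=100, inference_ms=50)
local = response_time_ms(network_rtt_ms=0, inference_ms=80)

print(f"cloud: {cloud} ms, on-device: {local} ms")
```

Even when the local chip is slower at raw inference, removing the network hop can still make the on-device path the faster one — which is why small delays in live translation or voice commands favour local execution.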
Rethinking Centralised AI Infrastructure
Srinivas’s remarks challenge the assumption that the future of AI lies with ever-larger, more expensive data centres — a key strategic bet of companies like Google, Meta and Microsoft. These firms are spending heavily on infrastructure to train and operate large models that power everything from search to virtual assistants.

If advanced AI can run locally, however, the economic and technological rationale for centralised computing could weaken. The shift could lead to a more decentralised AI ecosystem, lowering barriers for smaller organisations and individual developers to build advanced systems without depending on cloud platforms.
Still, Srinivas acknowledged that this future has not yet arrived. To date, no widely available model combines high performance with the energy efficiency and compact size needed to run entirely on personal devices. When that breakthrough happens, he said, the industry will be in for a “very interesting” transformation.
Human Brains vs. Machine Efficiency
Srinivas also pointed to a basic but striking comparison: how little power the human brain uses compared with modern AI systems. A human brain runs on about 20 watts — roughly the same as a small light bulb — yet it can handle complex thinking, learning, and decision-making. By contrast, the data centres that run today’s AI models consume enormous amounts of electricity, often measured in megawatts.
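A rough calculation shows the scale of that gap. Taking a hypothetical 10 MW facility (the article says only “megawatts”; the exact figure here is an assumption) against the brain’s roughly 20 watts:

```python
# Back-of-the-envelope power comparison using the figures cited above.
# The 10 MW data-centre figure is an assumed example; real facilities
# vary widely in size.

BRAIN_WATTS = 20              # roughly a small light bulb
DATA_CENTRE_WATTS = 10e6      # an assumed 10 MW facility

ratio = DATA_CENTRE_WATTS / BRAIN_WATTS
print(f"A 10 MW data centre draws as much power as {ratio:,.0f} human brains.")
```

Under these assumptions, a single mid-sized data centre draws as much power as half a million human brains — a vivid way of framing the efficiency gap Srinivas points to.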
That gap highlights more than just a hardware problem. It reflects deeper differences between how humans and machines think. Even the most advanced AI systems don’t have intuition, emotional understanding, or a natural sense of context. Humans rely on those traits every day to decide what actually matters — something machines still can’t do on their own.
The Future of Work and Human-AI Collaboration
Looking ahead, he also said that personalised AI could change how people work and learn, much like smartphones once did. Just as phones put computing power into everyday life, AI that runs directly on devices could give millions — even billions — of people access to advanced tools without needing expensive infrastructure.
That shift could make powerful technology easier to access, narrowing the gap between large organisations and individuals, and between richer and poorer markets. Still, Srinivas was clear on one point: humans will stay in charge. People will continue to decide which problems are worth tackling — and how AI should be used to solve them.
Core Findings
Aravind Srinivas underlined that AI excels at solving defined problems but cannot autonomously identify what problems matter, a distinctly human capacity.
He warned that on-device AI could challenge the dominance of centralised data centres, shifting infrastructure and control back to individuals’ devices.
Localised intelligence could boost privacy, speed, and personalisation, creating “digital brains” tailored to individual users.
The power balance of humans and machines will depend not just on technical capability, but on how people decide what matters, Srinivas said.