We are entering the era of agentic AI, where machines do not just generate answers but make decisions and take actions to achieve goals. Let’s find out how.
Over the past fifty years, computing went through five distinct epochs. Each transformed how humans interacted with technology, created new leaders, and reshaped industries. These changes were not gradual but sharp turning points.
In the early 1980s, personal computing put processing power on individual desks. For the first time, a knowledge worker could directly manipulate data, create documents, and run applications without waiting for a centralised mainframe team to allocate resources.
The internet revolution of the 1990s broke the physical isolation of these machines, linking them into a global fabric of information, commerce, and collaboration. Borders blurred, and markets globalised overnight.
In the early 2000s, cloud computing changed the economics of IT. Companies no longer had to purchase and maintain massive infrastructure. Computing became elastic; you could spin up a data centre’s worth of capability in minutes and pay only for what you used.
By the 2010s, mobile computing had shifted the centre of gravity again. The smartphone was no longer just a phone — it was a powerful, connected computer in the pocket of nearly every adult on Earth, changing everything from banking and entertainment to public health monitoring.
And now, in the 2020s, we are in the midst of the generative and agentic AI revolution. This is the first computing paradigm where the machine is not simply executing pre-defined instructions from a human programmer, but is capable of learning, reasoning, and acting on its own to achieve goals.
Why is this shift different?
Previous computing shifts were primarily about increasing capability: more speed, more storage, more connectivity. This shift is about autonomy.
In the past, a machine’s value was limited by how quickly and accurately a human could tell it what to do. Generative AI has broken that bottleneck. An AI system can now interpret intent from natural language, use external tools, access data, and decide the best course of action to achieve an outcome.
The leap from generative to agentic AI is especially important. A generative AI system may be able to produce a detailed report if you ask for one. But an agentic AI could:
- Search for the latest market and regulatory data.
- Analyse trends and run simulations.
- Prepare the report with visualisations.
- Distribute it to relevant stakeholders.
- Monitor feedback and update the analysis as conditions change.
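The steps above can be sketched as a single pass of an agentic loop. The tool functions below (fetch_market_data, analyse, and so on) are hypothetical placeholders standing in for real searches, models, and delivery channels; this is an illustration of the pattern, not a production agent.

```python
# Minimal sketch of an agentic workflow: search, analyse, report,
# distribute. Every tool here is a stand-in for a real external call.

def fetch_market_data():
    # Placeholder for a live market/regulatory data search.
    return {"growth": 0.12, "regulation": "updated"}

def analyse(data):
    # Placeholder for trend analysis and simulations.
    return "expand" if data["growth"] > 0.10 else "hold"

def prepare_report(finding):
    return f"Recommendation: {finding}"

def distribute(report, stakeholders):
    # Placeholder for email/chat delivery to stakeholders.
    return [f"sent to {s}" for s in stakeholders]

def run_agent(stakeholders):
    """One cycle of the search -> analyse -> report -> distribute loop."""
    data = fetch_market_data()
    finding = analyse(data)
    report = prepare_report(finding)
    receipts = distribute(report, stakeholders)
    return report, receipts

report, receipts = run_agent(["finance", "strategy"])
print(report)  # Recommendation: expand
```

A real agent would wrap this cycle in a monitoring loop, re-running the analysis as conditions change, which is exactly the last step in the list above.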
We have moved from a ‘type-and-reply’ relationship to one where AI behaves like a junior colleague, one who can research, draft, and even take follow-up action without constant prompting.
India’s AI opportunity
India is uniquely positioned in this global AI moment. We have digital public infrastructure in Aadhaar, UPI, and the India Stack, which has proven capable of scaling to over a billion people. We have a young, technically skilled workforce. And we have a market large enough to be self-sustaining while still being globally competitive.
However, AI adoption here is playing out at two speeds.
The government track
The public sector has moved decisively. The national AI strategy has been laid out. Mission-mode projects in agriculture, healthcare, and education are underway. State-level initiatives are exploring AI for crop yield forecasting, traffic management, and multilingual citizen services.
The enterprise track
Many private sector companies are still cautious. Proof-of-concept projects abound, but full-scale production deployments are rare. Concerns about cost, skill gaps, integration complexity, and unclear ROI slow momentum.
Caution in this era is a competitive disadvantage. AI-native companies can innovate faster, operate leaner, and personalise at scale. Once those dynamics set in, catching up with them is exponentially harder.
The semiconductor foundation
Every AI breakthrough you read about, be it ChatGPT generating coherent essays, Stable Diffusion creating photorealistic art, or DeepMind solving protein folding, rests on advances in semiconductor technology.
The journey began with general-purpose CPUs in the early days. Over time, the need for highly parallel processing, particularly for graphics and AI workloads, led to the rise of GPUs. Today, specialised AI accelerators handle the massive tensor operations required for training large neural networks.
Intel’s strategy is not to champion one architecture over another, but to enable AI everywhere:
- CPUs optimised with Advanced Vector Extensions (AVX-512) and Advanced Matrix Extensions (AMX) for AI workloads.
- Discrete GPUs designed for deep learning.
- Dedicated AI accelerators for high-efficiency inference.
Coupled with software like OpenVINO, Intel makes it possible to run AI on everything from a low-power edge device in a rural clinic to a hyperscale data centre handling petabytes of training data.
Just as a Formula 1 driver’s skill (the AI algorithms) is limited by the engine’s capabilities (the hardware), the next leaps in AI performance will come as much from semiconductor innovation as from algorithmic advances.
From generative to agentic AI: Real-world examples
A generative AI tool in retail can produce product descriptions for an e-commerce site based on bullet points from the merchandising team.
An agentic AI, in contrast, can:
- Pull sales data from the last quarter.
- Identify which product categories need a marketing push.
- Create tailored descriptions for those products.
- Generate accompanying social media creatives.
- Schedule posts based on historical engagement data.
- Monitor performance and adjust the campaign in real time.
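The first three steps of that retail workflow can be illustrated with a small sketch. The sales figures, threshold, and helper names here are invented for illustration; a real system would pull the data from an analytics platform and hand the copywriting to a generative model.

```python
# Illustrative sketch: the agent inspects last quarter's sales,
# picks categories that need a marketing push, and drafts copy.
# All data and function names are hypothetical.

SALES = {"shoes": 120, "bags": 30, "hats": 45}  # units sold last quarter
THRESHOLD = 50                                  # below this needs a push

def categories_needing_push(sales, threshold):
    return [c for c, units in sales.items() if units < threshold]

def draft_description(category):
    # Stand-in for a generative model producing tailored copy.
    return f"New season {category}: limited stock, order today."

def plan_campaign(sales):
    targets = categories_needing_push(sales, THRESHOLD)
    return {c: draft_description(c) for c in targets}

campaign = plan_campaign(SALES)
for category, copy in campaign.items():
    print(category, "->", copy)
```

The later steps, scheduling, monitoring, and real-time adjustment, would wrap this planning step in a feedback loop driven by engagement data.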
In manufacturing, generative AI may draft a maintenance manual. Agentic AI could monitor sensor data across a factory, predict equipment failures, schedule repairs, order replacement parts, and update manuals dynamically.
Key Indian AI use cases emerging today
Agriculture
Satellite imagery, IoT sensors, and weather models combine through AI to predict yields, detect pest infestations early, and advise farmers on optimal planting schedules.
Healthcare
AI-assisted diagnostics; telemedicine platforms that triage patients based on urgency; and personalised treatment recommendations using patient history.
Education
Adaptive learning platforms that adjust difficulty level based on student progress, available in multiple Indian languages.
Public services
Automatic translation of government communications into 22 official languages; AI chatbots handling citizen queries at scale.
Each of these not only improves efficiency but extends reach, which is critical in a country of 1.4 billion where resources are often stretched.
The role of open source in AI democratisation
The future of AI cannot be monopolised by a handful of tech giants. Open source frameworks and tools ensure that innovation is distributed, talent pipelines are broad, and localised solutions can emerge.
Intel contributes actively to open source AI ecosystems, optimising frameworks like PyTorch and TensorFlow and integrating with model repositories like Hugging Face. The company’s OpenVINO toolkit allows developers to deploy models across CPUs, GPUs, and accelerators without rewriting code.
Through techniques like model quantisation and pruning, Intel makes it possible for powerful AI models to run on modest hardware. This is essential for rural, edge, and embedded applications where high-end GPUs are impractical.
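The core idea of quantisation can be shown in a few lines. The sketch below implements simple symmetric int8 quantisation in plain Python; it illustrates the arithmetic behind compression flows such as OpenVINO’s, but it is not the actual Intel API.

```python
# Post-training int8 quantisation, illustrated: map float weights
# onto the int8 range [-127, 127] with a single symmetric scale.
# Storage drops 4x versus float32, at a small precision cost.

def quantise_int8(weights):
    """Return int8-range values plus the scale needed to restore them."""
    scale = max(abs(w) for w in weights) / 127.0
    quantised = [round(w / scale) for w in weights]
    return quantised, scale

def dequantise(quantised, scale):
    return [q * scale for q in quantised]

weights = [0.02, -0.54, 1.27, -1.0]
q, scale = quantise_int8(weights)
restored = dequantise(q, scale)
# Each restored weight differs from the original by at most scale/2,
# which is why well-quantised models lose very little accuracy.
```

Pruning is the complementary technique: weights near zero are dropped entirely, shrinking the model further before or after quantisation.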
Security, safety, and trust
As AI systems become more capable, the stakes of misuse, whether intentional or accidental, rise. Intel addresses this in multiple layers.
Hardware root of trust
Secure boot and encryption to protect data even while in use.
Confidential computing
Workloads are isolated in secure enclaves, shielding sensitive data.
Bias detection
Tools to evaluate and reduce unfairness in model outputs.
Auditability
Governance frameworks to track decisions and ensure compliance with laws like India’s Digital Personal Data Protection Act.
The skills shift
Building AI applications, especially agentic ones, requires a different skillset from traditional software engineering. Beyond prompt engineering, teams must master:
Orchestration
Coordinating multiple models and APIs to achieve complex goals.
Memory management
Maintaining context across tasks and time.
Tool integration
Allowing AI to call external systems safely.
Continuous feedback
Iteratively improving models based on user and system feedback.
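The “tool integration” skill above, letting AI call external systems safely, often comes down to an allow-list: the agent may only invoke tools that were explicitly registered. A minimal sketch, with an invented example tool, might look like this.

```python
# Sketch of safe tool integration: an agent can only call tools
# that a developer has explicitly registered. The tool itself
# (convert_currency) is a hypothetical example.

TOOLS = {}

def register(name):
    """Decorator that adds a function to the agent's allow-list."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register("convert_currency")
def convert_currency(amount, rate):
    return round(amount * rate, 2)

def call_tool(name, **kwargs):
    # Any tool not on the allow-list is refused outright.
    if name not in TOOLS:
        raise PermissionError(f"tool '{name}' is not allowed")
    return TOOLS[name](**kwargs)

print(call_tool("convert_currency", amount=100, rate=83.2))
```

Orchestration, memory, and feedback build on the same foundation: the orchestrator decides which registered tool to call next, and the results feed back into the agent’s context.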
The end of SaaS as we know it
As Satya Nadella put it: “SaaS is dead.”
The static, one-size-fits-all nature of traditional SaaS is giving way to AI-native systems that shape themselves to each user’s needs in real time. Tomorrow’s business applications will feel less like software products and more like collaborative team members, anticipating needs, initiating actions, and learning continuously.
The road ahead
For India to lead in AI, we must:
Invest in talent
From primary education in computational thinking to advanced AI research.
Build open ecosystems
Avoiding lock-in, encouraging competition, and enabling interoperability.
Focus on ethics
Ensuring fairness, transparency, and accountability from design phase onwards.
Align hardware and workloads
Co-designing chips and software for optimal AI performance.
AI is the most powerful tool humanity has ever built. In the right hands, it can solve problems once thought intractable, such as climate modelling, drug discovery, and literacy at scale. For India, this is the time to pilot aggressively, scale what works, and build not just for our market, but for the world. History does not remember those who were merely ready. It remembers those who acted.
The article is based on the AI DevCon seminar ‘AI Inside For A New Era’, featuring insights from Anand Kulkarni, customer engineering lead at Intel. It has been transcribed and curated by Vidushi Saxena, journalist at OSFY.