AI feels powerful, yet most teams struggle because they cannot define what intelligence they really need. But there are ways to address this challenge.
For anyone working with AI, keeping pace with the field’s blinding speed is a constant struggle. The field never slows down, and staying up to date becomes a daily task.
A personal experience may help you relate to the effect this kind of speed has on all of us. Back in 2003, say someone bought a bike, registered it, and fitted a number plate with a white background and yellow lettering. A week later, the government changed the rule — the number plate must now be white with black lettering. Two weeks later, the rule changed again. Faced with this constant churn, some people eventually put every possible format on one number plate so that, no matter what rule came next, they were technically covered. This was a common problem for anyone who got a driving licence between 2003 and 2005.
I, too, went through this confusion when I got my licence in 2005. And this is precisely what developing AI applications feels like today.
If you are an AI developer, you already know the biggest pain point: the field is changing so fast that even a short break can leave you far behind. You return to your team after a brief holiday and, within minutes, someone says, “You are out of date; something has changed.” That is how fast the stacks, algorithms and platforms are evolving.
Transformation itself is not new. The industry has seen the birth of the internet, cloud computing, mobility, and even earlier phases of AI. But what makes this phase different is that AI’s scope is undefined. Leaders across industries use the phrase: “everything everywhere.” Unlike cloud or internet, both of which have clear functional boundaries, AI has no obvious box. What exactly should AI do for a business? Where does it stop? How do we define the scope?
This lack of definition is the root problem. AI is everywhere, but organisations do not know what “everywhere” should mean for them.
To solve this, I work with four principles that help create clarity: Define, Target, Scale, and Grow.
Define
The first principle is to define the level of intelligence required in a solution. We must be deliberate here because not every problem needs an LLM or a deep learning model. The principle can be handled on three levels.
Good old-fashioned AI (GOFAI)
At its core, GOFAI is a rule-based system. The logic sits in the code. If the business rule changes, we simply edit the rule. GOFAI remains extremely useful, practical, and the right answer for many use cases. There is no need to complicate everything.
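As a minimal sketch of what "the logic sits in the code" means in practice (the function and rules below are hypothetical, purely for illustration), a GOFAI-style component keeps its business rules as explicit, editable data or branches — changing a rule means editing one line, not retraining anything:

```python
# Hypothetical rule-based (GOFAI) sketch: business logic lives in the code
# as explicit, editable rules. When policy changes, edit the rules list.

def classify_ticket(subject: str) -> str:
    """Route a support ticket to a queue using keyword rules."""
    rules = [
        ("refund", "billing"),
        ("password", "account-security"),
        ("crash", "engineering"),
    ]
    subject_lower = subject.lower()
    for keyword, queue in rules:
        if keyword in subject_lower:
            return queue
    return "general"  # default queue when no rule matches

print(classify_ticket("App crash on startup"))  # engineering
```

For many routing, validation, and eligibility problems, a transparent rule table like this is easier to audit and maintain than any model.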
Machine learning
We use ML when the system needs to learn from patterns rather than rules. This is where training data, predictions, supervised and unsupervised learning come in.
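To contrast with the rule-based case, here is a toy sketch of "learning from patterns rather than rules" — a 1-nearest-neighbour classifier in pure Python. The data and labels are invented for illustration; the point is that behaviour comes from labelled examples, not hand-written branches:

```python
# Toy supervised-learning sketch: a 1-nearest-neighbour classifier.
# The training data below is illustrative, not from any real dataset.

def nearest_neighbour(train, query):
    """train: list of (feature_vector, label); returns the label of the
    training point closest to the query (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(train, key=lambda pair: dist(pair[0], query))
    return best[1]

# Labelled examples stand in for training data
train = [((1.0, 1.0), "small"), ((8.0, 9.0), "large"), ((1.5, 0.5), "small")]
print(nearest_neighbour(train, (7.0, 8.0)))  # large
```

Adding or correcting behaviour here means adding examples, which is exactly the shift from rules to patterns that the ML level implies.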
Complex AI
This includes deep learning, dynamic models, advanced architectures and LLMs. While AI capability needs to evolve, setting limits is just as important. At each stage, keep evaluating the question: is the current level of complexity strictly necessary to resolve the customer’s problem? Often, the honest answer is ‘no’.
Developers must therefore pause and choose the minimum intelligence level that meets the requirement. Nothing more. Nothing less.
Target
The second principle is about targeting the business problem the right way. AI application development looks like the traditional SDLC, but there is one major difference. In software development, we design, code, test, deploy and maintain. But AI adds a permanent, mandatory last step: refinement. This is not the same as iterative development. Refinement is a continuous loop built into the system itself. Feedback from customers and users must be collected, analysed and fed back into the model regularly. Without refinement, an AI product decays quickly.
This means two things:
- A feedback mechanism is a non-functional requirement. It must exist either inside the code or as a direct customer touchpoint.
- Developers need a top-down view—from business case to deployment—because AI is no longer an isolated IT delivery. It affects the business model, not just the software.
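One way to make the feedback mechanism concrete as a non-functional requirement (all names below are hypothetical, a sketch rather than a prescribed design) is to log every prediction with an identifier so that later user feedback can be joined back to it and fed into the refinement loop:

```python
# Hypothetical sketch: feedback capture built into the prediction path.
# Each prediction is logged with an ID so later user feedback can be
# attached to it and fed into the refinement cycle.

import uuid

FEEDBACK_LOG = []  # in production this would be a database or event stream

def predict_with_feedback_id(model, features):
    """Run the model and return (prediction_id, prediction)."""
    prediction = model(features)
    prediction_id = str(uuid.uuid4())
    FEEDBACK_LOG.append({"id": prediction_id, "features": features,
                         "prediction": prediction, "feedback": None})
    return prediction_id, prediction

def record_feedback(prediction_id, rating):
    """Attach user feedback (e.g. thumbs up/down) to a past prediction."""
    for entry in FEEDBACK_LOG:
        if entry["id"] == prediction_id:
            entry["feedback"] = rating
            return True
    return False

# Usage with a stand-in model
pid, pred = predict_with_feedback_id(lambda f: sum(f) > 1, [0.4, 0.9])
record_feedback(pid, "thumbs_up")
```

Entries that accumulate feedback become the raw material for the regular analyse-and-retrain loop described above.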
In earlier years, we did not revisit business cases frequently. In AI, we must. Needs evolve. Models drift. Use cases change direction. Developers can no longer focus solely on implementation; we need to understand the charter, the model, and the end-to-end refinement cycle.
Scale
The third principle is about scaling. Traditionally, we designed modules based on functions: Module 1 performs function A; module 2 performs function B. Testing followed the same pattern.
But AI development today requires a shift from functional thinking to service-level development. Why? Because every module now calls external services—LLMs, APIs, cloud platforms—not once, but many times in a single workflow. We no longer live in a world where only one or two modules connect externally. Now, every module does.
This is where microservices become essential. If your organisation uses GPT today and switches to another provider next year, how will your system adapt? If a model upgrades, will you redo your entire codebase?
With microservices, a change in one service can flow across the organisation without rewriting everything. This is the power of thinking in terms of services rather than functions.
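The service-level thinking above can be sketched in a few lines (interface and class names are hypothetical; real adapters would call vendor SDKs): put the LLM provider behind a small interface so that switching providers changes one adapter, not every module:

```python
# Hypothetical sketch: isolating the LLM provider behind an interface so
# that swapping vendors touches one adapter, not the whole codebase.

from abc import ABC, abstractmethod

class CompletionService(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderA(CompletionService):
    def complete(self, prompt: str) -> str:
        # A real implementation would call provider A's API here.
        return f"[provider-a] {prompt}"

class ProviderB(CompletionService):
    def complete(self, prompt: str) -> str:
        # A real implementation would call provider B's API here.
        return f"[provider-b] {prompt}"

def summarise(text: str, llm: CompletionService) -> str:
    """A business module depends only on the interface, not a vendor SDK."""
    return llm.complete(f"Summarise: {text}")

print(summarise("quarterly report", ProviderA()))
# Switching vendors is a one-line change at the composition root:
print(summarise("quarterly report", ProviderB()))
```

Whether this boundary lives inside one process or behind a network call is a deployment decision; the design benefit comes from the service abstraction itself.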
Cloud providers have also evolved. Earlier, we worked with infrastructure-as-a-service. Today, we rely heavily on the platform-as-a-service model — APIs, ML services, and LLM endpoints. So our architecture must match the service mindset.
Grow
The fourth principle is growth. To move forward responsibly, we must understand where we stand. Gartner’s AI maturity model describes five levels:
1. Awareness: Employees understand AI basics.
2. Active: Teams experiment with POCs, hackathons and low-hanging use cases.
3. Operational: AI improves efficiency through internal operations. This is true for most organisations today.
4. Product: AI powers products delivered to customers with accuracy and reliability.
5. Sentinel: Fully autonomous, zero-human-touch decision-making systems. Autonomous driving is an example.
Importantly, organisations do not move through these levels sequentially. You cannot finish Level 1 and then begin Level 2. Instead, activities across all levels must run in parallel. For instance, a company can run an operational AI system while, at the same time, creating awareness programmes for responsible AI. Growth is not linear; it is layered.
- Awareness increases with company-wide training programmes, expert lectures, and AI meets.
- Active exploration comes from hackathons and quick POCs.
- Operational impact usually comes through central teams or centres of excellence that identify use cases and implement them.
- Product-level confidence comes with stronger computing, streamlined pipelines and model governance.
- Sentinel level is the long-term vision and represents the highest form of autonomous intelligence.
So why is AI suddenly at an inflection point? Ideas about machine intelligence go back well over a century, but the term ‘artificial intelligence’ was formally coined in 1956. We have seen expert systems, rule-based systems, game-playing programs for the likes of Pac-Man and chess, and early healthcare applications. But the real acceleration began around 2011 with recommendation systems and the rise of machine learning and deep learning.
Today’s inflection point is driven by four clear trends.
Data explosion
We collect data, knowingly and unknowingly, from mobile phones, vehicles, applications, and sensors. We have sufficient data to experiment, and algorithms can now generate synthetic data as well.
Cloud computing
Earlier, computing meant connecting to a mainframe somewhere else, often with delays and limitations. Today, a few clicks instantly provide a GPU with 4GB, 8GB, or 32GB of memory. Cloud platforms have made high-end computing accessible to every developer.
Improved algorithms
New AI models appear constantly. But the real driver of this speed is that so much of the work is open source; progress is no longer enterprise-controlled. Developers everywhere contribute to global progress.
Open contribution
The open source ecosystem is what accelerates this industry. Every improvement becomes available to everyone. This is why the field feels like it is moving at lightning speed.
AI is everywhere. AI is everything. But unless we define our own box, we will be overwhelmed.
No organisation progresses linearly, and no developer can afford to work without understanding the business and refinement cycle. AI demands a different way of thinking — structured, strategic and continuously evolving.
This article is based on the session titled ‘Beyond Automation – Strategic AI Integration with GenAI and LLM’ by Pradeeba P., delivery manager, Thoughtworks, at AI DevCon in Bengaluru. It has been transcribed and curated by Apurba Sen, senior journalist at the EFY Group.