From Assisting to Acting: The Evolution of AI
We often think of AI as a recent breakthrough. That is not accurate. What we are witnessing today is the outcome of a long, uneven trajectory: what I describe as a 50-year crawl followed by a 5-year sprint.
From the 1950s to around 2020, progress was slow, constrained by rigid programming and human-defined rules. AI systems could only operate within boundaries we explicitly set. The Turing Test, introduced in 1950, framed the central question, “can machines exhibit intelligence indistinguishable from humans?” That question shaped decades of research, but progress remained incremental. We saw cycles of optimism and disappointment, what we now call AI winters, where expectations exceeded capability.

Thillai Raj Ramanathan
Professor of Practice, UNIMY &
CTO BAC Education
However, this slow crawl laid the foundation. Each milestone mattered: early neural ideas, expert systems, symbolic reasoning, and eventually machine learning. The turning point came when we moved away from telling machines what to do and towards allowing them to learn patterns from data.
2012 was a critical inflection point. With ImageNet and advances in deep learning, machines began to “see.” This was not just an improvement; it was a structural shift. Instead of rule-based systems, we had models learning from vast datasets, supported by GPU computing power and improved algorithms. Error rates in image recognition dropped dramatically in a very short time. This is what I refer to as the “big bang” of modern AI.
Then came 2017 and the transformer architecture. This fundamentally changed how machines process information. Instead of reading sequentially, AI could process entire contexts simultaneously. It could understand relationships across words, sentences, and documents at once. This is the architecture behind all modern large language models. Without it, tools like ChatGPT would not exist.
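The core of that shift is attention: every token is compared against every other token in a single matrix operation, rather than one step at a time. A minimal sketch of scaled dot-product attention, using NumPy with toy random embeddings (the shapes and data are illustrative, not from any real model):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each token attends to the whole
    context at once, instead of reading it sequentially."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the full context
    return weights @ V  # each output is a context-aware mixture of all values

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = attention(x, x, x)  # self-attention: queries, keys, values from the same input
print(out.shape)  # (4, 8): every token updated using the entire context
```

Real transformers stack many such attention heads with learned projection matrices, but the parallelism shown here is the key departure from sequential models.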
From there, the acceleration is clear. In 2022, generative AI entered mainstream adoption. What took decades before now happens in months. Reaching 100 million users took years for most platforms; AI achieved this in a fraction of that time. This is not just adoption; it is a signal of how deeply integrated AI is becoming in everyday workflows.
At the same time, AI capability has expanded beyond narrow tasks. We moved from simple games like tic-tac-toe to systems that defeat world champions in chess and Go. These are not just symbolic victories. They demonstrate the ability of machines to operate in highly complex, uncertain environments.
More importantly, AI is transitioning from assisting to acting. Earlier systems supported human decision-making. Today, we are seeing the rise of agentic AI: systems that can plan, execute, and adapt. These agents can schedule meetings, draft communications, analyse data, and make decisions with minimal human intervention. The shift is subtle but significant: AI is no longer just a tool you use; it is becoming a system you manage.
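The plan-execute-adapt pattern can be sketched in a few lines. Everything here is a stand-in: a real agent would call a language model to produce the plan and real APIs (calendar, email, analytics) as tools, while this version hard-codes both.

```python
# Minimal sketch of an agentic loop. The planner and tools are
# hypothetical stand-ins for LLM calls and external APIs.

def plan(goal):
    # Stand-in planner: decompose the goal into ordered steps.
    return ["gather_data", "analyse", "draft_summary"]

TOOLS = {
    "gather_data":   lambda state: state | {"data": [3, 1, 2]},
    "analyse":       lambda state: state | {"insight": max(state["data"])},
    "draft_summary": lambda state: state | {"summary": f"Peak value: {state['insight']}"},
}

def run_agent(goal, max_steps=10):
    state = {}
    for step in plan(goal)[:max_steps]:
        state = TOOLS[step](state)   # execute one step
        if "summary" in state:       # adapt: stop once the goal is met
            break
    return state

result = run_agent("summarise this week's metrics")
print(result["summary"])  # Peak value: 3
```

The point of the sketch is the structure, not the tools: the system decides the sequence of actions and when to stop, which is exactly what turns a tool you use into a system you manage.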
We are also entering the phase of embodied intelligence. AI is moving beyond the screen into the physical world: factories, hospitals, and homes. When machines can interpret both language and physical environments, automation becomes far more powerful and pervasive.
At the frontier, AI is contributing to scientific discovery. Protein structure prediction, for example, has advanced at a pace previously considered impossible, leading to breakthroughs recognised at the highest levels. This signals a shift from AI as a productivity tool to AI as a knowledge generator.
However, alongside this progress, there are important considerations. The speed of advancement raises questions around governance, ethics, and control. If AI systems can act autonomously, where does accountability sit? If models are trained on vast datasets, how do we ensure reliability and bias mitigation? These are not peripheral issues; they are central to how AI will be deployed, particularly in domains like law.
This brings us to application. One practical direction is building domain-specific AI systems, for example a privacy-first, locally hosted legal assistant. Such a system integrates a local language model, a controlled knowledge base, and secure interfaces like WhatsApp. It processes inputs, transcribes voice, retrieves relevant legal context, and generates responses, all within a governed environment. This is not theoretical. The components already exist.
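The retrieve-then-generate loop at the heart of such a system can be outlined briefly. This is a deliberately simplified sketch: the knowledge base, keyword retrieval, and templated reply below stand in for a real vector store, embedding search, and a locally hosted language model, and no actual messaging interface is wired up.

```python
# Sketch of a retrieval-augmented pipeline for a domain-specific
# assistant. All data and function bodies are illustrative stand-ins.

KNOWLEDGE_BASE = {
    "tenancy":  "A tenancy agreement sets out the rights of landlord and tenant.",
    "contract": "A contract requires offer, acceptance, and consideration.",
}

def retrieve(query):
    # Naive keyword match standing in for vector similarity search.
    return [text for topic, text in KNOWLEDGE_BASE.items() if topic in query.lower()]

def generate(context):
    # Stand-in for a local LLM call; it only templates the retrieved text.
    if not context:
        return "No relevant context found in the local knowledge base."
    return "Based on local sources: " + " ".join(context)

def handle_message(query):
    # Full loop: receive a message, retrieve grounded context, generate a reply.
    return generate(retrieve(query))

print(handle_message("What makes a contract valid?"))
```

Because retrieval and generation both run against local resources, every answer is grounded in a controlled knowledge base and no client data has to leave the governed environment, which is the property that matters for legal use.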
The implication is clear. The question is no longer whether AI will transform industries. It already is. The real question is how we design, govern, and integrate these systems responsibly. AI is not replacing human expertise. It is reshaping how expertise is applied, scaled, and managed.
