Artificial intelligence is often discussed as a future problem. Despite the recent rise in debates about the safety of AI models, there is a complacency, a feeling that AI governance belongs to the future, something we will need to regulate once the technology becomes sufficiently advanced. This idea is misleading. The systems already in use today reveal not only how capable AI has become, but also where its limitations, risks, and governance challenges are already visible. Looking closely at today's AI systems, from language models, image generators, and audio tools to robotics and defence platforms, offers a more grounded way to think about AI governance: we can examine what these systems actually do and where they fail.
Language Models: When Fluency Substitutes for Understanding
Large language models, such as ChatGPT, excel at producing fluent, coherent text. They can write poems, summarise technical concepts, and explain complex ideas. However, fluency does not guarantee correctness, even when the output appears logical and well structured. How so?
When asked to perform a precise task, such as calculating “13943642 × 1343”, ChatGPT fails to produce a correct answer. It sets out the correct steps and yet produces an error in the result. This is not carelessness; it is a structural limitation. You see, LLMs like ChatGPT are essentially trained to predict the next word, which is what lets them generate beautifully coherent responses for us. Mathematical operations, however, are not just a prediction of the next numeral, and so exact arithmetic is precisely where LLMs fall down.
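The remedy, at least for arithmetic, is mundane: hand the calculation to a deterministic tool and compare. Here is a minimal sketch in Python, using the numbers from the example above; `model_answer` is a hypothetical placeholder for whatever figure a chatbot might report, not output from any real system.

```python
# Independent verification of the arithmetic an LLM was asked to perform.
# Python integers are arbitrary-precision, so this product is computed exactly.

exact = 13943642 * 1343
print(f"13943642 x 1343 = {exact}")  # 18726311206

# Hypothetical placeholder for the figure the chatbot reported;
# checking it against the exact product takes one comparison.
model_answer = 18726311206
print("matches" if model_answer == exact else "does not match")
```

The point is not the snippet itself but the habit it represents: treating fluent output as a claim to be checked rather than a result to be trusted.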
The governance concern here is not simply technical accuracy. It is epistemic. As these systems become embedded in education, writing, research, and decision-making, people increasingly rely on them without independent verification. Over time, this risks shifting human behaviour from critical engagement to passive acceptance—especially in domains where users lack the expertise to judge correctness.
Creative AI and the Erosion of Default Trust
Text-to-image and text-to-audio models demonstrate a different kind of power: the ability to produce artefacts that feel authentic, personal, and intentional. Image generation systems such as ChatGPT, Gemini, and Sora can create highly realistic visuals, even when we as users provide poor or unclear prompts. Audio models such as Udio or NotebookLM can generate music, speech, and conversational formats that closely mimic human expression.
These AI tools have clear accessibility benefits, particularly for users with visual impairments, cognitive differences, or learning preferences that favour audio or video. However, the fact that these models introduce additional framing and content highlights the need for careful auditing in educational contexts. Their capabilities are impressive, but they also challenge long-standing assumptions about trust in media.
If images, voices, and conversations can be convincingly generated or altered, then authenticity can no longer be assumed. Consent, ownership, and identity—concepts that underpin legal and social systems—become harder to define and enforce. The risk is not merely misinformation, but a gradual erosion of confidence in what can be taken at face value.
From a governance perspective, this raises questions about personality rights, attribution, auditing standards, and the obligations of platforms that deploy such tools at scale.
Embodied AI: From Abstraction to Physical Presence
Robotics brings AI out of the abstract and into the physical world. Unlike software systems, robots interact with space, objects, and people.
Current humanoid robots can perform basic tasks: navigating spaces, picking up and placing objects, and responding to verbal commands. Their movements are often slow and constrained, yet they signal a shift toward machines that combine perception, language, and action.
As these systems improve, questions of responsibility become unavoidable. When a robot makes a mistake, who is accountable—the manufacturer, the operator, the designer, or the system itself? Governance frameworks that work for software do not automatically translate to embodied systems, especially when they operate alongside humans.
AI in Defence: Capability Without Consensus
Nowhere are governance gaps more stark than in military applications of AI. Systems that integrate satellite data, real-time surveillance, autonomous navigation, and targeting are real and in operation.
These systems are often framed in terms of deterrence, efficiency, and risk reduction, with the stated objective of serving the greater good. Yet such framing obscures fundamental questions: who authorises their use, how targets are classified, and what safeguards exist against misuse, escalation, or error.
Autonomous or semi-autonomous weapons systems pose challenges that go beyond national regulation. Alliances shift, technologies proliferate, and tools developed for one context can be repurposed in another. Existing international frameworks were not designed for systems that can perceive, decide, and act at machine speed.
The absence of shared norms and enforceable constraints in this domain is not a future problem—it is a present one.
Why Governance Must Start with What Exists
What emerges from examining today’s AI systems is not a single catastrophic risk, but a pattern: capability advancing faster than our social, legal, and institutional responses.
AI governance cannot be postponed until systems are “more advanced,” nor can it be reduced to technical safety alone. It must account for how people use these systems, how much they trust them, and how power is redistributed when decision-making is partially automated.
Effective policy in this space requires careful observation, interdisciplinary thinking, and clear communication. It requires resisting both hype and panic, and instead focusing on the specific ways AI is already reshaping human behaviour, labour, creativity, and conflict. The challenge is not to predict a distant future, but to govern the present responsibly—before today’s defaults become tomorrow’s inevitabilities.
