Singapore has launched a new artificial intelligence (AI) governance framework – a world-first guide for enterprises to deploy agentic AI responsibly.
Unveiled at the World Economic Forum in Davos, Switzerland, on January 22 by Minister for Digital Development and Information Josephine Teo, the new Model AI Governance Framework for Agentic AI (MGF for Agentic AI) was developed by Singapore’s Infocomm Media Development Authority (IMDA). This first-of-its-kind framework for reliable and safe agentic AI deployment builds upon the governance foundations of the Model AI Governance Framework introduced in 2020. The new framework provides guidance to organisations on how to deploy agents responsibly, recommending technical and non-technical measures to mitigate risks, while emphasising that humans are ultimately accountable. Initiatives such as the MGF for Agentic AI support the responsible development, deployment and use of AI, so that its benefits can be enjoyed by all in a trusted and safe manner. This is in line with Singapore’s practical and balanced approach to AI governance, where guardrails are put in place while providing space for innovation.
Unlike traditional and generative AI, AI agents can reason and take actions to complete tasks on behalf of users. This allows organisations to automate repetitive tasks, such as those related to customer service and enterprise productivity, and drive sectoral transformation by freeing up employees’ time to undertake higher-value activities.
However, as AI agents may have access to sensitive data as well as the ability to make changes to their environment, such as updating a customer database or making a payment, their use introduces potential new risks, for example, unauthorised or erroneous actions. The increased capability and autonomy of agents also create challenges for effective human accountability, such as greater automation bias, or the tendency to overly trust an automated system that has performed reliably in the past. It is therefore crucial to understand the risks agentic AI could pose and ensure that organisations implement the necessary governance measures to harness agentic AI responsibly, including maintaining meaningful human control and oversight over agentic AI.
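To make the idea of meaningful human control concrete, the sketch below shows one way an organisation might gate an agent's proposed actions: routine actions run autonomously, while sensitive ones (such as updating a customer record or making a payment, the examples above) are routed to a human for approval. This is an illustrative assumption of how such a guardrail could look, not a mechanism defined by the MGF for Agentic AI; all names here are hypothetical.

```python
# Illustrative agentic guardrail: a proposed tool call is checked against a
# policy before execution, and sensitive actions require human sign-off.
from dataclasses import dataclass


@dataclass
class ToolCall:
    name: str   # e.g. "update_customer_record", "make_payment"
    args: dict


# Actions the agent may take autonomously vs. those needing human approval.
# Anything outside both sets is treated as outside the agent's boundary.
AUTO_ALLOWED = {"search_knowledge_base", "draft_reply"}
NEEDS_APPROVAL = {"update_customer_record", "make_payment"}


def guard(call: ToolCall, human_approves) -> str:
    """Return 'execute' or 'block' for a proposed tool call.

    human_approves is a callback representing a human reviewer's decision.
    """
    if call.name in AUTO_ALLOWED:
        return "execute"
    if call.name in NEEDS_APPROVAL:
        # Meaningful human oversight: a person reviews before the action runs,
        # countering the automation bias described above.
        return "execute" if human_approves(call) else "block"
    # Actions outside the defined boundary are refused outright.
    return "block"
```

In this sketch, defining explicit action boundaries and an approval path addresses both risks named above: unauthorised actions are blocked by default, and high-impact actions cannot proceed on the agent's judgement alone.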
The MGF for Agentic AI offers a structured overview of the risks of agentic AI and emerging best practices in managing these risks. It is targeted at organisations looking to deploy agentic AI, whether by developing AI agents in-house or by using third-party agentic solutions.
The Framework provides organisations with guidance on technical and non-technical measures they need to put in place to deploy agents responsibly, across four dimensions:
In developing the Framework, IMDA incorporated feedback from both government agencies and private-sector organisations. “As the first authoritative resource addressing the specific risks of agentic AI, the MGF fills a critical gap in policy guidance for agentic AI,” said April Chin, Co-Chief Executive Officer, Resaro. “The Framework establishes critical foundations for AI agent assurance. For example, it helps organisations define agent boundaries, identify risks, and implement mitigations such as agentic guardrails.”
As the Framework is a living document, IMDA welcomes all feedback from interested parties to refine it, as well as submission of case studies that demonstrate how agentic AI can be responsibly deployed. More information and case studies are available at Model AI Governance Framework for Agentic AI.
The MGF for Agentic AI is the latest initiative introduced by Singapore to build a global ecosystem where AI is trusted and reliable. Singapore is working with other countries through its AI Safety Institute (AISI), and leading the ASEAN Working Group on AI Governance (WG-AI) to develop a trusted AI ecosystem within ASEAN, while fostering collaboration among Southeast Asian nations. Closer to home, initiatives such as the MGFs, the AI Verify toolkit and the Starter Kit for Testing of LLM-Based Applications for Safety and Reliability have formed important stepping stones towards the goal of building a trustworthy AI ecosystem internationally.