
Journey To Trustworthy And Secure AI

Building An AI-Resilient Singapore
BY CYBER SECURITY AGENCY OF SINGAPORE


Artificial intelligence (AI) promises significant benefits for productivity and decision-making. The benefits are not only economic and social; AI also adds value to security. Even as malicious actors may abuse AI to empower their attacks, we are working to ensure that our cyber defenders can harness AI to fend them off.

AI opens up new possibilities for cybersecurity, with solutions that bring greater agility, speed, and accuracy, helping cyber defenders level the playing field. By handling tedious routine cybersecurity tasks, analysing large volumes of system logs, and automatically patching vulnerable systems, AI can relieve operators' workload and allow them to focus on higher-value work. The Cyber Security Agency of Singapore (CSA) and Government Technology Agency (GovTech) have taken early steps to review how AI can accelerate our cybersecurity operations at the national level.
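To make the log-analysis use case concrete, here is a minimal, hypothetical sketch in Python of how unsupervised anomaly detection could triage log events for human review. The features and data are illustrative assumptions only; it does not represent CSA's or GovTech's actual tooling.

    # Hypothetical sketch, not CSA's or GovTech's actual tooling: using
    # unsupervised anomaly detection to triage log events so that analysts
    # review the most suspicious ones first. Feature names are assumptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Illustrative per-event features extracted from system logs:
    # [requests_per_minute, failed_logins, bytes_out_kb]
    normal = rng.normal(loc=[60, 1, 200], scale=[10, 1, 50], size=(500, 3))
    suspicious = rng.normal(loc=[600, 25, 5000], scale=[50, 5, 500], size=(5, 3))
    events = np.vstack([normal, suspicious])

    model = IsolationForest(contamination=0.01, random_state=0).fit(events)
    scores = model.score_samples(events)  # lower score = more anomalous

    # Surface the five most anomalous events for human review
    for idx in np.argsort(scores)[:5]:
        print(f"event {idx}: features={np.round(events[idx], 1)}, score={scores[idx]:.3f}")

The point of such a pipeline is not to replace analysts, but to rank events so that limited human attention goes to the most suspicious activity first.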

We have also used AI to accelerate anti-scam operations. The Singapore Police Force (SPF) and GovTech are using AI to speed up and expand SPF's operations to detect and block scam websites. AI can perform a preliminary assessment of the potential threat posed by a given website, reducing the load on each police officer.
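As a minimal, hypothetical sketch of what such a preliminary assessment could look like, the example below scores a URL with a simple lexical classifier. The features, training data and model choice are illustrative assumptions, not SPF's or GovTech's actual system.

    # Hypothetical sketch, not SPF's or GovTech's actual system: a crude
    # lexical classifier that gives a preliminary risk score for a URL.
    # Features, training data and model choice are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def url_features(url):
        """Simple lexical features; a real system would use far richer signals."""
        return [
            len(url),                                   # very long URLs
            url.count("-"),                             # hyphen-heavy hostnames
            url.count("."),                             # many subdomains
            float(any(ch.isdigit() for ch in url)),     # digits in the URL
            float("login" in url or "verify" in url),   # credential-bait keywords
        ]

    # Tiny illustrative training set (label 1 = scam, 0 = legitimate)
    urls = [
        "https://example.com",
        "https://www.gov.sg/services",
        "http://secure-login-verify.example-pay123.xyz/login",
        "http://free-gift.win-now99.top/verify-account",
    ]
    labels = [0, 0, 1, 1]

    clf = LogisticRegression().fit(np.array([url_features(u) for u in urls]), labels)

    candidate = "http://bank-verify-login.account-update77.top"
    risk = clf.predict_proba(np.array([url_features(candidate)]))[0, 1]
    print(f"preliminary risk score for {candidate}: {risk:.2f}")

In practice, such a score would only prioritise cases for human review; the final decision to block a website remains with the officer.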

SINGAPORE’S APPROACH TO ADDRESSING AI RISKS

However, as with any other software, adopting new AI systems can introduce new risks or exacerbate existing ones for organisations. As Singapore journeys deeper into a future powered by AI, we have to ensure that the output of AI models is accurate and reliable, and will not harm users or systems. This allows us to maximise the benefits of AI, ensuring that it serves the public good and contributes to our nation's economic and security interests.

To support this, we are investing in AI safety and security efforts to ensure that we can address the emerging risks of AI and foster a trusted AI environment that protects users and facilitates innovation.

Nationally, we have launched the AI Verify Foundation, which harnesses the expertise of the global open-source community to promote the development of responsible AI testing tools and capabilities. This gives users and enterprises greater assurance that AI systems can meet the needs of companies and regulators, regardless of jurisdiction. The effort is supported by more than 60 members of the Foundation, including organisations such as IBM, Google, Sony, Deloitte, DBS and SIA, which share our interest in protecting AI.

Another initiative is the Model AI Governance Framework for Generative AI (MGF-GenAI), which sets out best practices for stakeholders to manage the risks posed to users. Launched in January 2024, it articulates emerging principles, concerns, and technological developments relevant to the governance of GenAI, and provides a starting point for managing emerging GenAI risks. Security is a core element of MGF-GenAI, and the framework provides guidance on addressing the new security threat vectors that may arise through GenAI models.

CSA will continue to drive efforts to uplift the security baseline for AI. We are working with industry and international partners to develop guidelines, standards and tools that help system owners and adopters make informed decisions about their adoption and deployment of AI. In November 2023 and January 2024, CSA contributed to and co-sealed international guidelines with the UK, US, Australia and other key partners. For scams involving the use of deepfakes, SPF and CSA have also issued advisories to raise public awareness of the risks of deepfake scams, how to identify them, and what to do next. CSA is also actively engaging industry to co-create solutions for AI security, develop and refine our AI security standards, and support national capability development in AI security.

GUIDELINES ON SECURING AI SYSTEMS

To raise awareness of the risks and support system owners in adopting AI, CSA is working with industry and international partners on the Guidelines on Securing AI Systems and a companion guide, the Companion Guide on Securing AI Systems, which provide practical advice and recommendations on securing AI systems throughout their life cycle. Drafts of the two documents were released for public consultation, which ended on 15 September 2024.

The Guidelines on Securing AI Systems offer guidance to system owners on securing AI throughout its life cycle. They are meant to provide evergreen principles that raise awareness of adversarial attacks and other threats that could compromise AI behaviour and system security, and to guide system owners in implementing security controls and best practices to protect AI systems against potential risks, including existing cybersecurity risks such as supply chain attacks and novel risks such as adversarial machine learning.
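To illustrate what adversarial machine learning involves, the following minimal Python sketch mounts a simple evasion attack on a toy linear classifier, in the spirit of the fast gradient sign method. Everything here, from the data to the perturbation budget, is an assumption for illustration; the Guidelines do not prescribe this example.

    # Hypothetical sketch of adversarial machine learning: a small evasion
    # attack on a toy linear classifier, in the spirit of the fast gradient
    # sign method. Data, model and the perturbation budget are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy two-class data, e.g. "benign" (label 0) vs "malicious" (label 1)
    X = np.vstack([rng.normal(-1.0, 1.0, (200, 10)),
                   rng.normal(+1.0, 1.0, (200, 10))])
    y = np.array([0] * 200 + [1] * 200)

    clf = LogisticRegression().fit(X, y)
    w = clf.coef_[0]

    x = X[200]                      # a malicious sample the model flags
    eps = 0.6                       # attacker's per-feature perturbation budget
    x_adv = x - eps * np.sign(w)    # nudge every feature against the weights

    print("score before attack:", clf.predict_proba([x])[0, 1])      # typically near 1
    print("score after attack :", clf.predict_proba([x_adv])[0, 1])  # typically much lower

Even this toy attack shows why model robustness has to be treated as a security property, not merely a performance metric: small, targeted input changes can flip a model's decision.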

To support system owners, CSA is working with AI and cybersecurity practitioners to develop a Companion Guide on Securing AI Systems. Designed as a community-driven resource to complement the Guidelines on Securing AI Systems, it can be updated more frequently as the technology evolves. It is neither mandatory nor prescriptive. It curates practical measures and controls, drawing on industry and academia as well as resources such as the MITRE ATLAS database and the OWASP Top 10 for Machine Learning and GenAI. We hope this will be a useful reference for system owners navigating this developing space.

CONCLUSION

As AI technology continues to advance and become more widely adopted across all industries in Singapore, we need to be clear-eyed about the risks and opportunities that AI can bring.

Together, the government, industry, academia, and the public must chart the course of our journey towards becoming AI-resilient. We must collaborate closely to fine-tune our approaches to securing the adoption of AI, ensuring that AI remains safe, secure and trustworthy.


This article was first published on the Singapore Computer Society (SCS) website on 30 August 2024. Reproduced with permission.

All views expressed by contributors are their own and do not necessarily reflect the views of SCS.
