MCI - Ministry of Communication and Information of the Republic of Singapore

07/03/2024 | Press release | Distributed by Public on 07/03/2024 20:17

Opening Address by SMS Janil Puthucheary at the AiSP AI Security Summit

Assistant Secretary Mr. Tam Huynh,

AiSP Members,

Ladies and gentlemen,

  1. Good morning. I am happy to be joining you today for AiSP's first AI Security Summit.

  2. Over the past couple of years, AI has proliferated rapidly and been deployed in a wide variety of spaces. This has significantly impacted the threat landscape. We know that this rapid development and adoption of AI has exposed us to many new risks.

  3. This includes adversarial machine learning, which allows attackers to compromise a model's function.

    a. A well-known example is how researchers at MIT tricked an image-recognition AI into classifying a 3D-printed turtle as a rifle, even when the turtle was viewed from different angles.

    b. Researchers at McAfee were also able to compromise Mobileye, the camera-based driver-assistance system used in older Tesla models, by making small alterations to speed limit signs that the system had been trained to recognise.

    c. This class of risks is relatively new, and we need to do more to understand it. Both public and private entities, including the Government Technology Agency of Singapore, have been developing capabilities to simulate such attacks on AI systems, to better understand how they can affect the security of AI. Doing so will help us put the right safeguards in place.

  4. AI is also vulnerable to classic cyber threats, including those to data privacy. In particular, the widespread adoption of AI has led to a growth in the threat surface for data to be exposed, exfiltrated, or damaged.

    a. Some of you may have heard how 38 terabytes of data were accidentally exposed by Microsoft AI researchers who were trying to share an AI dataset. The exposed files included private keys, passwords, and more than 30,000 internal Microsoft Teams messages.

    b. In other types of attacks - training data extraction attacks - ChatGPT can also be manipulated into reproducing parts of its training data, which may contain sensitive information like names, addresses, and phone numbers.

  5. Incidents like these undermine public trust and confidence that AI models are safe, secure, and reliable.

    a. Without trust, individuals and organisations might fear that tools will produce incorrect, inconsistent, or harmful output.

    b. This, in turn, affects whether the industry can maximise the benefits of AI, and whether we can leverage the use of artificial intelligence to drive further growth in the digital economy and society in Singapore.

  6. I am heartened to see that industry players, including AiSP and its partners, are leading discussions on how we can make AI more secure. We can all play a part in fostering a trusted AI environment that protects users and systems, while facilitating growth and innovation.

  7. On the government front, the Cyber Security Agency of Singapore (CSA) has been working with industry partners and foreign counterparts to develop clear guidelines that system owners can use when deciding how to adopt AI.

    a. For example, in Nov 2023, CSA co-sealed the Guidelines for Secure AI System Development, developed with the UK's National Cyber Security Centre (NCSC) and the US's Cybersecurity and Infrastructure Security Agency (CISA).

  8. I am pleased to announce that CSA will also be releasing its "Technical Guidelines for Securing AI Systems" for public consultation this month.

    a. This set of voluntary guidelines is intended to complement existing resources on the security of AI, and to provide practical measures that system owners in Singapore can use to address potential risks to their systems and users.

    b. We invite all members of the ecosystem, including members of AiSP and international partners, to provide feedback on how we can improve the guidelines. We understand that AI is used in a wide range of contexts and across multiple use cases, and we want to ensure that these guidelines are practical and useful.

    c. CSA will release more details on the public consultation in the coming weeks. Together, we can provide a useful reference for security professionals looking to enhance the security of their AI tools.

  9. I hope that industry partners and professionals will continue to do their part to ensure that AI tools and systems are kept safe and secure against malicious threats, even as techniques evolve.

    AI for Cybersecurity

  10. In parallel to these efforts, many organisations are also thinking about how to secure themselves against attacks driven by the misuse of AI.

    a. Many of us are concerned about how generative AI can be misused to generate convincing, personalised emails that trick users into clicking on phishing links and opening malicious attachments. Threat actors can also create convincing deepfakes to spread misinformation and disinformation.

    b. Specific to cybersecurity, we have seen the rise of dark AI tools like WormGPT, which show that AI can be used to create sophisticated malware. These threats may be difficult for existing systems to detect.

  11. The concern is international - for example, in Jun 2024, Deep Instinct reported that 97% of cybersecurity experts surveyed in the US were concerned that their organisations would suffer a security incident caused by the malicious use of AI.

  12. It is natural that we are concerned about how AI can be misused. However, it is just as important for us to consider how AI can be a force for the good of the cybersecurity sector. Just as threat actors integrate this technology into their operations, defenders need to learn to master the benefits that AI can bring to their work.

  13. Many of us have seen how AI can be a valuable force multiplier for security operations. Used properly, AI can help defenders identify risks with greater speed, scale, and precision, and so address them more quickly. This makes our teams more efficient and effective as we defend against cyber threats.

  14. Even for more sophisticated threats, AI can help to level the playing field. We have already seen an increase in the use of machine-learning algorithms in solutions to detect anomalies, or to mount an autonomous response to potential threats. I look forward to hearing how the industry can use AI to improve the range of cybersecurity tools we have today, and how this can help us to gain a decisive advantage.

    Launch of AI Special Interest Group

  15. AI is still an emerging, evolving technology, and we will continue to see the number of use cases grow over the next few years. At the same time, we will discover new risks that will need to be managed. We will need to strike a careful balance between these two priorities, to ensure that we can innovate safely.

  16. And in doing so, our tech professionals need to stay up to date on how this technology develops and evolves - especially for those of us who work in trust, safety and cybersecurity.

    a. This will help us to make good recommendations on how AI is adopted, and how we can manage the known risks.

    b. It will also help us shape the wider conversation on how AI should be developed, in line with the principles of safety and security.

  17. Today, AiSP will be launching its AI Special Interest Group.

    a. This group will provide a platform for members to discuss AI developments, exchange key insights and experiences, and share their knowledge with the community.

    b. Members can use this platform to discuss how the cybersecurity sector can continue to ensure the digital domain is trusted, and secure, while co-existing and co-developing with AI.

    c. These will be critical topics as AI becomes an integral part of digital infrastructure.

  18. Interested members can reach out to the AiSP Secretariat for details on how to join the SIG. I wish them the best in their future conversations.

    Conclusion

  19. We should all play our part in ensuring that AI can continue to be safe, and secure. This will affect the confidence of our organisations and users as they try to make full use of what AI can offer.

    a. For those who need more guidance, CSA's guidelines will be a useful place to start. Please keep an eye out for details on how the public can access the guidelines in the coming weeks, and how to provide your feedback to CSA.

  20. At the same time, we can be aware of the opportunities that AI can bring for cybersecurity. We can keep ourselves abreast of developments in this space, and advocate for the adoption of AI tools that prove to be effective against our adversaries.

  21. We can also maintain a network of experts and peers that we can consult, especially as the threat landscape develops. AiSP members can start with the AI SIG, which will be chaired by Mr. Tam Huynh.

  22. I wish you a series of fruitful discussions at the conference today. Thank you very much.