Dentons US LLP


Guidance on the Ethical Development and Use of AI

September 11, 2024

Hong Kong guidance on the ethical development and use of AI

AI has grown exponentially in recent years, sparking an arms race in which organisations around the world rush to embed it in their processes, lured by promises of increased efficiency, productivity, insight and ideas. As with any cutting-edge technology, those on the bleeding edge must be cognisant of the potential risks of its use and should take steps to mitigate those risks to ensure the technology develops sustainably.

Many will be familiar with the case of US attorneys who cited cases to a court that turned out to be imaginary, the product of hallucinations by the AI software used to generate their legal submissions. More harmful still is the propensity of AI to amplify human biases, including prejudices based on race and gender. At the extreme, pundits have warned of an AI apocalypse: films such as The Terminator, 2001: A Space Odyssey and I, Robot serve as cautionary tales of unregulated and uncontrolled AI development. There is clearly a need to mitigate these risks and to introduce safeguards so that AI is safe, unbiased and grounded in fact.

To raise awareness of the potential risks associated with AI and to suggest how best to address them, the Hong Kong Office of the Privacy Commissioner for Personal Data (PCPD) has recently published the "Guidance on the Ethical Development and Use of Artificial Intelligence"1 to guide organisations in the responsible development and use of AI. The guidance emphasises data stewardship and ethical principles, with particular focus on compliance with the Personal Data (Privacy) Ordinance (PDPO) and its data protection principles.

Key components of the guidance

  1. Data stewardship values
    • Being respectful: Organisations should respect the dignity, autonomy and rights of individuals in data processing.
    • Being beneficial: AI should provide benefits to stakeholders while minimising harm.
    • Being fair: Decisions made by AI systems should be fair, without unjust bias or discrimination.
  2. Ethical principles for AI
    • Accountability: Organisations are responsible for their AI systems and must ensure transparency and ethical use.
    • Human oversight: There should always be a human in control, especially in high-risk scenarios.
    • Transparency and interpretability: Clear communication about AI systems and their decision-making processes is crucial.
    • Data privacy: Compliance with the PDPO and protection of personal data are mandatory.
    • Fairness, beneficial AI, reliability and security: AI systems should be reliable, secure and used to benefit society.
  3. Practical guidelines
    • AI strategy and governance: Organisations should have a clear AI strategy and a governance structure involving senior management and interdisciplinary teams. The strategy should set out the organisation's objectives for AI, its ethical principles and acceptable uses, and should align with the organisation's vision and mission. The governance structure should include a governance committee responsible for overseeing the development, implementation and monitoring of AI systems, and for ensuring compliance with ethical standards and internal policies.
    • Risk assessment and human oversight: Organisations should conduct risk assessments to identify potential risks and establish appropriate levels of human oversight. For example, an AI system that processes large volumes of personal data requires close human oversight to ensure data protection and ethical considerations.
    • Development and management of AI systems: This includes data preparation, model development and continuous monitoring to ensure the systems' effectiveness and reliability. For example, an AI system used in healthcare to analyse patient data and assist in diagnosis must be continuously monitored for accuracy and updated with the latest medical research to maintain its reliability and effectiveness.
    • Communication and engagement: Transparency with stakeholders about the use of AI systems is crucial for building trust.

Other jurisdictions

In the European Union, the Artificial Intelligence Act entered into force on 1 August 2024. It adopts a risk-based approach to regulation, placing the strictest guardrails on practices that present unacceptable risk, including a ban on AI systems used to infer emotions in the workplace or in educational institutions.

In Mainland China, the focus has been on preventing the use of AI to infringe individuals' privacy, and various regulations have been introduced to that end.

Conclusion

Many other major jurisdictions have yet to introduce comprehensive AI regulation, so it is encouraging that Hong Kong is at the forefront in issuing this guidance. Although it lacks the force of law, the guidance will raise awareness of the risks and give the public confidence that the AI systems being developed and used are safe and will not infringe individuals' existing rights.

The key takeaway from the guidance document is the emphasis on the ethical and sustainable development and use of AI systems. The guidance outlines ethical principles like accountability, transparency and human oversight to ensure AI systems are developed and managed responsibly. It also stresses the need for comprehensive risk assessments, robust security measures and continuous monitoring to maintain the reliability and effectiveness of AI systems while protecting individuals' rights and privacy.

At this stage in the development of AI, human intervention and oversight are still necessary to safeguard against the risks of AI, particularly hallucinations and bias. Hopefully, humans can keep AI in check and avoid the type of AI apocalypse depicted in film and media. However, one is reminded of a scene in I, Robot in which Will Smith's character asks a robot: "Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?" to which the robot responds: "Can you?"

Acknowledgements to Legal Assistant Jacky Cheung for research and contribution to this article.

  1. "Guidance on the Ethical Development and Use of Artificial Intelligence"