10/31/2024 | News release | Distributed by Public on 10/31/2024 01:20
From racial discrimination in loan applications to errors in medical diagnoses, unwanted bias in artificial intelligence (AI) systems can have significant consequences. On the flip side, intended bias can be highly useful, such as screening for risk profiles in loan applications, or introducing a bias towards a gender or race to compensate for known societal biases, for instance in industries where certain groups are underrepresented.
So how does bias occur? AI systems learn from data, and can therefore end up reflecting biases that already exist in society, or biases that arise from errors in the system's design or from poor-quality data.
Learned biases in a system can then amplify or exacerbate the problem, potentially favouring or disregarding groups of people, objects, concepts or outcomes. To complicate things further, removing one kind of bias can introduce others. Because AI systems are often used to support decision making, unwanted biases can lead to poor results and are therefore problematic.
The errors that result from unwanted bias can also erode trust in AI systems and reduce the potential benefits that this new technology can bring. That is why the joint IEC and ISO committee for AI, SC 42, has just developed a standard to help.
ISO/IEC TS 12791 outlines the steps that can be taken to treat unwanted bias when developing or even using AI systems. It can particularly help to treat unwanted bias in machine learning systems that conduct classification and regression tasks.
It outlines aspects such as which stakeholders need to be considered, the needs of those stakeholders, data sources, testing and evaluation. It also provides various techniques to address unwanted bias, including algorithmic, training and data techniques. It is based on ISO/IEC TR 24027, which provides methods and techniques for measuring and assessing bias.
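To make the idea of measuring bias concrete, here is a minimal, hypothetical sketch of one metric commonly used when assessing classifiers: the demographic parity difference, i.e. the gap in positive-prediction rates between groups. The function name and example data are illustrative only; this is the kind of measurement that documents like ISO/IEC TR 24027 discuss, not code taken from the standard itself.

```python
# Illustrative sketch (not from the standard): measure the gap in
# positive-prediction rates between demographic groups.

def demographic_parity_difference(predictions, groups):
    """Return the gap between the highest and lowest positive-prediction
    rates across the groups in `groups` (binary predictions: 0 or 1)."""
    rates = {}
    for label in set(groups):
        # Collect the predictions made for members of this group
        selected = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: a model that approves 75% of group "A" but only 25% of group "B"
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A gap of zero would mean both groups receive positive predictions at the same rate; whether a non-zero gap is unwanted bias or an intended correction depends, as the standard emphasizes, on the context and stakeholders involved.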
Adam Leon Smith, project leader of the standard, said that identifying and treating bias is an active area of research and becomes increasingly complex as AI systems evolve.
"One of the key challenges is determining what bias is actually needed or what is negative and unwanted," he said.
"Age-profiling can be considered unacceptable when it comes to job applications, for example, but important when evaluating medical treatments. Other biases can creep in from existing societal biases and become challenging to recognize, particularly where multiple AI applications are used. Preventing and treating unwanted bias is possible, and extremely important, in order to allow AI technology to provide the many benefits to society that it promises."
SC 42 develops international standards for AI, taking a holistic approach to consider the entire AI ecosystem. It looks at technology capability and non-technical requirements, such as business, regulatory and policy requirements, application domain needs, and ethical and societal concerns.
The committee organizes regular workshops on AI to discuss emerging trends, technology, requirements and applications as well as the role of standards. They bring together innovators at the frontier of AI development from diverse locations, sectors and backgrounds involved in research, deployment, standardization, startups, applications and oversight.
Hear more from Wael William Diab about SC 42 in this video.
The next workshop will be held on 9 and 11 December. Learn more.