AI auditing is the research and practice of assessing, mitigating, and assuring an algorithm’s safety, legality, and ethics. Audits form part of the overall assurance of AI algorithms, which also encompasses certification and insurance of algorithms.
AI audits can occur at any point in the lifecycle of a system, from design and development through to deployment, and can focus on a particular component of an AI system, or can be a holistic evaluation of the entire system.
The key risks that can be audited for are:
Audits can be carried out for all or several of these verticals, or can be narrowed to focus on just a single vertical. In fact, many emerging laws in this space impose requirements for bias audits specifically, leaving audits of the other verticals optional.
A bias audit is an evaluation of an AI system that seeks to determine whether it results in inequitable outcomes based on subgroup membership, or whether it treats individuals belonging to a particular subgroup differently because of that membership. Bias can therefore be conceptualized as being measured in terms of unequal treatment and unequal impact.
Depending on the definition of bias being used for the audit, the approach can vary. For example, a bias audit focusing on unequal treatment might examine whether particular features within a model are correlated with subgroup membership, or whether some features are weighted differently for different subgroups, meaning different models are effectively used for different subgroups.
On the other hand, when bias audits focus on unequal outcomes, they might examine whether the rate at which a certain subgroup is designated to the positive condition is significantly different from that of another subgroup, or whether groups have an equal likelihood of being designated to the positive condition.
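An outcome-rate comparison of this kind can be sketched in a few lines. The data, group names, and the 0.8 threshold below (the "four-fifths rule" used as a rule of thumb in US employment contexts) are illustrative assumptions, not a definitive audit methodology:

```python
# Sketch of an unequal-outcomes check: compare the rate at which each
# subgroup receives the positive outcome. All data here is hypothetical.

def selection_rate(outcomes):
    """Fraction of decisions in `outcomes` that are positive (1)."""
    return sum(outcomes) / len(outcomes)

def impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening decisions (1 = selected, 0 = rejected)
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% selected
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% selected

ratio = impact_ratio(group_a, group_b)
print(f"Impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50, below the 0.8 rule of thumb
```

In practice an audit would also test whether such a difference is statistically significant rather than relying on the ratio alone.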
There may also be differences in the accuracy of models for different subgroups, which may affect outcomes. For example, there are significant disparities in the accuracy of commercial facial recognition tools based on skin tone and sex, and the accuracy of transcription software varies by factors such as accent and race.
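Checking for accuracy disparities amounts to computing the same performance metric separately for each subgroup. The predictions, labels, and group names below are hypothetical, purely to illustrate the shape of such a check:

```python
# Sketch of a per-subgroup accuracy check: compare how often predictions
# match ground-truth labels within each subgroup. All data is hypothetical.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical (predictions, labels) pairs, keyed by subgroup
results = {
    "group_a": ([1, 0, 1, 1, 0, 1, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1]),
    "group_b": ([1, 1, 0, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0, 0, 0]),
}

for group, (preds, labels) in results.items():
    print(f"{group}: accuracy = {accuracy(preds, labels):.2f}")
```

A large gap between the per-group figures, as in this toy data, would prompt further investigation even if the overall accuracy looks acceptable.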
Bias audits could also examine the data used to train models to evaluate whether the data is representative of all groups the AI system is intended to interact with, or whether certain groups might be over or under represented, which could result in unfair treatment or unfair impact if the system is not optimized for some subgroups.
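A representativeness check like this can be sketched by comparing each subgroup's share of the training data against a reference population share. The group labels, counts, and reference shares below are assumptions for illustration:

```python
# Sketch of a training-data representation check: for each subgroup,
# compute (share in training data) - (reference population share).
# Positive gaps indicate over-representation, negative under-representation.
from collections import Counter

def representation_gaps(training_groups, population_shares):
    """Map each group to its training-data share minus its population share."""
    counts = Counter(training_groups)
    total = len(training_groups)
    return {g: counts[g] / total - share
            for g, share in population_shares.items()}

# Hypothetical subgroup label for each training example
training_groups = ["a"] * 700 + ["b"] * 200 + ["c"] * 100
population_shares = {"a": 0.5, "b": 0.3, "c": 0.2}

for group, gap in representation_gaps(training_groups, population_shares).items():
    flag = "over" if gap > 0 else "under"
    print(f"group {group}: {flag}-represented by {abs(gap):.0%}")
```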
AI audits play a crucial role in identifying and managing a spectrum of risks associated with the deployment of AI technologies. These risks, if not properly managed, can lead to significant damages for organizations.
For example, inaccurate or biased AI systems can lead to poor decision-making, impacting customer satisfaction and business outcomes. Moreover, non-compliance with regulatory standards can result in significant legal and financial repercussions. By conducting AI audits, organizations proactively address these challenges, ensuring their AI systems are both effective and compliant.
Broadly, there are three types of risks that bias audits can help to protect against:
While bias audits are important for compliance with existing equal opportunity laws, specific laws have also been introduced and enforced that explicitly require bias audits, particularly in the HR Tech sector. Consequently, failing to perform a bias audit when these laws apply to a system is itself a legal and compliance risk.
The laws that require bias audits are:
Moreover, Colorado’s SB21-169, which prohibits unfair discrimination in insurance arising from the use of algorithms, predictive models, and external consumer data and information sources, is in the process of having specific rules developed for different types of insurance and insurance practices. As part of this process, a regulation on quantitative testing of algorithms and predictive models for detecting unfair discrimination in life insurance underwriting has been proposed, which essentially requires internal bias audits of the predictive models, algorithms, and data sources used in life insurance underwriting.
With bias audits increasingly becoming a legal requirement, not conducting one can lead to direct legal liability, and can also result in reputational and financial damage.
Conducting bias audits, whether or not legally required, can mitigate risks at a number of levels and improve trust and confidence in a system.
Bias cannot be mitigated if you do not know it exists, so bias audits are a vital first step towards ensuring that AI systems do not treat users unfairly or produce unfair outcomes. If an audit identifies bias in a system, that finding is the starting point to work back from.
For example, if a bias audit that looked at the outcome of a system found that it did not result in equal outcomes for different subgroups, then this allows for further investigation into the source of bias. This could take the form of examining whether the training data represents different subgroups adequately, and taking steps to rectify this and retrain the model if this is not the case.
Furthermore, if features are weighted differently for different groups, or if the model does not fit different subgroups equally well and therefore treats them differently, the model could be retrained on a common set of predictors relevant to multiple groups, so that the same model is used for every group.
Moreover, if a bias audit of a model finds that certain features are correlated with subgroup membership, then steps could be taken to investigate how removing or lessening the influence of these predictors affects the performance of the model and whether this rectifies subgroup differences.
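A simple version of this proxy check is to measure the correlation between a feature and subgroup membership encoded as 0/1. A plain Pearson correlation is used below as one possible measure; the feature values and group labels are hypothetical:

```python
# Sketch of a proxy check: measure whether a feature correlates with
# subgroup membership (encoded 0/1) using Pearson correlation.
# The feature values and subgroup flags are hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical feature (e.g. a postcode-derived score) and subgroup flags
feature = [0.9, 0.8, 0.85, 0.2, 0.3, 0.25, 0.1, 0.95]
subgroup = [1, 1, 1, 0, 0, 0, 0, 1]

r = pearson(feature, subgroup)
print(f"correlation with subgroup membership: {r:.2f}")
```

A strong correlation, as in this toy data, suggests the feature may act as a proxy for subgroup membership, motivating the ablation experiments the paragraph above describes.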
On the other hand, if a bias audit does not find evidence of bias, then recommendations could be issued on how to continuously monitor the system to prevent future possible instances of bias, and the strengths in the approach for one system could serve as a learning resource when creating another system.
In short, bias audits serve as an essential first step for bias mitigation and ongoing monitoring to prevent potential future bias, and can reduce the risk associated with an AI system.
While many laws mandating bias audits, such as Local Law 144, require them annually, best practice generally treats this as the minimum frequency. If any significant updates are made to the system between annual audits, an additional post-hoc bias audit should also be conducted.
While this applies to systems that have already been deployed, those still in development should undergo bias audits at each of the following stages:
Integrating AI audits into existing risk management frameworks is crucial for organizations to effectively oversee the unique risks posed by AI technologies. This integration ensures that AI-related risks are systematically identified, assessed, and mitigated as part of the organization's overall risk strategy. Key concepts for successful integration include:
By incorporating these concepts, organizations can create a holistic risk management approach that encompasses the unique challenges of AI. This not only enhances the safety and reliability of AI systems but also aligns them with broader organizational risk management objectives.
Preparing for an AI audit requires gathering comprehensive information about your organization's AI systems and understanding the relevant regulatory landscape. An accurate inventory of all AI applications, including those not immediately obvious (known as 'dark AI'), is essential. Key information to prepare includes:
Bias audits will complement this information with:
Having this information at hand not only streamlines the audit process but also ensures a thorough and effective evaluation. Understanding the regulatory context is particularly crucial, as it guides the audit towards compliance with specific legal and ethical standards.
Act now to get an AI bias audit: schedule a call to start your bias audit journey and minimize the risk of financial penalties.