Audit your AI systems for bias

Measure and mitigate bias across your AI inventory

What is an AI Audit?

AI auditing is the research and practice of assessing, mitigating, and assuring an algorithm’s safety, legality, and ethics. Audits form part of the overall assurance of AI algorithms, which also encompasses certification and insurance of algorithms.

AI audits can occur at any point in the lifecycle of a system, from design and development through to deployment, and can focus on a particular component of an AI system, or can be a holistic evaluation of the entire system.

The key risks that can be audited for are:

  • Privacy – the risk that a system is vulnerable to leaking personal or critical data
  • Explainability – the risk that a system might not be understandable to deployers and end users
  • Efficacy – the risk that a system does not perform well for its intended use case
  • Robustness – the risk that a system is vulnerable to changes or attempts to attack or manipulate it
  • Bias – the risk that a system treats individuals or groups unfairly or results in unfair outcomes for particular individuals or groups

Audits can be carried out for all or several of these verticals, or can be narrowed to focus on just a single one. In fact, many emerging laws in this space impose requirements for bias audits specifically, leaving audits of the other verticals optional.

What is a bias audit?

A bias audit is an evaluation of an AI system that seeks to determine whether it results in inequitable outcomes based on subgroup membership, or whether it treats individuals belonging to a particular subgroup differently based on their subgroup membership. Bias can therefore be conceptualized and measured in terms of unequal treatment and unequal impact.

Depending on the definition of bias being used for the audit, the approach can vary. For example, a bias audit focusing on unfair treatment might examine whether particular features within a model are correlated with subgroup membership, or whether some features are weighted differently for different subgroups, meaning different models are effectively used for different subgroups.
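As a minimal sketch of the "unfair treatment" check described above, the snippet below computes the correlation between a single model feature and a binary subgroup indicator; the feature values, group labels, and the flagging threshold are all illustrative placeholders, not prescribed by any regulation.

```python
# Sketch: check whether a model feature is correlated with subgroup
# membership. All data below is hypothetical, for illustration only.

def pearson_corr(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical feature (e.g. one component of a screening score) and a
# binary subgroup indicator (1 = member of the subgroup of interest).
feature = [0.9, 0.8, 0.85, 0.3, 0.2, 0.25]
subgroup = [1, 1, 1, 0, 0, 0]

r = pearson_corr(feature, subgroup)
if abs(r) > 0.5:  # illustrative threshold, not a regulatory one
    print(f"feature strongly correlated with subgroup (r={r:.2f})")
```

In a real audit this check would be run across all model features, typically with more robust statistics than a single Pearson coefficient.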

On the other hand, when bias audits focus on unequal outcomes, they might examine whether the rate at which one subgroup is designated to the positive condition differs significantly from the rate of another subgroup, or whether all groups have an equal likelihood of being designated to the positive condition.
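The unequal-outcomes check above can be sketched as a selection-rate comparison. The group labels and outcomes below are hypothetical, and the impact ratio shown (each group's selection rate relative to the most-selected group) is one common way to express the disparity, not the only one.

```python
# Sketch of the "unequal outcomes" check: compare the rate at which
# each subgroup receives the positive outcome. Data is illustrative.
from collections import defaultdict

def selection_rates(groups, outcomes):
    """Positive-outcome rate per group (outcome 1 = positive)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        positives[g] += y
    return {g: positives[g] / totals[g] for g in totals}

groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]

rates = selection_rates(groups, outcomes)
# Impact ratio: each group's rate relative to the most-selected group.
best = max(rates.values())
impact = {g: r / best for g, r in rates.items()}
print(rates)
print(impact)
```

A low impact ratio for a group is a signal for further investigation, not proof of discrimination on its own.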

There may also be differences in the accuracy of models for different subgroups, which may affect outcomes. For example, there are significant disparities in the accuracy of commercial facial recognition tools based on skin tone and sex, and the accuracy of transcription software varies by factors such as accent and race.
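Accuracy disparities of the kind described above can be surfaced with a simple per-subgroup accuracy breakdown; the predictions, labels, and group tags below are illustrative placeholders.

```python
# Sketch: per-subgroup accuracy check. All data is hypothetical.

def accuracy_by_group(groups, y_true, y_pred):
    """Fraction of correct predictions within each group."""
    correct, total = {}, {}
    for g, t, p in zip(groups, y_true, y_pred):
        total[g] = total.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (t == p)
    return {g: correct[g] / total[g] for g in total}

groups = ["A", "A", "A", "B", "B", "B"]
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]

acc = accuracy_by_group(groups, y_true, y_pred)
print(acc)
```

In practice the same breakdown would be computed for other error metrics too (false positive and false negative rates, for example), since overall accuracy can mask asymmetric errors.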

Bias audits could also examine the data used to train models to evaluate whether the data is representative of all groups the AI system is intended to interact with, or whether certain groups might be over- or under-represented, which could result in unfair treatment or unfair impact if the system is not optimized for some subgroups.
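A representativeness check of this kind can be sketched by comparing each subgroup's share of the training data against a reference benchmark, such as the population the system will serve. The counts, benchmark shares, and the 80%-of-benchmark flag below are all assumed values for illustration.

```python
# Sketch: compare each subgroup's share of the training data against a
# reference benchmark. Counts and benchmark shares are hypothetical.
from collections import Counter

train_groups = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
benchmark = {"A": 0.50, "B": 0.30, "C": 0.20}  # assumed target shares

counts = Counter(train_groups)
n = sum(counts.values())
for g, target in benchmark.items():
    share = counts[g] / n
    # Illustrative rule: flag groups at less than 80% of their benchmark.
    flag = "UNDER-REPRESENTED" if share < 0.8 * target else "ok"
    print(f"{g}: {share:.0%} of data vs {target:.0%} benchmark -> {flag}")
```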

Why are Bias Audits Important and What Risks Do They Manage?

AI audits play a crucial role in identifying and managing a spectrum of risks associated with the deployment of AI technologies. These risks, if not properly managed, can lead to significant damages for organizations.

For example, inaccurate or biased AI systems can lead to poor decision-making, impacting customer satisfaction and business outcomes. Moreover, non-compliance with regulatory standards can result in significant legal and financial repercussions. By conducting AI audits, organizations proactively address these challenges, ensuring their AI systems are both effective and compliant.

Broadly, there are three types of risks that bias audits can help to protect against:

  • Legal risks – existing nondiscrimination laws apply to AI, and non-compliance with them can result in legal action and complaints
  • Financial risks – violations of non-discrimination and other relevant laws can result in hefty fines, having a significant financial impact on an organization
  • Reputational risks – instances of bias can reduce trust in a system or even an entire organization, which can, in turn, result in additional financial impacts

Legal requirements for bias audits

While bias audits are important for compliance with existing equal opportunity laws, a number of laws have also been introduced, and in some cases enforced, that specifically require bias audits, particularly in the HR Tech sector. Consequently, not performing a bias audit when these laws apply to a system is itself a legal and compliance risk.

The laws that require bias audits are:

  • NYC Local Law 144 (enforced from 5 July 2023) – requires annual independent, impartial bias audits of automated employment decision tools used for making hiring or promotion decisions in New York City and imposes transparency and notification obligations
  • NY AB567 (proposed) – requires annual, impartial bias audits of automated employment decision tools used in New York and imposes transparency obligations
  • New York S7623 (proposed) – restricts electronic monitoring and requires in-depth, impartial, independent bias audits of automated employment decision tools and reporting of results
  • New Jersey S1588 (proposed) – requires vendors of automated employment decision tools to provide annual impartial audits of their tools at no additional cost to employers
  • Pennsylvania HB1729 (proposed) – requires independent, impartial bias audits of automated employment decision tools and imposes notification and transparency obligations

Moreover, Colorado’s SB21-169, which prohibits unfair discrimination in insurance arising from the use of algorithms, predictive models, and external consumer data and information sources, is in the process of having specific rules developed for different types of insurance and insurance practices. As part of this process, a regulation on quantitative testing of algorithms and predictive models for detecting unfair discrimination in life insurance underwriting has been proposed, which would effectively require internal bias audits of the predictive models, algorithms, and data sources used in life insurance underwriting.

With bias audits increasingly becoming a legal requirement, not conducting one can lead to direct legal liability, and can also result in reputational and financial damage.

Conducting bias audits, whether or not legally required, can mitigate risks at a number of levels and improve trust and confidence in a system.

How Can AI Audits Help with Mitigating Bias?

Bias can’t be mitigated if you do not know that it exists, so bias audits are a vital first step towards ensuring that AI systems do not treat users unfairly or result in unfair outcomes; if an audit identifies that there is bias associated with a system, then this is a great starting point to work back from.

For example, if a bias audit that looked at the outcome of a system found that it did not result in equal outcomes for different subgroups, then this allows for further investigation into the source of bias. This could take the form of examining whether the training data represents different subgroups adequately, and taking steps to rectify this and retrain the model if this is not the case.

Furthermore, if features are weighted differently for different groups or if the model does not fit different subgroups equally well and therefore treats them differently, then the model could be retrained to use a common set of predictors relevant to multiple groups, where the same model is used for each group.

Moreover, if a bias audit of a model finds that certain features are correlated with subgroup membership, then steps could be taken to investigate how removing or lessening the influence of these predictors affects the performance of the model and whether this rectifies subgroup differences.

On the other hand, if a bias audit does not find evidence of bias, then recommendations could be issued on how to continuously monitor the system to prevent future possible instances of bias, and the strengths in the approach for one system could serve as a learning resource when creating another system.
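The continuous monitoring recommended above can be sketched as a recurring check that recomputes a disparity metric on each new batch of decisions. The monthly selection rates below are hypothetical, and the 0.8 threshold (the widely used "four-fifths" heuristic) is one illustrative choice of alert level, not a legal standard.

```python
# Sketch of a recurring monitoring check: recompute the impact ratio
# for each monitoring period and flag any drop below a chosen
# threshold. All figures are hypothetical.

def impact_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one."""
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi if hi else 1.0

THRESHOLD = 0.8  # illustrative "four-fifths" alert level

monthly_rates = [  # (group A rate, group B rate) per monitoring period
    (0.50, 0.45),
    (0.52, 0.44),
    (0.55, 0.40),
]

for month, (ra, rb) in enumerate(monthly_rates, start=1):
    ratio = impact_ratio(ra, rb)
    status = "ALERT" if ratio < THRESHOLD else "ok"
    print(f"month {month}: impact ratio {ratio:.2f} -> {status}")
```

In production this check would typically feed an alerting pipeline rather than printing, so that a drift toward disparity triggers review before the next annual audit.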

In short, bias audits serve as an essential first step for bias mitigation and ongoing monitoring to prevent potential future bias, and can reduce the risk associated with an AI system.

How often should AI Audits be performed?

While many laws mandating bias audits, such as Local Law 144, require annual bias audits, best practice generally treats this as the minimum frequency. In addition, if any significant updates are made to the system between annual bias audits, a further bias audit should be conducted after each update.

While this cadence applies to systems that have already been deployed, systems still in development should undergo bias audits at each of the following stages:

  • Data and Task Setup to ensure that data pipelines are well structured and well-designed and that they do not present an opportunity for bias to enter at the first stage in the development process
  • Feature pre-processing to examine whether there is potential for bias in the feature space
  • Model selection to evaluate whether choosing one model over another has any implications for bias
  • Post-processing and reporting to make sure there are built-in mechanisms for detecting and reporting bias

How can Bias Audits be Integrated into Existing Risk Management Frameworks?

Integrating AI audits into existing risk management frameworks is crucial for organizations to effectively oversee the unique risks posed by AI technologies. This integration ensures that AI-related risks are systematically identified, assessed, and mitigated as part of the organization's overall risk strategy. Key concepts for successful integration include:

  • Risk Identification: Pinpointing specific risks associated with AI systems.
  • Impact Assessment: Evaluating the potential consequences of AI-related risks.
  • Control Implementation: Establishing measures to mitigate identified risks.
  • Continuous Monitoring: Regularly reviewing AI systems for emerging risks.
  • Compliance Alignment: Ensuring AI audits are in sync with regulatory requirements.

By incorporating these concepts, organizations can create a holistic risk management approach that encompasses the unique challenges of AI. This not only enhances the safety and reliability of AI systems but also aligns them with broader organizational risk management objectives.

What Information do I Need to Prepare For An AI Bias Audit?

Preparing for an AI audit requires gathering comprehensive information about your organization's AI systems and understanding the relevant regulatory landscape. An accurate inventory of all AI applications, including those not immediately obvious (known as 'dark AI'), is essential. Key information to prepare includes:

  • AI Inventory: A detailed list of all AI systems and applications in use, including information about the intended purpose and outcomes of each application.
  • Data Sources and Usage: Information on the data sets used for training and operating AI systems. If the audit is examining the input data, then direct access to this data may also be required.
  • Model specifications: Information on the type of model used and documentation of any decisions that led to choosing a particular model. If the audit is also examining the model itself, then access to the model weights, parameters, and related artifacts will also be required.

Bias audits will complement this information with:

  • Regulatory frameworks: Knowledge of applicable (AI-related) regulations, laws, and standards against which to audit the system.
  • Evaluation metrics: Expertise in the right metrics to use when, based on regulatory frameworks and industry best practices.

Having this information at hand not only streamlines the audit process but also ensures a thorough and effective evaluation. Understanding the regulatory context is particularly crucial, as it guides the audit towards compliance with specific legal and ethical standards.

Act now to get an AI bias audit - Schedule a call to start your bias audit journey and minimize the risk of financial penalties.
