AI Auditing

AI Risk & Control Matrix

Our open source framework mapping 84 AI risks to practical controls, regulatory requirements, and audit considerations across 12 domains. Built for internal auditors, risk managers, and compliance professionals who need to actually govern AI systems rather than just talk about them.

1. Overview

The Bloch.ai Open Source AI Risk and Control Matrix

A concrete list of AI risks and example controls built from experience.

An open source matrix of 84 AI risks mapped to expected controls, regulatory requirements, and audit considerations across 12 domains, from governance and data to generative AI and systemic risk.

Built for:

  • Internal auditors who need testable controls

  • Risk and compliance teams who need a defensible view of AI risk

  • Executives who want assurance that AI isn’t a black box

Download it here: Bloch AI Risk and Control Matrix

Why have we done this? Well, we have had a lot of exposure to a lot of frameworks.

Most of them are useless.

The typical AI governance framework tells you that AI has risks. It uses words like "responsible" and "ethical" and "trustworthy" without ever explaining what those words mean in practice. It gives you principles when you need procedures. It sounds impressive in a board presentation but falls apart the moment someone asks "so what do we actually do?"

This matrix exists to change that.

The Bloch AI Risk and Control Matrix maps 84 specific risks to concrete controls, regulatory requirements, and practical audit considerations. It covers governance, data, models, security, ethics, transparency, generative AI, deployment, monitoring, procurement, compliance, and systemic risks. Each risk tells you what can go wrong, what good controls look like, which deployment models are affected, and which regulations care about it.

It is not perfect. It is not comprehensive. It reflects our experience and perspective, which means it has blind spots. Some controls will not fit your context. Some risks will not apply to your organisation. Some regulatory interpretations may differ from your lawyers' views.

There may be the odd error.

Sorry.

But it is a starting point that actually helps you do something, rather than another set of principles to file away and forget, or one that needs an expensive SME just to read and interpret. We have found that most IT auditors can run with most of this without SME support (though if you do want support, please do shout :))

It is released under the Apache 2.0 License. Take it, use it, adapt it, build on it. If you find problems or have improvements, let us know via email or GitHub.

Disclaimer
Important: This matrix is provided on an “as is” basis, without any representations or warranties of any kind, whether express or implied. It is a general information and reference tool and does not constitute legal, regulatory, accounting, or other professional advice.
The risk descriptions, expected controls and regulatory mappings reflect Bloch.ai’s interpretation at the time of writing. They are indicative only and may be incomplete, out of date, or differ from the views of regulators or your legal advisers. We take no responsibility for their accuracy.
You remain solely responsible for assessing your organisation’s risks, designing and operating appropriate controls, and ensuring compliance with all applicable laws and regulations. If you require advice on any of these matters, you should obtain it from a suitably qualified professional.
All standards referenced are copyright their respective owners. This matrix provides only high-level, paraphrased mappings and does not reproduce the underlying standards.
This matrix is made available under the Apache License, Version 2.0. By using it, you agree to be bound by the terms of that licence.

2. Quick Start

Get Using It in Ten Minutes

You do not need to read all 84 risks before this becomes useful. Here is how to get value straight away.

Grab the Excel file here

Download it from the link below. Excel format means you can filter, sort, highlight, and add columns. In our experience, this is how risk frameworks actually get used.

Work out your deployment model

This matters more than most people realise. Are you calling APIs from OpenAI or Anthropic or Google? Running open source models on your own servers? Training custom models on proprietary data? The risks differ significantly depending on which of these applies to you.

The matrix flags each risk with Y (yes, this applies), P (partially applies), or N (not really relevant) for three deployment patterns: API/SaaS, On-Premises/Open Source, and Fine-Tuned/Custom. A firm that only uses ChatGPT through the API faces different challenges than one building credit scoring models from scratch.
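
If you would rather slice the matrix programmatically than with Excel filters, a minimal pandas sketch looks something like the one below. The file name and column headings ("API/SaaS", "Risk ID" and so on) are our assumptions for illustration, so adjust them to match the actual workbook.

```python
# Minimal sketch: filter the matrix to the risks relevant to one deployment model.
# The file name and column headings are assumptions - check them against the workbook.
import pandas as pd

matrix = pd.read_excel("bloch_ai_risk_control_matrix.xlsx")

# Keep risks flagged Y (applies) or P (partially applies) for the API/SaaS pattern.
api_risks = matrix[matrix["API/SaaS"].isin(["Y", "P"])]

print(f"{len(api_risks)} of {len(matrix)} risks apply to an API/SaaS deployment")
print(api_risks[["Risk ID", "Domain"]].head())
```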

Pick your starting point

If you are an auditor, start with whatever domain matches your engagement scope. Governance risks are 1-12. Data protection risks are 13-20. Model risks are 21-37. You do not need to boil the ocean or audit all of these. If you are looking for somewhere to start, use the Governance section.

If you are in risk management, start with whatever keeps you awake at night. Worried about bias? Look at the Ethics domain. Concerned about regulatory compliance? Jump to Compliance. Nervous about that generative AI pilot? The GenAI domain has five risks that probably apply.

Compare expected controls to reality

For each risk that matters to you, read the Expected Controls column. It describes what good may look like. Not perfection, but competent practice. Then look at what your organisation actually has in place. The gap between those two things is either a finding, a risk acceptance, or a remediation project. Or a suggestion for an expected control that we will gratefully add to the matrix.
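
If you want to track that comparison alongside the matrix itself, one lightweight option is to add your own assessment columns and save a working copy. Another hedged sketch; the new column names are our own invention, not part of the matrix.

```python
# Sketch: a working copy of the matrix with gap-assessment columns added.
# "Control In Place" and "Assessment" are illustrative column names of our own.
import pandas as pd

matrix = pd.read_excel("bloch_ai_risk_control_matrix.xlsx")

matrix["Control In Place"] = ""        # what the organisation actually has, completed during review
matrix["Assessment"] = "Not assessed"  # e.g. Finding / Risk accepted / Remediation planned

matrix.to_excel("ai_risk_gap_assessment.xlsx", index=False)
```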

Check the regulatory angle

The last column maps each risk to specific regulations across UK, EU, US, and Canada. If you need to justify why something matters, or prioritise based on regulatory exposure, that column gives you ammunition.

This is the hardest bit for us to get right, so we make no promises about it. Treat it as a starting point, and if you spot an error, please let us know.

That is genuinely it. Ten minutes to get oriented, then you have a structured way to think about AI risks rather than starting from scratch or relying on vibes.

3. The 12 Risk Domains

What the Matrix Covers

The matrix is 84 risks organised into 12 domains. Here is what each one is about and why it matters.

Governance (12 risks)

This is the foundation that everything else builds on. Strategy, policy, roles, responsibilities, accountability, audit trails, training, human oversight. Boring but essential.

Governance comes first because organisations that get this wrong tend to get everything else wrong too. If nobody owns AI decisions, if there is no policy framework, if senior leadership is not engaged, then your technical controls are theatre. You are going through the motions without actually governing anything.

Every one of these risks applies regardless of deployment model. Whether you built the AI or bought it, someone in your organisation needs to be accountable for what it does.

Data (8 risks)

Privacy and data protection, but specifically the AI flavours of those problems. Models that memorise training data and spit it back out. Re-identification attacks on supposedly anonymised datasets. Staff pasting confidential information into ChatGPT. Processing personal data without proper lawful basis. Forgetting that automated decisions trigger specific rights under GDPR.

If you work with personal data, this domain will give your DPO plenty to think about.

Model (17 risks)

The largest domain, and deliberately so. AI is just a mathematical model, despite what many people think. Models degrade over time. Data distributions shift. Systems that worked in testing fail in production. Risk assessments get done once and never updated. Documentation is incomplete or missing. Change management is an afterthought. Nobody plans for decommissioning until a system needs to be switched off urgently.

This is where PRA SS1/23 and SR 11-7 expectations live. If you are in financial services, this domain matters a lot.

Security (5 risks)

Standard cybersecurity frameworks were not designed for AI systems. Adversarial attacks that fool models with carefully crafted inputs. Prompt injection that hijacks system behaviour. Model extraction where attackers steal your intellectual property by querying the API enough times. AI-specific incident response that your SOC probably has not planned for.

Five risks, but they represent genuine gaps in most organisations' security posture.

Ethics (13 risks)

This is where it gets contentious. Environmental impact of training runs. Physical safety when AI controls real-world systems. Financial harm from bad recommendations. Psychological manipulation. Discrimination, both direct and through proxies. Historical bias baked into training data and amplified at scale.

The domain also includes specific regulatory requirements: EEOC guidance on AI in hiring, FCA Consumer Duty obligations, adverse action notice requirements under US consumer protection law, SEC concerns about conflicts of interest in AI-driven advice.

Some people think ethics is fluffy. Regulators increasingly disagree, and your customers most certainly do too.

Transparency (1 risk)

Just one risk in this domain, but it is a big one. Black box decisions that cannot be explained to the people affected by them, or to the regulators asking questions, or to the internal oversight functions trying to provide assurance.

Explainability requirements vary enormously depending on use case, jurisdiction, and who is asking. But the underlying principle is consistent: if you cannot explain how a decision was made, you have a problem.

Generative AI (5 risks)

These risks barely existed three years ago. Models that provide information useful for creating weapons. Hallucinations presented with complete confidence. Citations to sources that do not exist. Content that is dangerous, illegal, or deeply inappropriate generated on demand.

Most AI governance frameworks have not caught up with generative AI. This domain tries to address that gap.

Deployment (2 risks)

Two risks that cause disproportionate pain. Shadow AI, where people across the organisation adopt tools without anyone sanctioning them, creating governance nightmares. And cloud cost overruns, where AI consumption scales faster than anyone budgeted for.

Neither is glamorous. Both are common.

Monitoring (4 risks)

Governance without monitoring is a point-in-time exercise that decays immediately. Internal audit coverage of AI systems. Management review processes that actually review something. Learning from incidents rather than just closing them. Following through on identified issues rather than letting them languish.

If nobody is watching, your controls are theoretical.

Procurement (3 risks)

Third-party risks specific to AI. Not knowing the provenance of models you are buying. Depending on a supply chain that concentrates around a handful of providers. Getting locked into vendors in ways that become expensive to escape.

These risks hit hardest for API consumers, but they affect anyone using third-party components.

Compliance (10 risks)

Sector-specific regulatory requirements that go beyond general AI governance. Algorithmic trading rules. SR 11-7 model risk management. The growing patchwork of US state AI laws. Insurance regulatory guidance. EU AI Act prohibited practices and high-risk system requirements. Transparency disclosures.

Systemic (4 risks)

The risks that keep prudential regulators awake at night and carry the potential for financial catastrophe. Material errors in credit risk models that affect balance sheets. Integration failures where AI systems meet legacy infrastructure. Market correlation when everyone uses similar models. Pro-cyclical amplification where AI systems make market conditions worse.

These are macro-level concerns, but they start with individual organisations making individual decisions.

4. For Internal Auditors

Using the Matrix for AI Audit

Internal audit is increasingly expected to provide assurance over AI systems. That expectation often arrives without corresponding budget, training, or methodology. Audit teams are told to "cover AI" without much guidance on what that means in practice.

This matrix was built partly because we kept having the same conversations with audit teams trying to figure out where to start.

Scoping engagements

The twelve domains give you a structure for thinking about audit coverage. You do not need to cover everything in one engagement. A governance-focused review looks at risks 1-12. A data protection audit focuses on risks 13-20. A model risk review for a bank prioritises risks 21-37 and 71-80.

What you should not do is assume that "AI audit" is a single thing. The risks are too diverse for a generic approach.

Building audit programmes

The Expected Controls column describes what competent practice looks like for each risk. Turn those descriptions into audit procedures: does this control exist, is it designed to address the risk, and is it actually working?

The Auditor Notes column adds practical considerations. Some risks manifest differently depending on deployment model. Some have specific gotchas worth knowing about. The column tries to capture the things that would have been useful to know earlier.
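
One way to turn the Expected Controls column into a working paper is to expand each in-scope risk into the three questions above. A rough sketch, using the same illustrative file and column names as the Quick Start examples:

```python
# Sketch: expand each in-scope risk into existence / design / operating-effectiveness tests.
# File and column names ("Risk ID", "Expected Controls", "Domain") are assumptions.
import pandas as pd

matrix = pd.read_excel("bloch_ai_risk_control_matrix.xlsx")
in_scope = matrix[matrix["Domain"] == "Governance"]

tests = pd.DataFrame({"Test": [
    "Does the control exist?",
    "Is it designed to address the risk?",
    "Is it operating effectively?",
]})

# One row per risk per test question, ready to record evidence and conclusions against.
programme = in_scope[["Risk ID", "Expected Controls"]].merge(tests, how="cross")
programme.to_excel("governance_audit_programme.xlsx", index=False)
```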

Professional scepticism

Organisations will tell you they "just use APIs" as if that means AI governance does not apply to them. It does not work that way. Most governance risks apply regardless of deployment model. Data risks around confidential information disclosure apply especially to API usage because data leaves the controlled environment.

Push back on deployment model being used as an excuse for limited scope.

Writing findings

The Regulatory Traceability column gives you specific citations for your findings. "AI governance arrangements are inadequate" is weak. "AI governance arrangements do not meet FCA SYSC requirements for clear allocation of responsibilities" is stronger. Reference specific regulatory expectations and your findings carry more weight.

Dealing with second line

Risk and compliance functions may be using different AI frameworks. ISO 42001, NIST AI RMF, vendor-specific frameworks, internally developed approaches. The regulatory mapping helps translate between them. You should be testing the same underlying risks even if the terminology differs. Once again, this mapping is the hardest bit for us to get right, so we make no promises about it. Treat it as a starting point, and if you spot an error, please let us know.

Reporting upwards

Audit committees want to understand AI risk exposure without drowning in detail. The domain structure works well for this. Report coverage by domain. Summarise findings by domain. Flag which domains represent highest residual risk. That gives the committee something actionable.
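
If findings are logged against the matrix's risk IDs, that domain-level roll-up takes only a few lines. A sketch; the findings workbook and its columns are our own invention for illustration.

```python
# Sketch: roll individual findings up to domain level for audit committee reporting.
# The findings file and its "Risk ID" / "Rating" columns are illustrative assumptions.
import pandas as pd

matrix = pd.read_excel("bloch_ai_risk_control_matrix.xlsx")
findings = pd.read_excel("ai_audit_findings.xlsx")

summary = (
    findings.merge(matrix[["Risk ID", "Domain"]], on="Risk ID")
            .groupby(["Domain", "Rating"])
            .size()
            .unstack(fill_value=0)  # domains as rows, finding ratings as columns
)
print(summary)
```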

5. For Risk Managers

Using the Matrix for Enterprise Risk Management

AI risk does not fit neatly into traditional risk taxonomies. It is partly operational risk, partly technology risk, partly conduct risk, partly strategic risk. It touches legal, compliance, data protection, third-party management. Existing risk frameworks struggle to accommodate it.

This matrix gives you a structured way to identify AI-specific risks without rebuilding your entire risk framework.

Knowing what you have

You cannot assess risks for AI systems you do not know about. The first step is always inventory. What AI systems exist across the organisation? Who owns them? What do they do? What data do they process? What decisions do they inform?

The matrix helps here because it gives you categories to probe. When someone says they have no AI, ask about the twelve domains. Do they use any third-party APIs? Any automated decision-making? Any predictive models? Any chatbots? Often "we don't have AI" really means "we don't call it AI."
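
There is no single right format for that inventory, but a flat register with a handful of fields is usually enough to start with. A sketch of the kind of record we mean; the field names are illustrative, not prescribed by the matrix.

```python
# Sketch of a minimal AI system inventory record; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                  # what the system is called internally
    owner: str                 # accountable individual or function
    purpose: str               # what it does and which decisions it informs
    deployment_model: str      # "API/SaaS", "On-Prem/Open Source", or "Fine-Tuned/Custom"
    data_processed: str        # categories of data, including any personal data
    third_party_provider: str  # vendor or model provider, if any

inventory = [
    AISystemRecord("Customer service chatbot", "Head of Operations",
                   "Answers routine customer queries", "API/SaaS",
                   "Customer contact details, query text", "Third-party API provider"),
]
```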

Working through the risks

Once you know what exists, walk through the domains. Which risks apply to your context? Not all 84 will be relevant. A firm with no customer-facing AI has limited Consumer Duty exposure. A firm not subject to US banking regulation can deprioritise SR 11-7 concerns.

But be honest with yourself.

  • Shadow AI means risks exist that you have not sanctioned.

  • The inventory you have is probably incomplete.

Assessing controls

For each relevant risk, compare what you have against what the matrix describes as expected. Document the gaps. Some gaps will be acceptable given your risk appetite. Others will need remediation. The matrix does not tell you which is which. That is your judgement call.

Integrating with existing frameworks

You probably already have an operational risk framework, a technology risk framework, a third-party risk framework. The AI risks in this matrix need to connect to those existing structures, not replace them.

The ISO 42001 and NIST AI RMF mappings help with integration. If your organisation has adopted either of those standards, you can map the matrix risks to your existing control structure.

We have said it before and will say it again: the mappings are the hardest bit for us to get right, so we make no promises about them. Treat them as a starting point, and if you spot an error, please let us know.

Challenging the business

When business units propose AI initiatives, use the matrix to structure your challenge. Which risks does this introduce? What controls are planned? How does this affect aggregate risk exposure? Who is accountable if something goes wrong?

The domain structure gives you a consistent vocabulary for these conversations across different initiatives and business areas.

6. For Compliance

Using the Matrix for Regulatory Compliance

AI regulation is a moving target. New requirements keep arriving. Existing rules are being reinterpreted. Enforcement is increasing. Keeping track of what applies and demonstrating compliance is a significant challenge.

The matrix helps by connecting specific risks to specific regulatory requirements.

Finding your obligations

The Regulatory Traceability column maps each risk to requirements across UK, EU, US, and Canada. Filter to your jurisdictions and you have a focused view of what regulators expect.

The mapping is detailed. UK references include FCA rules, PRA supervisory statements, Bank of England discussion papers, ICO guidance. EU references cover the AI Act, GDPR, sectoral directives. US references span federal banking guidance, SEC requirements, FTC authority, and the growing mess of state laws. Canadian references include PIPEDA, OSFI guidance, and provincial requirements.
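
If only one or two jurisdictions matter to you, the traceability column can be cut down before you read it. A sketch, with the usual caveat that the column name and the search terms are assumptions about how the mapping is worded.

```python
# Sketch: narrow the regulatory traceability column to UK and EU references.
# The column name and search terms are assumptions about how the mapping is written.
import pandas as pd

matrix = pd.read_excel("bloch_ai_risk_control_matrix.xlsx")

uk_eu = matrix[matrix["Regulatory Traceability"].str.contains(
    "FCA|PRA|ICO|UK GDPR|AI Act", case=False, na=False)]

print(uk_eu[["Risk ID", "Domain", "Regulatory Traceability"]])
```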

Preparing for the EU AI Act

If you operate in the EU or serve EU customers, the AI Act is coming whether you are ready or not. Prohibited practices apply from February 2025. High-risk system requirements apply from August 2026.

The Compliance domain (risks 71-80) addresses AI Act requirements specifically. Use those risks to assess where you stand and what work remains.

Dealing with existing regulation

Most AI compliance obligations today derive from existing rules applied to new technology. FCA SYSC covers AI governance. GDPR Article 22 covers automated decision-making. SR 11-7 covers AI models. Consumer Duty covers AI-driven customer outcomes.

The matrix traces these connections explicitly. When regulators ask how you govern AI, you can explain how your existing compliance framework addresses AI-specific risks.

Gathering evidence

The Expected Controls column describes what regulators expect to see. Policy documentation. Training records. Risk assessments. Impact assessments. Approval records. Monitoring outputs. If you cannot produce these things when asked, you have a compliance gap regardless of whether the underlying controls exist.

Engaging with regulators

When discussing AI in regulatory meetings, the matrix provides common vocabulary. You can frame your approach using recognised domains, reference established standards like ISO 42001 and NIST AI RMF, and demonstrate awareness of specific regulatory expectations.

That said, the matrix is not a compliance shield. It reflects interpretation of requirements at the time of writing. Regulators may interpret obligations differently. Your legal team should validate any compliance claims.

7. Deployment Models

Why How You Deploy AI Matters

One of the biggest mistakes in AI governance is treating all AI the same regardless of how it is deployed. A firm calling the OpenAI API faces fundamentally different risks than one training custom models on proprietary data. The controls that make sense differ. The regulatory exposure differs. The skills you need internally differ.

The matrix flags each risk with applicability ratings for three deployment patterns.

API and SaaS

You consume AI through third-party services. You send requests, you get responses. The model itself is a black box. You have no control over architecture, training data, or how inference works.

This is the most common deployment pattern and often the most misunderstood from a risk perspective.

People assume that using APIs means they have outsourced AI risk to the vendor. They have not. They have outsourced some technical risks but retained most governance, data, ethics, and compliance risks. If the vendor's model discriminates against protected groups, you are still the one deploying a discriminatory system. If staff paste confidential data into the API, you are responsible for that data breach.

Risks that fully apply for API consumers include most governance risks, data confidentiality risks, ethics and bias risks, and compliance risks based on what decisions you make using AI outputs.

Risks that partially apply include model monitoring (you can observe outputs but not internal behaviour), audit trails (you can log what you send and receive but nothing else), and security (your attack surface differs from that of organisations running their own infrastructure).

On-Premises and Open Source

You run models on infrastructure you control. This might be open source models like Llama deployed internally, or commercial models licensed for self-hosting. Data stays within your environment.

This pattern gives you more control but more responsibility. You own the full lifecycle: deployment, monitoring, maintenance, updates, eventual retirement. You cannot blame a vendor when the model degrades or produces unexpected outputs.

All governance risks apply. All model lifecycle risks apply. All security risks apply since you own the attack surface. Most compliance risks apply based on your jurisdiction and use case.

Some risks apply less. Vendor data use is not a concern when data never leaves your environment. Supply chain risks differ since you are not dependent on SaaS availability.

Fine-Tuned and Custom

You build or significantly modify AI models. This includes fine-tuning foundation models on your own data, training models from scratch, or developing novel approaches.

Everything that applies to on-premises deployments also applies here, plus additional risks around training data provenance, model architecture decisions, and development methodology. You bear full responsibility. There is no vendor to share accountability with.

This pattern requires the most internal capability: data scientists, MLOps engineers, model validators, specialist infrastructure. Most organisations should not attempt it unless they have genuine need and appropriate resources.

Using the filter

The deployment model filter helps you prioritise. If you only consume APIs, do not spend equal time on all 84 risks. Focus on the subset that applies to your situation. But stay aware of the others in case your deployment approach evolves.
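
If you are working from the spreadsheet, the same prioritisation can be scripted. A minimal sketch, assuming the applicability columns are named "API/SaaS", "On-Prem/OSS" and "Fine-Tuned/Custom" and hold values such as "Full", "Partial" and "N/A" (verify against your copy):

```python
import pandas as pd

matrix = pd.read_excel("AI-RACM.xlsx")

# An API-only consumer: keep risks that apply fully or partially to API/SaaS use.
api_risks = matrix[matrix["API/SaaS"].isin(["Full", "Partial"])]

# Tackle fully applicable risks first; "Full" sorts before "Partial" alphabetically.
api_risks = api_risks.sort_values("API/SaaS")

print(f"{len(api_risks)} of {len(matrix)} risks apply to an API/SaaS deployment")
```

The same pattern works for the other two columns if your deployment approach changes.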

8. Regulatory Mapping

What is Covered and How to Use It

Each risk in the matrix maps to specific regulatory requirements across four jurisdictions. This section explains what is included and the limitations of the mapping.

United Kingdom

UK mappings reference FCA rules (SYSC for systems and controls, PRIN for principles, SUP for supervision, COBS for conduct of business), PRA requirements (particularly SS1/23 on model risk management and the Fundamental Rules), Bank of England discussion papers (especially DP5/22 on AI and machine learning), ICO guidance on AI and data protection, and Senior Managers Regime accountability requirements.

Financial services firms face the most prescriptive UK requirements. PRA SS1/23 essentially applies SR 11-7 equivalent expectations to UK banks. FCA Consumer Duty creates obligations for customer-facing AI. SM&CR means someone specific must be accountable for AI decisions.

European Union

EU mappings centre on the AI Act (prohibited practices, high-risk system requirements, transparency obligations, general-purpose AI provider duties), GDPR (lawful basis, automated decision-making, data protection impact assessments, data subject rights), and sectoral directives like CRD.

The AI Act is the most comprehensive AI-specific regulation anywhere in the world. Understanding the timelines matters: prohibited practices and AI literacy requirements apply from February 2025, high-risk system requirements from August 2026. General-purpose AI obligations apply from August 2025 and affect foundation model providers, with downstream implications for everyone else.

United States

US mappings include federal banking guidance (SR 11-7 on model risk management, OCC bulletins, FDIC guidance), SEC requirements for investment advisers and broker-dealers, FTC Section 5 authority over unfair and deceptive practices, CFPB guidance on AI in consumer credit, EEOC guidance on AI in employment decisions, and the proliferating state-level AI laws (Colorado, Illinois, and others).

The US lacks comprehensive federal AI legislation. Instead, existing authorities are being stretched to cover AI. This creates uncertainty. SR 11-7 is well-established for banking but applying it to AI models requires interpretation. State laws are creating a patchwork of overlapping and sometimes conflicting requirements.

Canada

Canadian mappings reference PIPEDA for privacy requirements, OSFI B-10 for third-party risk management in federally regulated financial institutions, Quebec Law 25 for provincial privacy requirements including automated decision provisions, and AIDA (the Artificial Intelligence and Data Act, Bill C-27).

AIDA would have created comprehensive federal AI requirements but the bill lapsed in January 2025 when Parliament was prorogued. The matrix retains AIDA references as non-binding indicators of regulatory direction. The requirements may resurface in future legislation.

Using the mapping

Filter to your relevant jurisdictions. If you only operate in the UK, deprioritise risks traced primarily to US state laws. If you serve EU customers, AI Act requirements apply regardless of where your company is incorporated.

Limitations

The mapping reflects interpretation at the time of writing. Regulations change. Guidance evolves. Enforcement practice develops. Your legal team should validate any compliance conclusions.

Some regulatory references are more direct than others. Where a regulation explicitly addresses AI (like the EU AI Act), the mapping is straightforward. Where existing rules are being applied to AI contexts (like applying SR 11-7 to AI models), there is more interpretive judgement involved.

Important: this matrix is provided as is without warranty. It does not constitute legal, regulatory, or professional advice. The regulatory mappings reflect our interpretation at the time of writing and may not reflect current requirements or how regulators will interpret obligations in your specific circumstances.
You are responsible for your own compliance. If in doubt, get proper legal advice.

Standards are copyright their respective owners. The matrix provides high-level mappings only and does not reproduce those standards.

9. Contributing

Help Us Make It Better

This matrix is open source because, well, we are a bit sick of being in consultancy and charging multiple people for the same thing. Life is better when you work together, right? AI governance should not be a proprietary advantage: the risks are too important and the knowledge too dispersed for any single person or organisation to get it right alone.

So, help us to help you. If you can make it better, please do.

What would help

Risks that have been missed. There are certainly some. The matrix is weighted towards financial services and UK/EU contexts. Other sectors and jurisdictions will have risks not yet considered.

Better control descriptions. The Expected Controls column reflects a view of good practice, but practical experience often reveals better approaches. If you have implemented something that works well, share it.

Updated regulatory references. Requirements change constantly. If a regulation has been amended, new guidance has been published, or enforcement practice has evolved, the mapping needs updating.

Corrections. Errors inevitably exist. If something is wrong, flag it.

Practical auditor notes. The Auditor Notes column captures things that would have been useful to know earlier. If you have learned something the hard way that would help others, add it.

What would not help

Vendor-specific product recommendations disguised as controls. Generic AI ethics principles without operational substance. Theoretical risks with no practical relevance. Marketing language.

The matrix should stay focused and usable. Comprehensive but not exhaustive. Practical, not academic.

How to contribute

The source files are on GitHub. Fork the repository, make your changes, and submit a pull request. Explain what you changed and why. Cite regulatory sources or practical evidence where relevant.

For small corrections, a brief pull request is fine. For significant additions like new risks or major control revisions, include more context on rationale and supporting evidence.

What happens next

Contributions are reviewed for accuracy, relevance, and consistency with the matrix structure. Not everything will be accepted. The goal is quality over quantity. A focused tool that people actually use beats a comprehensive catalogue that nobody reads.

Contributors are acknowledged in the changelog. Version numbers follow semantic versioning: patches for minor fixes, minor versions for new content, major versions for structural changes.

Discussion

If you want to discuss ideas before contributing, or flag issues without writing specific fixes, open an issue on GitHub. That is also the place for questions about interpretation or suggestions for future direction.

10. Download

Get the Matrix

Current Version: 0.5 DRAFT

Last updated: November 2024

This is an initial public release. It reflects current thinking and will evolve based on feedback and regulatory developments.

Excel Download

The full matrix with all 84 risks, expected controls, regulatory mapping, deployment model flags, and auditor notes. Excel format so you can filter, sort, and annotate for your context.

Download from GitHub

GitHub Repository

Source files, version history, and issue tracking. This is where updates happen and where you can suggest improvements.

github.com/Bloch-AI/AI-RACM

What is Included

84 risks across 12 domains. Expected controls for each risk. Deployment model applicability (API/SaaS, On-Prem/OSS, Fine-Tuned/Custom). Practical auditor notes. ISO 42001 Annex A control mapping. NIST AI RMF trustworthiness and function mapping. Regulatory traceability to UK, EU, US, and Canadian requirements.

What is Coming

Refinement based on feedback from people actually using this. Additional regulatory mapping as new requirements emerge. Supplementary materials like audit programme templates and risk assessment questionnaires if there is demand.

Licensing

Apache 2.0 License. Use it, modify it, distribute it, include it in commercial products. Attribution is required: retain the copyright notice and licence file in any redistribution.

Disclaimer
Important: This matrix is provided on an “as is” basis, without any representations or warranties of any kind, whether express or implied. It is a general information and reference tool and does not constitute legal, regulatory, accounting, or other professional advice.
The risk descriptions, expected controls and regulatory mappings reflect Bloch.ai’s interpretation at the time of writing. They are indicative only and may be incomplete, out of date, or differ from the views of regulators or your legal advisers. We take no responsibility for their accuracy.
You remain solely responsible for assessing your organisation’s risks, designing and operating appropriate controls, and ensuring compliance with all applicable laws and regulations. If you require advice on any of these matters, you should obtain it from a suitably qualified professional.
All standards referenced are copyright their respective owners. This matrix provides only high-level, paraphrased mappings and does not reproduce the underlying standards.
This matrix is made available under the Apache License, Version 2.0. By using it, you agree to be bound by the terms of that licence.

Feedback and Questions

Found a problem? Have a suggestion? Want to discuss something? Raise an issue on GitHub.

github.com/Bloch-AI/AI-RACM/issues

For other enquiries:

bloch.ai/contact-us

Stay Updated

New versions will be announced on LinkedIn and through the Bloch AI website.

Join our mailing list below.

bloch.ai

Continue Learning

Explore more guides, tools, and insights to advance your journey and build lasting capability across your business.

Join Our Mailing List

The Innovation Experts

Get in Touch
