CEO Playbook: AI Strategy

Most Digital & AI strategies fail because organisations pick technology first, then work out what to do with it. This playbook shows you how to work backwards from your goal, mapping what needs to happen across your business before you buy a single tool.

1. Why AI Strategies Fail

You probably need an AI strategy. You might already have one that isn't working. Or you might be stuck at the starting line, unsure where to begin.

The common problem is the same: organisations start with technology instead of strategy. Someone hears about GPT at a conference and comes back excited. Leadership decides "we need to do something with AI." The question becomes "what tools should we buy?" not "what are we actually trying to achieve?"

This is backwards. AI is not a strategy. It's a tool that might help you execute a strategy if you have one. But most organisations don't have a clear goal. They have vague aspirations like "improve efficiency" or "become more data-driven" that mean nothing specific.

Here's what usually happens. Leadership forms an AI committee. The committee produces a document about innovation and transformation. Maybe they buy some tools. Twelve months later, nothing has changed. Someone points out they've spent money and have nothing to show for it. The cycle repeats.

The problem isn't the technology. The problem is lack of strategic clarity. If you cannot articulate what winning looks like in specific, measurable terms, no amount of AI will help you. "Improve efficiency" is not a goal. "Reduce invoice processing time from 5 days to 24 hours" is a goal.

The second problem is that AI strategies ignore culture. Buying tools is easy. Getting people to change how they work is hard. Getting them to trust a system instead of their spreadsheet is hard. Getting senior people to admit they don't understand something is hard.

Most AI strategies pretend this doesn't exist. They focus on technology selection and implementation timelines. They produce Gantt charts and milestone plans. They skip over the uncomfortable bit where someone has to explain that projects don't fail because of budget; they fail because of how people actually work.

You need a different approach. One that forces you to define what you're trying to achieve before anyone mentions technology. One that makes you map out what has to change about how people work, not just what tools you'll buy. That's what the Strategy Canvas does.

2. Introducing the Strategy Canvas

The Strategy Canvas is a planning tool that forces you to work backwards from your goal. Most organisations do this the wrong way round. They pick technology, then work out what to do with it. This canvas makes you start with where you want to get to. Only then do you figure out what you need to do, what resources you need, and what has to change about how people actually work.

The canvas template is available on the Miroverse.

Why a canvas? Why a workshop?

Strategy doesn't happen in isolation. It happens through conversation, debate, and honest assessment of what's actually possible. A strategy document written by one person and presented to leadership rarely survives contact with reality. It gets nodded at, filed, and forgotten.

A workshop forces the conversation to happen. It gets decision-makers in a room together, looking at the same information, making trade-offs in real time. The canvas gives structure to that conversation so it doesn't devolve into unfocused debate about everything at once.

The visual nature matters. Post-it notes on a wall (or a Miro board) let everyone see the whole picture simultaneously. You can spot gaps, dependencies, and contradictions that aren't obvious in a written document. When someone says "we need better data" and you look at the canvas and see they haven't identified who will clean that data or how people will be trained to use it, the gap becomes obvious.

How this works in practice

When we run this for clients, we don't just turn up with a blank canvas and hope for the best. We recommend spending a few days beforehand understanding the business. Talk to people individually - leadership, technical staff, whoever will be involved in executing whatever comes out of the workshop. Ask about previous initiatives that failed and why. Look at what's already been tried. Identify the political dynamics and sacred cows.

This pre-work serves two purposes. First, it means the workshop is productive from minute one because you're not spending the first hour just establishing context. Second, you spot the gaps between what people say publicly and what they say privately. That tells you where the difficult conversations need to happen.

The workshop itself is where the canvas gets filled out. A good facilitator will push for specificity and make sure the uncomfortable topics don't get sidestepped. Afterwards, spend a couple of days producing a clear deliverable - the completed canvas, a prioritised action plan, and an honest assessment of what the real blockers are.

If you're running this internally, the same principles apply. Do the pre-work. Make sure you have the right people in the room. Don't rush the process. And be prepared for the conversation to get uncomfortable - that's usually a sign it's working.

What the canvas looks like

The canvas is a single page divided into four columns that move from right to left. On the right: your goal, written on a green post-it note. Moving left, three columns: what to do (yellow notes), what you need (blue notes), and ways of working (pink notes). Each column asks a specific question and forces specific thinking.

It covers four areas of any business:

  • Client Value (how you create value for clients)

  • Risk (how you manage what could go wrong)

  • Teams (your people and their capabilities)

  • Operations (how you deliver efficiently)

Every action, every resource requirement, and every cultural change maps to one of these four areas. This structure stops you missing critical pieces. You might remember to think about operations but forget about risk. You might focus on client value but ignore team capabilities. The four areas make sure you think through the whole business.

The canvas uses coloured post-it notes to distinguish between different types of thinking. Green for your goal. Yellow for actions. Blue for resources. Pink for behaviour and culture changes. This visual system keeps conversations focused and stops people jumping between "what we want to achieve" and "what tools we should buy" as if they are the same question.

Once you've filled the canvas with post-its, you add dots and arrows. Red dots mark gaps or blockers. Green dots mark quick wins. Orange dots show dependencies - where one thing must happen before another. Stars indicate critical success factors. Arrows between orange dots show the sequence. This layer turns the canvas from a list of ideas into a prioritised plan.
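
If you're running this on a digital board, it can help to think of the canvas as structured data: every note has a colour (its column), usually a business area, and optionally markers and dependencies. Here is a minimal sketch of that model in Python - purely illustrative, all the names are ours, and the physical version needs nothing more than post-its:

```python
from dataclasses import dataclass, field
from enum import Enum

class Column(Enum):
    """The four columns, right to left, keyed by post-it colour."""
    GOAL = "green"              # 1. Your goal
    WHAT_TO_DO = "yellow"       # 2. What to do
    WHAT_YOU_NEED = "blue"      # 3. What you need
    WAYS_OF_WORKING = "pink"    # 4. Ways of working

class Area(Enum):
    """The four business areas every note maps to."""
    CLIENT_VALUE = "Client Value"
    RISK = "Risk"
    TEAMS = "Teams"
    OPERATIONS = "Operations"

class Marker(Enum):
    """Dots and stars added once the canvas is full."""
    BLOCKER = "red dot"         # gap or blocker
    QUICK_WIN = "green dot"     # can be delivered fast
    DEPENDENCY = "orange dot"   # depends on another item
    CRITICAL = "star"           # critical success factor

@dataclass
class Note:
    text: str
    column: Column
    area: Area | None = None    # Python 3.10+; the goal spans all four areas
    markers: list[Marker] = field(default_factory=list)
    depends_on: list[str] = field(default_factory=list)  # arrows between orange dots

goal = Note(
    "Reduce invoice processing time from 5 days to 24 hours",
    Column.GOAL,
    markers=[Marker.CRITICAL],
)
```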

You can use this for enterprise-wide transformation or a single department initiative. The scale changes but the logic stays the same. Define what winning looks like, work backwards to what needs to happen, be honest about what has to change.

Using this guide

This guide gives you everything you need to run this yourself. The framework is simple. Getting leadership teams to be honest about what is actually broken is not simple. That is where most people struggle. We'll walk through how to set up the workshop, how to facilitate the conversations, how to handle the difficult dynamics, and how to turn the canvas into an action plan.

If you get to the end and realise you'd rather have someone else facilitate it, that's what we're here for. But even if you bring in external help, understanding the framework first means you'll get more out of the process.

3. Before You Start

Whether you're facilitating this internally or bringing in external help, you need the right people in the room. This is not a working group exercise you delegate to middle management. You need the people who can actually make decisions and commit budget. If the conversation keeps ending with "we will need to check with..." then you have the wrong people.

Typical participants include the CEO or managing director, CFO, heads of major functions or business units, and anyone who controls significant budget or resources. You do not need everyone, but you need anyone whose support you cannot do without.

Think about who is responsible for executing changes and who is accountable for outcomes. Those people must be in the room. If your strategy requires the operations director to change how their team works, they need to be there. If it requires the IT director to build or integrate systems, they need to be there. If it requires budget approval from the CFO, they need to be there.

You may also need people who should be consulted because they have critical knowledge or relationships, even if they are not directly responsible for execution. A senior client partner who understands what clients actually value. A technical lead who knows what is realistic and what is fantasy. Someone who has tried this before and knows where it went wrong.

You do not need people who just need to be kept informed. They can be briefed after the workshop. Including them dilutes the conversation and slows decision-making. If someone's main contribution would be "can you send me the notes afterwards?" they should not be in the workshop.

The worst outcome is running a great workshop, producing a clear plan, and then discovering that someone not in the room has veto power and disagrees with the goal. Know who your decision-makers are before you start.

Set aside half a day minimum, ideally a full day. This is not something you squeeze into a two-hour slot between other meetings. People need time to think, debate, and get uncomfortable. If you rush it, you get superficial answers.

Physical space works better than virtual if you can manage it. There is something about standing around a wall of post-it notes that creates better conversations than staring at a Miro board on a screen. But virtual can work if your team is distributed. Just make sure everyone can see the canvas and contribute easily.

You will need post-it notes in four colours (green, yellow, blue, pink), markers, and space on a wall or large whiteboard. If you are doing this virtually, set up a Miro board with the canvas template. Either way, make sure everyone understands the colour system before you start.

Do some pre-work. This does not mean producing a draft strategy and presenting it for approval. That defeats the point. But you should understand the context. Talk to participants individually beforehand. What do they think the goal should be? Where do they see gaps or blockers? What initiatives have failed before and why?

This pre-work serves two purposes. First, you avoid spending the first hour of the workshop just getting everyone aligned on basic context. Second, you spot the political dynamics. If three people tell you individually that the real problem is the sales director's refusal to use the CRM, but nobody says this in group settings, you know you have a facilitation challenge.

Be clear about what this session will produce. You are not walking out with a detailed implementation plan. You are walking out with clarity on the goal, the major actions required across four business areas, the resources you need, and the culture changes that have to happen. That is the foundation. The detailed project plan comes after.

4. How To Use The Canvas

The canvas works right to left, even though that feels counter-intuitive. Look at the blank template below. You start with your goal on the right-hand side, in the section marked "1. Your goal". Everything else flows backwards from there.

Start with your goal (Green post-its)

Write your goal on a green post-it in the right-hand column. This needs to be specific and measurable. Not "improve customer service" but "reduce average resolution time from 48 hours to 4 hours." Not "increase revenue" but "grow revenue from mid-market clients by 30% within 18 months."

Let's use a realistic example: a professional services firm that wants to use AI but hasn't figured out what for. After discussion, they define their goal:

Goal (Green): "Reduce proposal response time from 3 weeks to 5 days while maintaining win rate above 30%"

This is specific and measurable. They currently take three weeks to respond to RFPs. Competitors are faster. They are losing opportunities because by the time they respond, prospects have moved on. But they cannot sacrifice quality - their 30% win rate on submitted proposals is good, and they need to maintain it.

Notice this goal says nothing about AI or chatbots yet. It starts with the business outcome. Only through working backwards will they discover what they actually need.

Compare that to where they started: "We need an AI strategy." That is not a goal. Or "We should implement GPT for proposals." That is a solution looking for a problem. By forcing them to articulate the actual business goal first, the canvas ensures any AI implementation serves a real purpose.

The goal needs to be ambitious enough to matter but realistic enough to be credible. If everyone in the room knows it's impossible, they will not engage. If it's too easy, you will not uncover the real barriers.

You may have multiple goals if they are tightly related. But be careful. Three unrelated goals that require completely different actions means you are spreading yourself thin. Better to pick one, do it properly, then move to the next.
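
Part of making the goal measurable is agreeing up front how the numbers will be computed. A minimal sketch of what that looks like for the proposal example, assuming you can export proposal records with dates and outcomes (the field layout here is ours, for illustration):

```python
from datetime import date

# Illustrative records: (RFP received, proposal sent, won?)
proposals = [
    (date(2024, 1, 2), date(2024, 1, 23), False),
    (date(2024, 2, 5), date(2024, 2, 26), True),
    (date(2024, 3, 1), date(2024, 3, 20), False),
]

turnaround = [(sent - received).days for received, sent, _ in proposals]
avg_days = sum(turnaround) / len(turnaround)
win_rate = sum(won for *_, won in proposals) / len(proposals)

print(f"Average turnaround: {avg_days:.1f} days (target: 5)")  # ~20 days today
print(f"Win rate: {win_rate:.0%} (floor: 30%)")                # ~33%
```

If the firm cannot produce these two numbers today, that is itself a finding: the measurement has to exist before the goal can be tracked.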

Work out what you need to do (Yellow post-its)

Once you have the goal, move left to the "2. What to do" column. This is where you map the actions required to achieve your goal. For each of the four business areas (Client Value, Risk, Teams, Operations), write yellow post-its describing specific actions.

For our professional services firm trying to speed up proposals, the actions might look like this:

Client Value:

  • Build searchable proposal library with past winning proposals

  • Enable clients to ask questions during bid process via AI chat

Risk:

  • Ensure client confidentiality in proposal database

  • Implement quality control workflow for AI-generated content

  • Maintain audit trail of who approved what

Teams:

  • Train bid team (8 people) on prompt engineering for proposals

  • Up-skill junior staff on proposal best practices using AI assistance

Operations:

  • Deploy secure AI chat system for proposal drafting

  • Build AI agents for automated research and section generation

  • Integrate with existing document management system

Each of these is a concrete action. Not "use AI for proposals" but specific things that will happen. Some are broad (build AI agents) and will need breaking down into sub-tasks later, but they are specific enough to understand what success looks like.

Notice how "deploy secure AI chat system" and "build AI agents" have emerged as actions - but only after defining the goal and thinking through what needs to happen. They didn't start by saying "we need chatbots." They started with "we need faster proposals" and discovered that chatbots and agents are part of the solution.

Be specific about what will actually be different. Not "implement AI" but "deploy secure AI chat system on our Azure tenant for proposal drafting, with access to historical proposal library." Not "train people" but "run 4-hour workshop on prompt engineering for proposal writing, covering research, section drafting, and quality review, for all 8 bid team members."

This is where you break down the big goal into concrete actions. If you cannot describe what people will actually be doing differently, you do not have an action, you have an aspiration.
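
To give a flavour of what "deploy secure AI chat system" means in practice: the firm's developers would call a model deployment inside their own Azure tenant rather than a public endpoint. A minimal sketch assuming the openai Python SDK (v1+); the endpoint, deployment name, and prompts are placeholders, not a prescription:

```python
import os
from openai import AzureOpenAI  # pip install openai

# Placeholder endpoint and deployment name - these would point at the firm's own tenant.
client = AzureOpenAI(
    azure_endpoint="https://your-firm.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="proposal-drafter",  # the Azure deployment name, not a public model ID
    messages=[
        {"role": "system", "content": "You draft proposal sections in our house style."},
        {"role": "user", "content": "Draft the delivery approach section for this RFP: ..."},
    ],
)
print(response.choices[0].message.content)
```

The point is that "secure" is a deployment decision, not a product feature: the same chat capability on a public endpoint would not satisfy the confidentiality action in the Risk row.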

Identify what you need (Blue post-its)

Move left again to "3. What you need". For each action, write blue post-its describing the resources required. This includes tools, people, budget, data, and time.

For our proposal example:

Client Value:

  • Historical proposal database (past 3 years, 200+ proposals)

  • Win/loss analysis data to identify what makes proposals succeed

  • Client feedback on current proposal quality

Risk:

  • Secure Azure deployment within firm's tenant

  • Review workflow system with approval gates

  • Information security assessment and sign-off

Teams:

  • Training budget: £5k for external facilitation + staff time (32 hours)

  • Access to proposal best practices library

  • Ongoing coaching for first month

Operations:

  • Enterprise AI platform (Azure OpenAI Service): £500/month

  • GPT-4 API access: estimated 2M tokens/month at £40/month

  • Integration developer: 3 weeks at £1,200/day = £18k

  • Document management system API documentation

Again, be as specific as you can. Not "AI tools" but "Azure OpenAI Service with GPT-4." Not "someone to build it" but "integration developer with experience in Azure OpenAI and DMS APIs, 3-week engagement, approximately £18k." Get as far as you can in the session; you may not know everything you need yet, but capture as many requirements as possible.

If you do not know exact costs, that's fine, but you should know order of magnitude. The firm can now see this is roughly a £25-30k investment to get started, plus £500-600/month ongoing. That helps them decide whether the goal (faster proposals, potentially more wins) justifies the investment.
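
As a sanity check, those order-of-magnitude figures follow directly from the blue post-its. A quick worked version (staff time and the security assessment are what take the named £23k up into the £25-30k range):

```python
# One-off costs, from the blue post-its above
training = 5_000          # external facilitation
developer = 15 * 1_200    # 3 weeks (15 working days) at £1,200/day = £18,000
print(f"Named one-off costs: £{training + developer:,}")  # £23,000

# Ongoing monthly costs
platform = 500            # Azure OpenAI Service
tokens = 40               # ~2M tokens/month of API usage
print(f"Ongoing: £{platform + tokens}/month")             # £540/month
```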

This column also surfaces resource gaps. They need historical proposals in a usable format - do they have that? They need their document management system to have API access - does it? If not, that is a blocker that needs addressing first.

Agree what has to change (Pink post-its)

This is the uncomfortable bit. Move left again to "4. Ways of working". For each action, write pink post-its describing the behaviour and culture changes required.

For our proposal example:

Client Value:

  • Partners must contribute winning proposals to shared library (currently hoarded)

  • Accept AI-drafted sections with human review rather than writing everything from scratch

  • Respond to client questions within 24 hours during bid process

Risk:

  • Trust but verify: review all AI outputs, don't just accept them

  • Implement quality checkpoints without partners becoming bottlenecks

  • Flag sensitive client data before adding to system

Teams:

  • Junior staff empowered to draft proposal sections using AI assistance

  • Partners shift from writing to reviewing and enhancing

  • Accept that proposal quality comes from good prompts + good review, not from writing everything yourself

Operations:

  • Use templates and AI assistance as a starting point, not a bypass for thinking

  • Measure and report on proposal response times weekly

  • Commit to 5-day maximum turnaround with no exceptions

These are behaviour changes, not aspirations. "Partners must contribute winning proposals to shared library" is brutally honest. It names the problem: partners currently hoard their best work because they see it as competitive advantage internally. That needs to stop.

"Accept AI-drafted sections with human review" is a significant culture shift. It means trusting that AI plus review can be as good as writing from scratch. For professional services firms where writing skill is core identity, that is uncomfortable.

"Junior staff empowered to draft proposal sections" flips the current model where juniors do research and partners write. Now juniors can draft using AI, and partners review and enhance. That requires partners to trust junior judgement and junior staff to step up.

This is not about tools or processes. This is about how people actually work. If your goal requires partners to share their best work and they currently protect it, that is a culture change. If it requires people to trust AI output with verification rather than doing everything manually, that is a culture change.

Most people want to skip this column because it is awkward. Do not skip it. This is where most strategies fail. You can buy tools, you can hire people, but changing how people work takes time, effort, and political capital.

Be honest here. If the real blocker is that the senior partner refuses to use templates because he believes every client is unique, write that down. If the issue is that junior staff are not trusted to draft client-facing content, write that down. The whole point of this exercise is to surface the real barriers, not produce a sanitised version that sounds good in board papers.

Add priority and dependency markers

Once you have filled the canvas with post-its, step back and add dots and arrows to show priority, status, and dependencies.

The key shows you what each marking means:

  • Red dot: Gap or blocker that needs addressing

  • Green dot: Quick win that can be delivered fast

  • Orange dot: Dependency on another item

  • Star: High priority or critical success factor

  • Arrows between orange dots: Show which items must happen first

For the proposal example, you might mark:

  • Red dot on "Partners must contribute winning proposals to shared library" - this is a known blocker, partners are resistant

  • Green dot on "Train bid team on prompt engineering" - can be done quickly, builds capability for everything else

  • Orange dot on "Deploy secure AI chat system" with arrow from "Secure Azure deployment within firm's tenant" - can't deploy until security is approved

  • Star on "Reduce proposal response time from 3 weeks to 5 days" goal and on "Deploy secure AI chat system" - these are critical to success

Red dots force you to be honest about what is stopping you. The partner resistance to sharing proposals is real. It needs addressing through leadership intervention, incentive changes, or both. Ignoring it means the proposal library will be empty and the AI will have nothing to learn from.

Green dots identify momentum-builders. Training the bid team is quick, relatively cheap, and builds confidence that this can work. It also starts changing the culture before the technology is fully deployed.

Orange dots and arrows show the critical path. You cannot deploy the AI system until information security approves it. So security approval is on the critical path. If that takes three months, everything else waits. Better to know that now.

Stars mark what absolutely must happen. You should not have many stars. If everything is critical, nothing is critical. Three to five starred items across the whole canvas is plenty.
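
Those orange dots and arrows amount to a dependency graph, which means the delivery sequence can be derived mechanically once the canvas is marked up. A minimal sketch using Python's standard library - the dependencies shown are illustrative, not the full canvas:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each item maps to the set of items that must happen before it (the arrows).
dependencies = {
    "Deploy secure AI chat system": {"Security assessment and sign-off"},
    "Build AI agents for research": {"Deploy secure AI chat system",
                                     "Proposal library populated"},
    "Proposal library populated": {"Partners contribute winning proposals"},
}

# One valid order in which the work can proceed:
print(list(TopologicalSorter(dependencies).static_order()))
```

Anything at the front of that order is on the critical path: if security sign-off slips by three months, everything downstream slips with it.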

4. How To Use The Canvas

The canvas works right to left, even though that feels counter-intuitive. Look at the blank template below. You start with your goal on the right-hand side, in the section marked "1. Your goal". Everything else flows backwards from there.

Start with your goal (Green post-its)

Write your goal on a green post-it in the right-hand column. This needs to be specific and measurable. Not "improve customer service" but "reduce average resolution time from 48 hours to 4 hours." Not "increase revenue" but "grow revenue from mid-market clients by 30% within 18 months."

Let's use a realistic example: a professional services firm that wants to use AI but hasn't figured out what for. After discussion, they define their goal:

Goal (Green): "Reduce proposal response time from 3 weeks to 5 days while maintaining win rate above 30%"

This is specific and measurable. They currently take three weeks to respond to RFPs. Competitors are faster. They are losing opportunities because by the time they respond, prospects have moved on. But they cannot sacrifice quality - their 30% win rate on submitted proposals is good, and they need to maintain it.

Notice this goal says nothing about AI or chatbots yet. It starts with the business outcome. Only through working backwards will they discover what they actually need.

Compare that to where they started: "We need an AI strategy." That is not a goal. Or "We should implement GPT for proposals." That is a solution looking for a problem. By forcing them to articulate the actual business goal first, the canvas ensures any AI implementation serves a real purpose.

The goal needs to be ambitious enough to matter but realistic enough to be credible. If everyone in the room knows it's impossible, they will not engage. If it's too easy, you will not uncover the real barriers.

You may have multiple goals if they are tightly related. But be careful. Three unrelated goals that require completely different actions means you are spreading yourself thin. Better to pick one, do it properly, then move to the next.

Work out what you need to do (Yellow post-its)

Once you have the goal, move left to the "2. What to do" column. This is where you map the actions required to achieve your goal. For each of the four business areas (Client Value, Risk, Teams, Operations), write yellow post-its describing specific actions.

For our professional services firm trying to speed up proposals, the actions might look like this:

Client Value:

  • Build searchable proposal library with past winning proposals

  • Enable clients to ask questions during bid process via AI chat

Risk:

  • Ensure client confidentiality in proposal database

  • Implement quality control workflow for AI-generated content

  • Maintain audit trail of who approved what

Teams:

  • Train bid team (8 people) on prompt engineering for proposals

  • Up-skill junior staff on proposal best practices using AI assistance

Operations:

  • Deploy secure AI chat system for proposal drafting

  • Build AI agents for automated research and section generation

  • Integrate with existing document management system

Each of these is a concrete action. Not "use AI for proposals" but specific things that will happen. Some are broad (build AI agents) and will need breaking down into sub-tasks later, but they are specific enough to understand what success looks like.

Notice how "deploy secure AI chat system" and "build AI agents" have emerged as actions - but only after defining the goal and thinking through what needs to happen. They didn't start by saying "we need chatbots." They started with "we need faster proposals" and discovered that chatbots and agents are part of the solution.

Be specific about what will actually be different. Not "implement AI" but "deploy secure AI chat system on our Azure tenant for proposal drafting, with access to historical proposal library." Not "train people" but "run 4-hour workshop on prompt engineering for proposal writing, covering research, section drafting, and quality review, for all 8 bid team members."

This is where you break down the big goal into concrete actions. If you cannot describe what people will actually be doing differently, you do not have an action, you have an aspiration.

Identify what you need (Blue post-its)

Move left again to "3. What you need". For each action, write blue post-its describing the resources required. This includes tools, people, budget, data, and time.

For our proposal example:

Client Value:

  • Historical proposal database (past 3 years, 200+ proposals)

  • Win/loss analysis data to identify what makes proposals succeed

  • Client feedback on current proposal quality

Risk:

  • Secure Azure deployment within firm's tenant

  • Review workflow system with approval gates

  • Information security assessment and sign-off

Teams:

  • Training budget: £5k for external facilitation + staff time (32 hours)

  • Access to proposal best practices library

  • Ongoing coaching for first month

Operations:

  • Enterprise AI platform (Azure OpenAI Service): £500/month

  • GPT-4 API access: estimated 2M tokens/month at £40/month

  • Integration developer: 3 weeks at £1,200/day = £18k

  • Document management system API documentation

Again, wherever possible be specific. If you can, not "AI tools" but "Azure OpenAI Service with GPT-5" Not "someone to build it" but "integration developer with experience in Azure OpenAI and DMS APIs, 3-week engagement, approximately £18k." Get as far as you can in the session, it could be that you do not know what you need but get as many requirements as possible.

If you do not know exact costs, that's fine, but you should know order of magnitude. The firm can now see this is roughly a £25-30k investment to get started, plus £500-600/month ongoing. That helps them decide whether the goal (faster proposals, potentially more wins) justifies the investment.

This column also surfaces resource gaps. They need historical proposals in a usable format - do they have that? They need their document management system to have API access - does it? If not, that is a blocker that needs addressing first.

Agree what has to change (Pink post-its)

This is the uncomfortable bit. Move left again to "4. Ways of working". For each action, write pink post-its describing the behaviour and culture changes required.

For our proposal example:

Client Value:

  • Partners must contribute winning proposals to shared library (currently hoarded)

  • Accept AI-drafted sections with human review rather than writing everything from scratch

  • Respond to client questions within 24 hours during bid process

Risk:

  • Trust but verify: review all AI outputs, don't just accept them

  • Implement quality checkpoints without partners becoming bottlenecks

  • Flag sensitive client data before adding to system

Teams:

  • Junior staff empowered to draft proposal sections using AI assistance

  • Partners shift from writing to reviewing and enhancing

  • Accept that proposal quality comes from good prompts + good review, not from writing everything yourself

Operations:

  • Use templates and AI assistance as starting point, not bypass for thinking

  • Measure and report on proposal response times weekly

  • Commit to 5-day maximum turnaround with no exceptions

These are behaviour changes, not aspirations. "Partners must contribute winning proposals to shared library" is brutally honest. It names the problem: partners currently hoard their best work because they see it as competitive advantage internally. That needs to stop.

"Accept AI-drafted sections with human review" is a significant culture shift. It means trusting that AI plus review can be as good as writing from scratch. For professional services firms where writing skill is core identity, that is uncomfortable.

"Junior staff empowered to draft proposal sections" flips the current model where juniors do research and partners write. Now juniors can draft using AI, and partners review and enhance. That requires partners to trust junior judgement and junior staff to step up.

This is not about tools or processes. This is about how people actually work. If your goal requires partners to share their best work and they currently protect it, that is a culture change. If it requires people to trust AI output with verification rather than doing everything manually, that is a culture change.

Most people want to skip this column because it is awkward. Do not skip it. This is where most strategies fail. You can buy tools, you can hire people, but changing how people work takes time, effort, and political capital.

Be honest here. If the real blocker is that the senior partner refuses to use templates because he believes every client is unique, write that down. If the issue is that junior staff are not trusted to draft client-facing content, write that down. The whole point of this exercise is to surface the real barriers, not produce a sanitised version that sounds good in board papers.

Add priority and dependency markers

Once you have filled the canvas with post-its, step back and add dots and arrows to show priority, status, and dependencies.

The key shows you what each marking means:

  • Red dot: Gap or blocker that needs addressing

  • Green dot: Quick win that can be delivered fast

  • Orange dot: Dependency on another item

  • Star: High priority or critical success factor

  • Arrows between orange dots: Show which items must happen first

For the proposal example, you might mark:

  • Red dot on "Partners must contribute winning proposals to shared library" - this is a known blocker, partners are resistant

  • Green dot on "Train bid team on prompt engineering" - can be done quickly, builds capability for everything else

  • Orange dot on "Deploy secure AI chat system" with arrow from "Secure Azure deployment within firm's tenant" - can't deploy until security is approved

  • Star on "Reduce proposal response time from 3 weeks to 5 days" goal and on "Deploy secure AI chat system" - these are critical to success

Red dots force you to be honest about what is stopping you. The partner resistance to sharing proposals is real. It needs addressing through leadership intervention, incentive changes, or both. Ignoring it means the proposal library will be empty and the AI will have nothing to learn from.

Green dots identify momentum-builders. Training the bid team is quick, relatively cheap, and builds confidence that this can work. It also starts changing the culture before the technology is fully deployed.

Orange dots and arrows show the critical path. You cannot deploy the AI system until information security approves it. So security approval is on the critical path. If that takes three months, everything else waits. Better to know that now.

Stars mark what absolutely must happen. You should not have many stars. If everything is critical, nothing is critical. Three to five starred items across the whole canvas is plenty.

4. How To Use The Canvas

The canvas works right to left, even though that feels counter-intuitive. Look at the blank template below. You start with your goal on the right-hand side, in the section marked "1. Your goal". Everything else flows backwards from there.

Start with your goal (Green post-its)

Write your goal on a green post-it in the right-hand column. This needs to be specific and measurable. Not "improve customer service" but "reduce average resolution time from 48 hours to 4 hours." Not "increase revenue" but "grow revenue from mid-market clients by 30% within 18 months."

Let's use a realistic example: a professional services firm that wants to use AI but hasn't figured out what for. After discussion, they define their goal:

Goal (Green): "Reduce proposal response time from 3 weeks to 5 days while maintaining win rate above 30%"

This is specific and measurable. They currently take three weeks to respond to RFPs. Competitors are faster. They are losing opportunities because by the time they respond, prospects have moved on. But they cannot sacrifice quality - their 30% win rate on submitted proposals is good, and they need to maintain it.

Notice this goal says nothing about AI or chatbots yet. It starts with the business outcome. Only through working backwards will they discover what they actually need.

Compare that to where they started: "We need an AI strategy." That is not a goal. Or "We should implement GPT for proposals." That is a solution looking for a problem. By forcing them to articulate the actual business goal first, the canvas ensures any AI implementation serves a real purpose.

The goal needs to be ambitious enough to matter but realistic enough to be credible. If everyone in the room knows it's impossible, they will not engage. If it's too easy, you will not uncover the real barriers.

You may have multiple goals if they are tightly related. But be careful. Three unrelated goals that require completely different actions mean you are spreading yourself thin. Better to pick one, do it properly, then move to the next.

Work out what you need to do (Yellow post-its)

Once you have the goal, move left to the "2. What to do" column. This is where you map the actions required to achieve your goal. For each of the four business areas (Client Value, Risk, Teams, Operations), write yellow post-its describing specific actions.

For our professional services firm trying to speed up proposals, the actions might look like this:

Client Value:

  • Build searchable proposal library with past winning proposals

  • Enable clients to ask questions during bid process via AI chat

Risk:

  • Ensure client confidentiality in proposal database

  • Implement quality control workflow for AI-generated content

  • Maintain audit trail of who approved what

Teams:

  • Train bid team (8 people) on prompt engineering for proposals

  • Up-skill junior staff on proposal best practices using AI assistance

Operations:

  • Deploy secure AI chat system for proposal drafting

  • Build AI agents for automated research and section generation

  • Integrate with existing document management system

Each of these is a concrete action. Not "use AI for proposals" but specific things that will happen. Some are broad (build AI agents) and will need breaking down into sub-tasks later, but they are specific enough to understand what success looks like.

Notice how "deploy secure AI chat system" and "build AI agents" have emerged as actions - but only after defining the goal and thinking through what needs to happen. They didn't start by saying "we need chatbots." They started with "we need faster proposals" and discovered that chatbots and agents are part of the solution.

Be specific about what will actually be different. Not "implement AI" but "deploy secure AI chat system on our Azure tenant for proposal drafting, with access to historical proposal library." Not "train people" but "run 4-hour workshop on prompt engineering for proposal writing, covering research, section drafting, and quality review, for all 8 bid team members."

This is where you break down the big goal into concrete actions. If you cannot describe what people will actually be doing differently, you do not have an action, you have an aspiration.

Identify what you need (Blue post-its)

Move left again to "3. What you need". For each action, write blue post-its describing the resources required. This includes tools, people, budget, data, and time.

For our proposal example:

Client Value:

  • Historical proposal database (past 3 years, 200+ proposals)

  • Win/loss analysis data to identify what makes proposals succeed

  • Client feedback on current proposal quality

Risk:

  • Secure Azure deployment within firm's tenant

  • Review workflow system with approval gates

  • Information security assessment and sign-off

Teams:

  • Training budget: £5k for external facilitation + staff time (32 hours)

  • Access to proposal best practices library

  • Ongoing coaching for first month

Operations:

  • Enterprise AI platform (Azure OpenAI Service): £500/month

  • GPT-4 API access: estimated 2M tokens/month, approximately £40/month

  • Integration developer: 3 weeks at £1,200/day = £18k

  • Document management system API documentation

Again, be as specific as you can. Not "AI tools" but "Azure OpenAI Service with GPT-4". Not "someone to build it" but "integration developer with experience in Azure OpenAI and DMS APIs, 3-week engagement, approximately £18k". Get as far as you can in the session; you may not yet know everything you need, but capture as many requirements as possible.

If you do not know exact costs, that's fine, but you should know order of magnitude. The firm can now see this is roughly a £25-30k investment to get started, plus £500-600/month ongoing. That helps them decide whether the goal (faster proposals, potentially more wins) justifies the investment.
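
If it helps to sanity-check the order of magnitude, the arithmetic is simple enough to script. Here is a minimal sketch in Python, treating the figures above as illustrative assumptions from this example rather than real quotes (staff time is deliberately left out):

```python
# Back-of-envelope cost model for the proposal example.
# Every figure is an assumption from the canvas, not a quote.

one_off = {
    "external_training_facilitation": 5_000,   # £5k workshop budget
    "integration_developer": 18_000,           # 3 weeks at £1,200/day
}

monthly = {
    "azure_openai_platform": 500,   # enterprise AI platform
    "gpt4_api_usage": 40,           # ~2M tokens/month estimate
}

setup = sum(one_off.values())
ongoing = sum(monthly.values())
first_year = setup + 12 * ongoing

print(f"Setup: £{setup:,}")                  # Setup: £23,000
print(f"Ongoing: £{ongoing:,}/month")        # Ongoing: £540/month
print(f"First-year total: £{first_year:,}")  # First-year total: £29,480
```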

This column also surfaces resource gaps. They need historical proposals in a usable format - do they have that? They need their document management system to have API access - does it? If not, that is a blocker that needs addressing first.

Agree what has to change (Pink post-its)

This is the uncomfortable bit. Move left again to "4. Ways of working". For each action, write pink post-its describing the behaviour and culture changes required.

For our proposal example:

Client Value:

  • Partners must contribute winning proposals to shared library (currently hoarded)

  • Accept AI-drafted sections with human review rather than writing everything from scratch

  • Respond to client questions within 24 hours during bid process

Risk:

  • Trust but verify: review all AI outputs, don't just accept them

  • Implement quality checkpoints without partners becoming bottlenecks

  • Flag sensitive client data before adding to system

Teams:

  • Junior staff empowered to draft proposal sections using AI assistance

  • Partners shift from writing to reviewing and enhancing

  • Accept that proposal quality comes from good prompts + good review, not from writing everything yourself

Operations:

  • Use templates and AI assistance as starting point, not bypass for thinking

  • Measure and report on proposal response times weekly

  • Commit to 5-day maximum turnaround with no exceptions

These are behaviour changes, not aspirations. "Partners must contribute winning proposals to shared library" is brutally honest. It names the problem: partners currently hoard their best work because they see it as competitive advantage internally. That needs to stop.

"Accept AI-drafted sections with human review" is a significant culture shift. It means trusting that AI plus review can be as good as writing from scratch. For professional services firms where writing skill is core identity, that is uncomfortable.

"Junior staff empowered to draft proposal sections" flips the current model where juniors do research and partners write. Now juniors can draft using AI, and partners review and enhance. That requires partners to trust junior judgement and junior staff to step up.

This is not about tools or processes. This is about how people actually work. If your goal requires partners to share their best work and they currently protect it, that is a culture change. If it requires people to trust AI output with verification rather than doing everything manually, that is a culture change.

Most people want to skip this column because it is awkward. Do not skip it. This is where most strategies fail. You can buy tools, you can hire people, but changing how people work takes time, effort, and political capital.

Be honest here. If the real blocker is that the senior partner refuses to use templates because he believes every client is unique, write that down. If the issue is that junior staff are not trusted to draft client-facing content, write that down. The whole point of this exercise is to surface the real barriers, not produce a sanitised version that sounds good in board papers.

Add priority and dependency markers

Once you have filled the canvas with post-its, step back and add dots and arrows to show priority, status, and dependencies.

The key shows you what each marking means:

  • Red dot: Gap or blocker that needs addressing

  • Green dot: Quick win that can be delivered fast

  • Orange dot: Dependency on another item

  • Star: High priority or critical success factor

  • Arrows between orange dots: Show which items must happen first

For the proposal example, you might mark:

  • Red dot on "Partners must contribute winning proposals to shared library" - this is a known blocker, partners are resistant

  • Green dot on "Train bid team on prompt engineering" - can be done quickly, builds capability for everything else

  • Orange dot on "Deploy secure AI chat system" with arrow from "Secure Azure deployment within firm's tenant" - can't deploy until security is approved

  • Star on "Reduce proposal response time from 3 weeks to 5 days" goal and on "Deploy secure AI chat system" - these are critical to success

Red dots force you to be honest about what is stopping you. The partner resistance to sharing proposals is real. It needs addressing through leadership intervention, incentive changes, or both. Ignoring it means the proposal library will be empty and the AI will have nothing to learn from.

Green dots identify momentum-builders. Training the bid team is quick, relatively cheap, and builds confidence that this can work. It also starts changing the culture before the technology is fully deployed.

Orange dots and arrows show the critical path. You cannot deploy the AI system until information security approves it. So security approval is on the critical path. If that takes three months, everything else waits. Better to know that now.
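
When the arrows multiply, it can help to capture them as a small dependency graph and let the ordering fall out mechanically. A minimal sketch using Python's standard-library graphlib, with labels shortened from this example's post-its:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each item maps to the set of items that must happen before it.
# Labels are shortened from this example's post-its; adjust to your canvas.
dependencies = {
    "Security assessment and sign-off": set(),
    "Secure Azure deployment in firm's tenant": {"Security assessment and sign-off"},
    "Deploy secure AI chat system": {"Secure Azure deployment in firm's tenant"},
    "Partners contribute winning proposals": set(),
    "Build searchable proposal library": {"Partners contribute winning proposals"},
    "Build AI agents for research and drafting": {
        "Deploy secure AI chat system",
        "Build searchable proposal library",
    },
}

# static_order() yields the items in an order that respects every arrow.
for step in TopologicalSorter(dependencies).static_order():
    print(step)
```

Anything near the top of the output with many items downstream of it is your critical path: if security sign-off takes three months, everything below it waits.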

Stars mark what absolutely must happen. You should not have many stars. If everything is critical, nothing is critical. Three to five starred items across the whole canvas is plenty.

5. The Four Strategic Dimensions

The canvas uses four perspectives to ensure you think through your strategy completely. These are not departments or org chart boxes. They are lenses through which to examine what needs to happen. Every action, resource requirement, and culture change should be considered through all four lenses.

Most strategies fail because they only think through one or two of these. You focus on operations (efficiency) but ignore client value (whether clients actually want faster service). You focus on teams (training) but ignore risk (whether your approach creates new liabilities). The canvas forces you to think through all four.

Client Value: How you create value for clients

This is about what your clients experience and why they choose you. Not what you think they value, what they actually value. Not what your marketing says you deliver, what you actually deliver.

In our proposal example, the firm initially thought this was purely about internal efficiency. Working through Client Value, they realised faster proposals are themselves client value - prospects can make decisions faster. More importantly, the ability to answer detailed questions during the bid process (via AI chat) is differentiating client value. Their competitors send static proposals and go silent for three weeks. This firm could provide dynamic responses within hours.

Common patterns here include automating routine interactions to free up expert time for complex client problems, using AI to generate insights that clients cannot get elsewhere, or compressing timelines from enquiry to delivery.

But here is where it gets uncomfortable. The culture changes in this area often expose that established processes serve internal convenience rather than client value.

Example: A law firm discovers that their 48-hour document turnaround time (which they are proud of) exists because partners review everything and partners are busy. Clients would pay more for 4-hour turnaround on urgent matters. But the culture of "everything must be partner-reviewed" prevents that. The client would be happy with senior associate review for urgent work, partner review for strategic decisions. But the firm's identity is wrapped up in "partner attention to every detail."

This example is a Client Value culture problem. The process serves partner control, not client needs. AI could enable senior associates to produce partner-quality work with AI assistance plus focused review. But only if partners accept that their value is strategic judgement, not personally touching every document.

Another pattern: firms assume clients want bespoke everything. Often clients want fast and good enough, not slow and perfect. But the firm's identity is "bespoke quality" so they over-deliver on a dimension clients do not value and under-deliver on speed, which clients do value. AI could help template the 80% that is commodity and reserve human expert time for the 20% that is genuinely bespoke. But the culture has to accept that "bespoke everything" is not always client value.

Risk: How you manage what could go wrong

This covers compliance, quality control, data security, reputation risk, and liability. Actions here might include implementing approval workflows, building audit trails, or setting up monitoring systems. In professional services, this is often your highest-priority area because a single mistake can be catastrophic.

AI creates specific new risks that most organisations underestimate until it's too late:

  • Hallucination risk: LLMs confidently state things that are untrue. In our proposal example, if the AI cites a regulation that does not exist, and nobody checks, you lose the bid at best and damage your reputation at worst. This requires systematic verification processes, not just "read it carefully."

  • Data leakage risk: If you use public APIs, your client data goes to third parties. For professional services, this is not acceptable. This is why our proposal example requires Azure OpenAI deployed in the firm's own tenant: data never leaves their control. Many firms discover this requirement late, after they have built on public APIs (see the sketch after this list).

  • Quality control risk: AI output varies in quality. Sometimes it is excellent, sometimes mediocre, sometimes wrong. If your review process cannot catch the wrong outputs, you have a quality problem. This requires designing review workflows that actually work at speed, not just adding "please review carefully" to the process.

  • Bias and fairness risk: AI trained on historical data reproduces historical patterns. If your historical proposals under-served certain client types, AI will continue that. You need to actively monitor for this.
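
To make the tenant point concrete: the code barely changes, what changes is where the endpoint lives and who controls the data. A minimal sketch using the openai Python package against an Azure OpenAI resource in your own tenant; the endpoint, environment variable, and deployment name are placeholders, not recommendations:

```python
import os
from openai import AzureOpenAI  # pip install openai

# Points at a deployment inside the firm's own Azure tenant,
# so prompts and proposal data stay under the firm's control.
client = AzureOpenAI(
    azure_endpoint="https://your-firm.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="proposal-gpt4",  # your deployment name, not the public model name
    messages=[{"role": "user", "content": "Draft an executive summary for ..."}],
)
print(response.choices[0].message.content)
```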

The culture changes here are hardest in professional services because the culture is "trust the expert." Moving to "trust but verify with evidence" feels like you are questioning people's competence.

Example: A consulting firm implements AI research assistants. Partners are supposed to verify AI research before using it in client work. But partners are busy and the AI output looks good. Gradually, partners stop checking. Three months in, a client presentation cites research that does not exist. The client googles it during the presentation. Catastrophic.

The culture change required is not "verify AI output." Everyone agrees with that in principle. The culture change is "make verification so fast and easy that busy people actually do it" combined with "create accountability so if someone does not verify, there are consequences."

Another pattern: firms implement approval gates to manage AI quality risk. Junior staff draft with AI, senior staff review, partners approve. Sounds good. In practice, if senior staff take a week to review and partners take another week, you are back to slow turnaround. The culture change is "commit to 24-hour review SLAs at each stage" which means partners have to trust senior staff more and let some things go without partner review. That feels risky. But if the alternative is being too slow to compete, the risk calculation changes.

Teams: Your people and their capabilities

This is about whether you have the right people with the right skills to execute your strategy. Not whether you have people. Whether you have people who can actually do what needs doing.

The common mistake is assuming people will figure out new tools themselves. They will not. Or they will figure out bad habits that are hard to undo later.

In our proposal example, prompt engineering for proposals is a genuine skill. It is not "typing questions into ChatGPT." It is understanding how to structure a prompt so the AI accesses the right context, generates appropriate tone, and produces output that is useful, not generic. That takes training and practice.
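
To give a feel for the difference, compare "typing questions into ChatGPT" with a structured prompt. A sketch only; the section name, constraints, and library placeholder are illustrative, not a template from the firm:

```python
# A structured proposal prompt: role, context, constraints, task, format.
# Everything in it is illustrative; adapt to your own proposal library.
prompt = """You are drafting the 'Delivery approach' section of an RFP response.

Context: use only the three winning proposals retrieved below. Do not invent
client names, figures, or regulations that are not in the retrieved material.
Tone: plain and confident, no superlatives. The client is a mid-market retailer.
Task: draft 400-500 words covering team structure, milestones, and governance.
Format: short paragraphs, with a single bullet list for milestones.

Retrieved proposals:
{retrieved_proposals}
"""
```

The skill is in the context and the constraints, not in the question.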

But the deeper issue is role change. Junior staff are being asked to draft sections they previously only researched. That is a step up in responsibility. Do they have the judgement to know when AI output is good enough and when it is wrong? Do they have the confidence to send a draft to a partner? Those are capability and culture questions.

Meanwhile, partners are being asked to shift from writing to reviewing and enhancing. That is a different skill. Some partners will resist because they define themselves as "writers" not "editors." Some will embrace it because they get more time for strategic thinking. You need to identify who is in which camp and manage accordingly.

Example: An accounting firm trains everyone on AI for financial analysis. Three months later, only 20% are using it regularly. Why? The training covered features ("here is how you upload data") but not judgement ("here is when AI analysis is trustworthy and when you need to verify"). People do not trust the output so they fall back to doing it manually. The missing piece was not tool training, it was building judgement.

Another pattern: firms hire "AI experts" without thinking about where they sit in the organisation. You hire a data scientist. Where do they report? If they report to IT, they build technically impressive things that nobody uses because they are not connected to client delivery. If they report to a practice area, they build useful things for that practice but knowledge does not spread. You need to think about how expertise flows through the organisation, not just whether you have expertise somewhere.

The culture change required is psychological safety. In professional services, there is immense pressure to appear competent at all times. Admitting "I do not understand how this works" or "I tried and got stuck" feels like admitting weakness. If your culture punishes that, people will not ask for help. They will either avoid using AI (and fall behind) or use it badly (and create risk).

Creating psychological safety is not "be nice to people." It is:

  • Leadership visibly admitting what they do not know yet

  • Creating forums where asking "stupid questions" is normal

  • Pairing experienced staff with AI-confident staff so both learn

  • Celebrating good failures ("I tried this approach, it did not work, here is what I learned")

If your culture cannot do that, your team cannot learn fast enough to keep up with AI evolution.

Operations: How you deliver efficiently

This covers your internal processes, systems, and ways of working. How work flows through your organisation. Where bottlenecks exist. How information moves between people and systems.

This is where AI often has the most immediate impact because operational inefficiency is visible and measurable. Document generation, data processing, research synthesis, routine analysis - these can usually be automated or augmented. But the value is not in the automation itself. It is in what you do with the time you save.

In our proposal example, the operational improvements are clear: proposals that took three weeks now take five days. But that time saving only happens if you also change the workflow. If you automate research but then partners still take two weeks to write, you have saved research time but not overall time.

The real operational change is workflow redesign: Junior staff draft using AI → Senior staff review within 24 hours → Partner enhances and approves within 24 hours → Final production. That is a 2-3 day workflow. But it only works if everyone commits to the timeframes. That is culture, not technology.
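
One way to hold people to those timeframes is to write the workflow down as data rather than as a diagram on a slide. A minimal sketch, with stage owners and SLA figures taken from this example:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    owner: str
    sla_hours: int  # maximum elapsed time committed for this stage

# The redesigned proposal workflow from this example.
WORKFLOW = [
    Stage("AI-assisted draft", "junior staff", 24),
    Stage("Review", "senior staff", 24),
    Stage("Enhance and approve", "partner", 24),
]

worst_case_days = sum(s.sla_hours for s in WORKFLOW) / 24
print(f"Worst-case turnaround: {worst_case_days:.0f} working days")  # 3 days
```

Three 24-hour stages is a three-day worst case, leaving slack within the five-day commitment for final production and the occasional slip.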

Common operational pattern: firms automate one step in a multi-step process and wonder why nothing improves. You automate data collection but analysis is still manual. You automate first draft but review still takes weeks. You need to look at the whole process, identify the constraint (the slowest step), and address that.

Example: A consulting firm automates meeting note generation using AI. Meetings are transcribed, AI generates structured notes, everyone gets them within an hour. Six months later, nobody uses the notes. Why? Because the notes are not in the format people actually use. The AI generates paragraphs. People want action items in their task management system. The automation solved a problem nobody had (typing up notes) and did not solve the problem people did have (capturing and tracking actions).

The culture changes here are about control and trust. Operations improvements usually mean letting go of manual control and trusting systems.

Pattern: Finance teams who manually check every invoice. They say it is quality control. Actually, it is because they do not trust the system to catch errors. If you automate invoice checking with AI, do they stop manual checking? Not if the culture is "I am responsible so I must personally verify." The culture change is "I am responsible for ensuring good systems, not for personally checking everything." That is hard.

Another pattern: Partners who insist on reviewing every document. They say it is quality control. Actually, it is often control and identity. They define themselves as "the person who ensures quality by reviewing everything." If you enable senior staff to produce high-quality work with AI assistance, does partner review become selective? Only if partners accept their value is strategic oversight, not universal review. That threatens identity.

The operational culture change required is moving from "I trust things I personally controlled" to "I trust well-designed systems with appropriate oversight." That sounds simple. It is not. It requires evidence that systems work, which means running them in parallel with manual processes initially. It requires making system failures visible and fixing them quickly. And it requires leadership to stop rewarding heroic manual effort ("Sarah stayed late all week manually processing invoices") and start rewarding system improvement ("Sarah automated invoice processing and now has time for strategic analysis").

In our proposal example, the culture change "commit to 5-day maximum turnaround with no exceptions" is operational discipline. It means saying no to "this client is special, we need more time" because that breaks the process. It means triaging: this proposal fits the 5-day process, that one is genuinely bespoke and needs different handling. That triage decision is operational judgement enabled by clear process design.

6. Understanding The Colour System

The colour system is not decoration. It prevents the most common mistake in strategy discussions: confusing what you want to achieve with how you will achieve it, or confusing actions with the resources needed, or worst of all, forgetting about culture change entirely.

When everyone is looking at the same wall of post-its, the colours make patterns visible. If you have lots of yellow (actions) but very little blue (resources), you are planning to do things without the means to do them. If you have lots of yellow and blue but almost no pink, you are pretending culture does not need to change. If someone starts talking about "what we need to do" while pointing at a blue post-it (resources), the colour mismatch makes the confusion visible.
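
If someone transcribes the wall after the session, the imbalance check is mechanical. A minimal sketch, with post-it text drawn from this playbook's example:

```python
from collections import Counter

# Post-its recorded as (colour, text) pairs after the session.
canvas = [
    ("green",  "Reduce proposal response time from 3 weeks to 5 days"),
    ("yellow", "Deploy secure AI chat system"),
    ("yellow", "Train bid team on prompt engineering"),
    ("blue",   "Azure OpenAI Service, ~£500/month"),
    ("pink",   "Partners contribute winning proposals to shared library"),
]

counts = Counter(colour for colour, _ in canvas)
for colour in ("green", "yellow", "blue", "pink"):
    print(f"{colour:<6} {counts[colour]}")  # Counter returns 0 for missing colours

if counts["pink"] == 0:
    print("Warning: no culture changes captured - the canvas is probably flattering you.")
```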

Green: Your goal

Green post-its describe your strategic outcomes. What does success look like? How will you know you have achieved it?

Be as specific as you can. "Reduce client onboarding time from 2 weeks to 3 days" is specific. "Improve efficiency" is not. "Increase proposal win rate from 30% to 40%" is specific. "Be more competitive" is not.

If you cannot measure it or describe what will be observably different when you have achieved it, keep refining. You may not know the exact target number at first, but you should know the direction and roughly what success looks like.

Examples:

  • "Reduce average invoice processing cost from £12 to £3 per invoice"

  • "Achieve 90% client satisfaction score (currently 72%)"

  • "Launch new service line generating £500k revenue in first year"

  • "Cut proposal response time by at least 50%" (if you don't yet know current baseline)

You may have multiple green post-its for related goals. But resist wishlist thinking. Three clear goals are better than ten vague ones.

Yellow: Actions and activities

Yellow post-its describe what needs to happen. These are verbs, not nouns. Things people will actually do.

Be as concrete as you can. Early in the process, "Build automated proposal system" is fine. As you work through the canvas, you might refine it to "Deploy Azure OpenAI chatbot for proposal Q&A" or even "Integrate GPT-4 API into document management system for proposal assistance." Start with what you know, add detail as you learn more.

Examples:

  • "Migrate client records from Excel to proper database"

  • "Create template library of most common proposal types"

  • "Build AI agent to draft risk assessment sections"

  • "Train staff on prompt engineering" (early version)

  • "Run 4-hour workshop on prompt engineering for 15 client-facing staff" (refined version)

Most yellow post-its go in the "What to do" column. Occasionally you will have yellow post-its in "Ways of working" if an action is required to change culture.

Blue: Resources, tools, people, budget

Blue post-its describe what you need to execute the actions. Technology, people, budget, data, time - anything required to make actions happen.

Add as much detail as you know. If you know you need "Azure OpenAI, approximately £500/month" write that. If you only know you need "cloud AI service, cost unknown" that is fine too. The point is to identify what resources are needed. You can research specifics later.

Same with people. If you know you need "integration developer, 4 weeks, approximately £25k" write that. If you only know "someone who can integrate APIs, time and cost unknown" that is still useful because it identifies the capability gap.

Order of magnitude matters. Even if you do not know exact costs, try to know whether this is a £10k initiative, a £50k initiative, or a £200k initiative. That affects whether you can just do it or need board approval.

Examples:

  • "Azure OpenAI Service, GPT-4, estimated £500/month" (if you know)

  • "Enterprise AI platform, cost to be confirmed" (if you don't)

  • "Integration developer with Azure experience, approximately 4 weeks" (if you know)

  • "Technical resource to build integration, time unknown" (if you don't)

  • "Historical proposal data in usable format" (free but requires work)

  • "Training budget, approximately £5k" (if you know)

  • "External training, cost to be scoped" (if you don't)

The canvas works at whatever level of detail you have. What matters is identifying the resource requirements. You can get quotes and refine numbers later.

Pink: Culture and behaviour changes

Pink post-its describe what must change about how people actually work. This is the most important colour and the one people most want to avoid.

Write observable behaviours, not slogans. Not "be more innovative" but "partners must share successful proposals to common library within 48 hours of winning." Not "embrace collaboration" but "respond to internal queries within 4 business hours."

You may not know exact timescales at first. "Partners must share proposals promptly" is a start. As you discuss, you will realise "promptly" is too vague and refine it to "within 48 hours" or "within a week." The important thing is naming the behaviour that needs to change.

Examples:

  • "Sales team logs all client interactions in CRM same day" (specific)

  • "Sales team must log client interactions consistently" (less specific but still useful)

  • "Finance approves expense requests within 48 hours"

  • "Finance must speed up approval process significantly"

  • "Stop starting proposals from blank document, use template library"

  • "Junior staff empowered to make decisions under £5k without escalation"

  • "Accept that good enough delivered fast beats perfect delivered late"

Pink post-its reveal uncomfortable truths. The senior person who is the bottleneck. The department that hoards information. The process everyone bypasses because it is broken. Write it down even if you cannot yet articulate the exact behaviour change required. "Managing partner approval is a bottleneck" is a start. Through discussion, you will refine it to "managing partner must delegate approval authority for decisions under £25k."

The discomfort you feel writing pink post-its is often proportional to how important they are. If it feels awkward to write down, it probably needs writing down.

The canvas is iterative. Your first pass will be rougher and less detailed than your final version. That is normal. Start with what you know. Add detail as you work through the four strategic dimensions and as you do research on costs and options. The important thing is getting the ideas out and visible, not having perfect information from the start.

7. Priority Mapping And Dependencies

Once you have filled out the canvas with post-its, step back and add dots and arrows to show priority, status, and dependencies. This layer stops you trying to do everything at once and surfaces the critical path. You will not know all of this immediately - some patterns only become clear as you discuss the canvas. That is fine. Mark what you know, discuss, refine.

Red dots: Gaps or blockers that need addressing

Put a red dot on any post-it that represents a significant gap or blocker. This might be a missing capability, a resource you do not have, cultural resistance that will prevent progress, or a dependency on something outside your control.

Some blockers are obvious. "We need an integration developer but don't have one" - clear red dot. Others emerge through discussion. Someone mentions "we tried this before and it failed because finance wouldn't approve spending" - that surfaces a blocker you may not have written down yet. Add a pink post-it about finance approval authority and give it a red dot.

Red dots force honesty about what is stopping you. If you have ten actions but five of them have red dots because you lack the people to do them, that is valuable information. You need to address the people gap before you can execute the actions.

Be as honest as you can. It is tempting to put red dots only on things that feel politically safe to flag. "We need budget" is safe. "Our CFO does not believe in this strategy and will not release budget" is uncomfortable but more accurate. If that is the real blocker, it needs a red dot even if it is awkward to raise.

You may not spot all blockers in the first pass. That is normal. As you discuss dependencies and sequence, more blockers surface. "We can't do that until X, and X is blocked because..." Keep refining.

Green dots: Quick wins that can be delivered fast

Put a green dot on actions that can be done quickly without significant barriers. These are your early wins that build momentum and credibility.

What counts as "quick" depends on your organisation. In a small firm, "quick" might be "done this month." In a large organisation, "quick" might be "done this quarter." The point is these are things you can start and finish relatively fast without major barriers.

You may not immediately know which things are quick wins. Through discussion, you discover that "train staff on prompt engineering" has no blockers and could start next week - green dot. Or you discover that "create proposal template library" seems simple but requires getting 10 partners to contribute their best work, which historically takes months of nagging - not a green dot, possibly a red dot.

Quick wins are useful, but do not mistake them for strategy. Doing easy things feels productive, but if they do not move you meaningfully towards the goal, they are just activity. The best quick wins prove the concept and build confidence. If you can show that automating one specific process saves 10 hours a week and improves accuracy, that makes it easier to get buy-in for more ambitious automation.

Orange dots: Dependencies on another item

Put an orange dot on any post-it that depends on something else happening first. Then draw an arrow from what must happen first to what depends on it.

For example, "train staff on new CRM" depends on "implement CRM platform." You cannot train people on a system that does not exist yet. So the training post-it gets an orange dot, and you draw an arrow from the implementation post-it to the training post-it.

Some dependencies are obvious. Others emerge during discussion. Someone says "we can do that once we have the data in a usable format" - that surfaces a dependency you had not articulated. Add the data preparation as a blue resource post-it if it is not already there, then draw the arrow.

Dependencies surface your critical path. If everything depends on hiring a specific person and you have not hired them yet, that is your constraint. No point planning everything else in detail until you address the constraint.

Multiple orange dots on one item means high risk. If something depends on three other things all happening first, any one of them being delayed will delay the whole thing. That might be unavoidable, but you should know about it.

Be realistic about dependencies. Not everything that would be nice to have first is a true dependency. "Train staff on prompt engineering" does not strictly depend on having the full AI system deployed. You could train them on principles using publicly available tools, then apply those skills to your specific system later. Thinking through which dependencies are hard (cannot proceed without this) versus soft (would be easier with this) helps sequence your work.

Stars: High priority or critical success factors

Put a star on items that are absolutely critical. If these do not happen, nothing else matters.

Stars are different from red dots. Red dots are blockers. Stars are priorities. An item can have both a star (critical) and a red dot (currently blocked). That combination - starred and red-dotted - tells you where to focus urgently.

You should not have many stars. If everything is critical, nothing is critical. Three to five starred items across the whole canvas is plenty. These are the genuine make-or-break elements.

What deserves a star? Your goal definitely gets a star - that is what you are trying to achieve. Within the actions and resources, the items that are absolutely essential to reaching that goal. In our proposal example, "Deploy secure AI chat system" probably gets a star because without it, you cannot achieve the goal. "Train staff on prompt engineering" might not get a star - it is important, but you could potentially achieve the goal without it or do it later.

Discussing which items deserve stars often surfaces useful disagreement. One person thinks something is critical, another thinks it is nice-to-have. That conversation clarifies what you really believe is essential versus what is supporting.

Using the dots and arrows together

The combination of dots and arrows gives you visual prioritisation:

  • Stars + green dots: High priority and achievable quickly - start here

  • Stars + red dots: Critical but currently blocked - needs urgent attention to unblock

  • Stars + orange dots: Critical but depends on other things - make sure those dependencies are progressing

  • Lots of arrows pointing to one item: This is on the critical path - if it is delayed, everything else is delayed

  • Red dots on items with many arrows pointing from them: The blocker is stopping multiple other things - high-leverage place to intervene

You will not get all of this right in the first pass. The value is in the conversation. Someone points out a dependency nobody had thought about. Someone challenges whether something really is a blocker or just an excuse. Someone realises that the thing everyone marked as critical actually is not - there is another way to achieve the goal.

The canvas is a thinking tool, not a reporting tool. The messy wall of post-its with dots and arrows scribbled on them is more valuable than a clean PowerPoint slide, because the mess represents real thinking about real constraints.
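
On a big canvas, the same patterns can be pulled out of a digital capture of the dots and arrows. A minimal sketch - the items and markings are hypothetical, and this is an aid to the conversation, not a substitute for it:

    # Illustrative only: encode dots per item, and arrows as
    # (prerequisite -> dependent) pairs, then surface the patterns above.
    from collections import Counter

    dots = {  # hypothetical items and markings
        "Deploy secure AI system": {"star", "red"},
        "Train staff on prompt engineering": {"orange"},
        "Create quality checklist": {"star", "green"},
        "Standardise proposal format": {"green"},
    }
    arrows = [  # prerequisite -> dependent
        ("Deploy secure AI system", "Train staff on prompt engineering"),
        ("Create quality checklist", "Train staff on prompt engineering"),
    ]

    inbound = Counter(dependent for _, dependent in arrows)
    outbound = Counter(prereq for prereq, _ in arrows)

    for item, marks in dots.items():
        if {"star", "green"} <= marks:
            print(f"Start here (critical and quick): {item}")
        if {"star", "red"} <= marks:
            print(f"Unblock urgently (critical but blocked): {item}")
        if inbound[item] >= 2:
            print(f"Critical path ({inbound[item]} arrows point here): {item}")
        if "red" in marks and outbound[item] >= 1:
            print(f"High-leverage blocker (frees {outbound[item]} item(s)): {item}")

Each check mirrors one of the patterns listed above. The code adds nothing the wall does not already show - it just scales past what one person can eyeball.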

8. Running The Workshop

Facilitation is where this either works or produces nothing useful. The framework is simple. Getting people to be honest is not.

Set up the room properly

You need three roles: facilitator, scribe, and observer.

The facilitator runs the session, asks questions, pushes for specificity, and manages the group dynamics. They should be standing, moving around, writing on post-its, and keeping energy high.

The scribe captures what is said in real time, either on a laptop or a flip chart visible to everyone. This is not minutes - it is capturing the reasoning behind decisions. When someone says "we tried that before and it failed because finance wouldn't approve," the scribe captures that. When the group decides something is not a blocker, the scribe notes why. This context is essential when you write up the workshop later.

The observer watches the dynamics. Who is not speaking? Who gets shut down when they try to contribute? What topics create tension? Are there patterns - like every time someone mentions a specific department, the energy drops? The observer takes notes on what is not being said as much as what is being said. They share observations during breaks or at the end.

If you are running this internally, the observer role is critical. The facilitator is focused on the content. The observer is focused on the people and the dynamics. After the workshop, the observer's notes often explain why certain things are blockers or why certain culture changes will be hard.

Set the tone early

In the first five minutes, make it clear this is not a theoretical exercise. You are here to make decisions, not produce another strategic document that sits on a shelf. Anything that comes out of this session needs to be something you are prepared to act on.

Also make it clear that political answers and corporate speak will be challenged. If someone says "we need to be more customer-centric" ask them what that means in practice. If they say "improve efficiency" ask them efficient at what, measured how, by when.

Example: A consulting firm starts their workshop with the managing partner saying "we need to embrace AI across the business." The facilitator writes that on a post-it, sticks it on the wall, then asks: "If we successfully embrace AI across the business, what will be different in 12 months? What will clients notice? What will staff notice? What will the accounts show?"

Silence. Then someone says "we'll be more efficient." Facilitator: "Efficient at what? Give me a specific example of something that currently takes X time and will take Y time." Eventually: "Proposals currently take 3 weeks, we need them to take 5 days." Now you have something specific to work with.

Start with the goal, and do not move on until it is specific

This usually takes longer than you expect. People want to list multiple goals or keep goals vague enough that nobody can be held accountable.

Push back. If the goal is "improve customer satisfaction" ask how they will measure it. If they say "NPS scores" ask from what to what, by when. If they cannot answer, you do not have a goal yet.

It is fine to have an awkward silence here. Let people sit with the discomfort of not being able to articulate what success looks like. That discomfort is useful. It surfaces the fact that maybe the leadership team is not actually aligned on what they are trying to achieve.

Example: A professional services firm says their goal is "better quality work." The facilitator asks "how do you measure quality?" Someone says "client feedback." The facilitator asks "what's your current client satisfaction score?" Nobody knows. The facilitator asks "do you track complaints?" They do - currently 8 complaints per quarter. "So would reducing complaints to 2 per quarter be success?" Yes. "What about client retention - what's your current retention rate?" 85%. "What would success look like?" 92%. Now you have measurable goals: reduce complaints from 8 to 2 per quarter, increase retention from 85% to 92%.

This conversation took 20 minutes and felt uncomfortable because they realised they were talking about quality without measuring it. But now they have a goal they can work backwards from.
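
Goals written as baseline-and-target pairs also make later reviews mechanical rather than anecdotal. A minimal sketch using the figures from this example - the "current" values are invented for illustration:

    # Illustrative only: goals as (metric, baseline, target); "current" values
    # are invented to show how a progress review becomes mechanical.
    goals = [
        ("Complaints per quarter", 8, 2),
        ("Client retention %", 85, 92),
    ]

    def progress(metric, baseline, target, current):
        # Fraction of the distance travelled from baseline to target.
        done = (current - baseline) / (target - baseline)
        return f"{metric}: {done:.0%} of the way from {baseline} to {target}"

    print(progress(*goals[0], current=5))   # complaints now at 5 per quarter
    print(progress(*goals[1], current=88))  # retention now at 88%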

Work through the four strategic dimensions systematically

Once you have the goal, move to actions. Go through each dimension in turn: Client Value, Risk, Teams, Operations. What specifically needs to happen in each area to achieve the goal?

Do not let people jump to solutions yet. If someone says "we need AI" ask what they think AI will do. If they say "automate processes" ask which processes, and why automating them will help achieve the goal. Keep pushing for specificity.

Write every idea on a post-it, even if it seems tangential. You can remove things later. In the moment, people need to feel heard. If you dismiss ideas too quickly, people disengage.

Example: Working through Client Value for the proposal speed goal, someone says "we should improve our proposal templates." The facilitator writes that on a yellow post-it and asks "how will better templates help us hit 5-day turnaround?" Someone else says "currently everyone starts from scratch, templates would give us a starting point." Facilitator: "How much time does starting from scratch add?" "Probably 2-3 days." Facilitator: "So if we had templates, what would the process look like?" "Junior staff could draft using templates, senior staff review, partner approves." Facilitator: "How long would that take?" "Probably 2 days if everyone sticks to timelines."

Now you are getting somewhere. The action is not just "improve templates" but "create proposal template library covering 80% of common bids" and the culture change is "stop starting from scratch, use templates as starting point."

Move to resources only after actions are clear

This is important. If you start with "what tools do we need" you get a shopping list. If you start with "what do we need to do" and then ask "what do we need to do that," you get a resource plan that is actually connected to outcomes.

Again, push for specificity. If someone says "we need better technology" that means nothing. If they say "we need a CRM that integrates with our existing finance system and has API access for custom workflows" that is something you can actually evaluate and budget for.

Example: The firm has identified "build searchable proposal library" as an action. The facilitator asks "what do you need to build that?" Someone says "a system to store proposals." Facilitator: "What system? Where? Who can access it?" Through discussion: they need it in their existing document management system (already have it), they need proposals in searchable format (currently PDFs and Word docs, need standardising), and they need someone to tag and categorise 3 years of proposals (approximately 200 proposals, 2-3 days' work).

Blue post-its now read: "Document management system with search capability (already have)", "Standardised proposal format for past 3 years (need to create)", "Resource to categorise and tag 200 proposals, 2-3 days, approximately £2k if external or internal staff time."

This is much more specific than "we need a system."

The culture column is where the real conversation happens

By the time you get to pink post-its, people are warmed up. This is where you ask the difficult questions.

"What needs to change about how people actually work?" Not what should change, what needs to change if this goal is going to happen.

Listen for passive voice and euphemisms. "There needs to be better communication between departments" is too vague. Ask who needs to communicate what to whom, and what is currently preventing that. Often you will find there is a specific person or dynamic that everyone knows about but nobody wants to name.

Your job as facilitator is to name it. Not in an aggressive way, but clearly. "It sounds like what you are saying is that sign-off takes three weeks because Sarah insists on reviewing everything personally and she is often unavailable. Is that accurate?" If everyone nods, write it down.

Example: Discussing why proposals take three weeks, someone mentions "the approval process is slow." Facilitator asks "who approves?" "Partners." "How many partners need to approve?" "All three." "How long does each approval take?" Silence. Then someone says "Depends on the partner." Another person: "James usually takes a week." Another: "And you can't get the next approval until James has done his."

The facilitator writes a pink post-it: "James must review proposals within 48 hours, not one week." Then asks James (who is in the room): "What would need to change for you to review within 48 hours?" James: "I'd need proposals to be higher quality when they reach me so I'm not rewriting everything." Facilitator: "What would higher quality look like?" James describes what good looks like. Facilitator writes a new yellow action: "Create quality checklist for proposals before partner review" and a new pink culture change: "Proposals must meet quality checklist before going to partners - no exceptions."

This is uncomfortable. You have named James as a bottleneck. You have surfaced that the real problem is quality coming to him, not his review speed. But now you have specific, actionable items instead of vague "improve approval process."

This is why external facilitation often works better than internal. An outsider can say things that an insider cannot say without risking their career. The observer's notes during this section are particularly valuable - they capture who tensed up, who looked away, who tried to change the subject.

Use dots and arrows to prioritise

Once the canvas is filled, step back and look at the whole thing. Ask the group: where are the gaps? What are the quick wins? What dependencies exist?

Hand out dot stickers and let people mark items as they discuss them. Red for blockers, green for quick wins, orange for dependencies, stars for critical items.

This creates a visual picture of where effort needs to focus. If you have 30 actions but only five green dots, you are planning too many hard things simultaneously. If everything has a star, your priorities are not clear.

Example: The group puts red dots on "Partners must share successful proposals to library" (known resistance), "Deploy secure AI system" (needs security approval, currently pending), and "Train staff on prompt engineering" (nobody knows how to do this training yet). Green dots go on "Create quality checklist" (can be done this week) and "Standardise proposal format going forward" (easy change). Stars go on the goal and on "Deploy secure AI system" (critical to achieving the goal).

Looking at the pattern, the facilitator asks: "You have stars on the AI system but also a red dot - it needs security approval. How long does security approval take?" "Usually 2-3 months." "So everything is blocked for 2-3 months?" This surfaces that they need to prioritise the security approval process and potentially start it immediately, even before the workshop report is finalised.

End with commitments, not just plans

Before you finish, get specific about what happens next. Who is responsible for what? When is the next review? What decisions need to be made before you can proceed?

Do not let this turn into "we will take this away and think about it." That is where momentum dies. Get at least one concrete next action committed before people leave the room.

Example: Before closing, the facilitator writes on a flip chart visible to everyone:

Immediate next actions:

  • Sarah: Initiate security approval for Azure OpenAI deployment - by Friday

  • James: Draft proposal quality checklist - by end of next week

  • Michael: Get quotes for integration developer - by Monday

  • Leadership team: Decide budget allocation for this initiative - meeting scheduled for two weeks from today

Next review: Four weeks from today, 2-hour session to review progress and finalise detailed plan.

Everyone in the room commits verbally to their action. The scribe has captured it. The observer notes who hesitated. You have momentum.

The facilitator also clarifies what will be delivered: "We'll provide a clean version of this canvas, a written report explaining the reasoning behind key decisions, and a prioritised action plan within one week. The report will include the observer's notes on dynamics and potential risks we should watch for."
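
If the scribe captured the post-its digitally, producing the promised clean version of the canvas is largely mechanical. A minimal sketch - the column names and entries are hypothetical:

    # Illustrative only: group captured post-its by canvas column and colour
    # to draft the clean write-up. All entries are hypothetical.
    from itertools import groupby

    postits = [  # (column, colour, text)
        ("Goal", "green", "Proposals in 5 days, not 3 weeks"),
        ("What to do", "yellow", "Create proposal template library"),
        ("What to do", "yellow", "Create quality checklist for partner review"),
        ("What we need", "blue", "Integration developer, ~4 weeks, ~£25k"),
        ("Ways of working", "pink", "Proposals meet quality checklist before partner review"),
    ]

    postits.sort(key=lambda p: (p[0], p[1]))
    for (column, colour), group in groupby(postits, key=lambda p: (p[0], p[1])):
        print(f"{column} ({colour} post-its):")
        for _, _, text in group:
            print(f"  - {text}")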

8. Running The Workshop

Facilitation is where this either works or produces nothing useful. The framework is simple. Getting people to be honest is not.

Set up the room properly

You need three roles: facilitator, scribe, and observer.

The facilitator runs the session, asks questions, pushes for specificity, and manages the group dynamics. They should be standing, moving around, writing on post-its, and keeping energy high.

The scribe captures what is said in real-time, either on a laptop or flip chart visible to everyone. This is not minutes - it is capturing the reasoning behind decisions. When someone says "we tried that before and it failed because finance wouldn't approve," the scribe captures that. When the group decides something is not a blocker, the scribe notes why. This context is essential when you write up the workshop later.

The observer watches the dynamics. Who is not speaking? Who gets shut down when they try to contribute? What topics create tension? Are there patterns - like every time someone mentions a specific department, the energy drops? The observer takes notes on what is not being said as much as what is being said. They share observations during breaks or at the end.

If you are running this internally, the observer role is critical. The facilitator is focused on the content. The observer is focused on the people and the dynamics. After the workshop, the observer's notes often explain why certain things are blockers or why certain culture changes will be hard.

Set the tone early

In the first five minutes, make it clear this is not a theoretical exercise. You are here to make decisions, not produce another strategic document that sits on a shelf. Anything that comes out of this session needs to be something you are prepared to act on.

Also make it clear that political answers and corporate speak will be challenged. If someone says "we need to be more customer-centric" ask them what that means in practice. If they say "improve efficiency" ask them efficient at what, measured how, by when.

Example: A consulting firm starts their workshop with the managing partner saying "we need to embrace AI across the business." The facilitator writes that on a post-it, sticks it on the wall, then asks: "If we successfully embrace AI across the business, what will be different in 12 months? What will clients notice? What will staff notice? What will the accounts show?"

Silence. Then someone says "we'll be more efficient." Facilitator: "Efficient at what? Give me a specific example of something that currently takes X time and will take Y time." Eventually: "Proposals currently take 3 weeks, we need them to take 5 days." Now you have something specific to work with.

Start with the goal, and do not move on until it is specific

This usually takes longer than you expect. People want to list multiple goals or keep goals vague enough that nobody can be held accountable.

Push back. If the goal is "improve customer satisfaction" ask how they will measure it. If they say "NPS scores" ask from what to what, by when. If they cannot answer, you do not have a goal yet.

It is fine to have an awkward silence here. Let people sit with the discomfort of not being able to articulate what success looks like. That discomfort is useful. It surfaces the fact that maybe the leadership team is not actually aligned on what they are trying to achieve.

Example: A professional services firm says their goal is "better quality work." The facilitator asks "how do you measure quality?" Someone says "client feedback." The facilitator asks "what's your current client satisfaction score?" Nobody knows. The facilitator asks "do you track complaints?" They do - currently 8 complaints per quarter. "So would reducing complaints to 2 per quarter be success?" Yes. "What about client retention - what's your current retention rate?" 85%. "What would success look like?" 92%. Now you have measurable goals: reduce complaints from 8 to 2 per quarter, increase retention from 85% to 92%.

This conversation took 20 minutes and felt uncomfortable because they realised they were talking about quality without measuring it. But now they have a goal they can work backwards from.

Work through the four strategic dimensions systematically

Once you have the goal, move to actions. Go through each dimension in turn: Client Value, Risk, Teams, Operations. What specifically needs to happen in each area to achieve the goal?

Do not let people jump to solutions yet. If someone says "we need AI" ask what they think AI will do. If they say "automate processes" ask which processes, and why automating them will help achieve the goal. Keep pushing for specificity.

Write every idea on a post-it, even if it seems tangential. You can remove things later. In the moment, people need to feel heard. If you dismiss ideas too quickly, people disengage.

Example: Working through Client Value for the proposal speed goal, someone says "we should improve our proposal templates." The facilitator writes that on a yellow post-it and asks "how will better templates help us hit 5-day turnaround?" Someone else says "currently everyone starts from scratch, templates would give us a starting point." Facilitator: "How much time does starting from scratch add?" "Probably 2-3 days." Facilitator: "So if we had templates, what would the process look like?" "Junior staff could draft using templates, senior staff review, partner approves." Facilitator: "How long would that take?" "Probably 2 days if everyone sticks to timelines."

Now you are getting somewhere. The action is not just "improve templates" but "create proposal template library covering 80% of common bids" and the culture change is "stop starting from scratch, use templates as starting point."

Move to resources only after actions are clear

This is important. If you start with "what tools do we need" you get a shopping list. If you start with "what do we need to do" and then ask "what do we need to do that," you get a resource plan that is actually connected to outcomes.

Again, push for specificity. If someone says "we need better technology" that means nothing. If they say "we need a CRM that integrates with our existing finance system and has API access for custom workflows" that is something you can actually evaluate and budget for.

Example: The firm has identified "build searchable proposal library" as an action. The facilitator asks "what do you need to build that?" Someone says "a system to store proposals." Facilitator: "What system? Where? Who can access it?" Through discussion: they need it in their existing document management system (already have it), they need proposals in searchable format (currently PDFs and Word docs, need standardising), they need someone to tag and categorise 3 years of proposals (approximately 200 proposals, 2-3 days work).

Blue post-its now read: "Document management system with search capability (already have)", "Standardised proposal format for past 3 years (need to create)", "Resource to categorise and tag 200 proposals, 2-3 days, approximately £2k if external or internal staff time."

This is much more specific than "we need a system."

The culture column is where the real conversation happens

By the time you get to pink post-its, people are warmed up. This is where you ask the difficult questions.

"What needs to change about how people actually work?" Not what should change, what needs to change if this goal is going to happen.

Listen for passive voice and euphemisms. "There needs to be better communication between departments" is too vague. Ask who needs to communicate what to whom, and what is currently preventing that. Often you will find there is a specific person or dynamic that everyone knows about but nobody wants to name.

Your job as facilitator is to name it. Not in an aggressive way, but clearly. "It sounds like what you are saying is that sign-off takes three weeks because Sarah insists on reviewing everything personally and she is often unavailable. Is that accurate?" If everyone nods, write it down.

Example: Discussing why proposals take three weeks, someone mentions "the approval process is slow." Facilitator asks "who approves?" "Partners." "How many partners need to approve?" "All three." "How long does each approval take?" Silence. Then someone says "Depends on the partner." Another person: "James usually takes a week." Another: "And you can't get the next approval until James has done his."

The facilitator writes a pink post-it: "James must review proposals within 48 hours, not one week." Then asks James (who is in the room): "What would need to change for you to review within 48 hours?" James: "I'd need proposals to be higher quality when they reach me so I'm not rewriting everything." Facilitator: "What would higher quality look like?" James describes what good looks like. Facilitator writes a new yellow action: "Create quality checklist for proposals before partner review" and a new pink culture change: "Proposals must meet quality checklist before going to partners - no exceptions."

This is uncomfortable. You have named James as a bottleneck. You have surfaced that the real problem is quality coming to him, not his review speed. But now you have specific, actionable items instead of vague "improve approval process."

This is why external facilitation often works better than internal. An outsider can say things that an insider cannot say without risking their career. The observer's notes during this section are particularly valuable - they capture who tensed up, who looked away, who tried to change the subject.

Use dots and arrows to prioritise

Once the canvas is filled, step back and look at the whole thing. Ask the group: where are the gaps? What are the quick wins? What dependencies exist?

Hand out dot stickers and let people mark items as they discuss them. Red for blockers, green for quick wins, orange for dependencies, stars for critical items.

This creates a visual picture of where effort needs to focus. If you have 30 actions but only five green dots, you are planning too many hard things simultaneously. If everything has a star, your priorities are not clear.

Example: The group puts red dots on "Partners must share successful proposals to library" (known resistance), "Deploy secure AI system" (needs security approval, currently pending), and "Train staff on prompt engineering" (nobody knows how to do this training yet). Green dots go on "Create quality checklist" (can be done this week) and "Standardise proposal format going forward" (easy change). Stars go on the goal and on "Deploy secure AI system" (critical to achieving the goal).

Looking at the pattern, the facilitator asks: "You have stars on the AI system but also a red dot - it needs security approval. How long does security approval take?" "Usually 2-3 months." "So everything is blocked for 2-3 months?" This surfaces that they need to prioritise the security approval process and potentially start it immediately, even before the workshop report is finalised.

End with commitments, not just plans

Before you finish, get specific about what happens next. Who is responsible for what? When is the next review? What decisions need to be made before you can proceed?

Do not let this turn into "we will take this away and think about it." That is where momentum dies. Get at least one concrete next action committed before people leave the room.

Example: Before closing, the facilitator writes on a flip chart visible to everyone:

Immediate next actions:

  • Sarah: Initiate security approval for Azure OpenAI deployment - by Friday

  • James: Draft proposal quality checklist - by end of next week

  • Michael: Get quotes for integration developer - by Monday

  • Leadership team: Decide budget allocation for this initiative - meeting scheduled for two weeks from today

Next review: Four weeks from today, 2-hour session to review progress and finalise detailed plan.

Everyone in the room commits verbally to their action. The scribe has captured it. The observer notes who hesitated. You have momentum.

The facilitator also clarifies what will be delivered: "We'll provide a clean version of this canvas, a written report explaining the reasoning behind key decisions, and a prioritised action plan within one week. The report will include the observer's notes on dynamics and potential risks we should watch for."

9. What Happens Next?

The canvas is not the end. It is the beginning. You now have clarity on what you are trying to achieve, what needs to happen, what resources you need, and what has to change about culture. Now you need to actually do it.

Turn the canvas into an action plan. Take the yellow post-its and turn them into a project plan with owners and deadlines. Prioritise based on your dots and arrows. Quick wins with green dots should probably happen first to build momentum. Items with red dots need a plan to unblock them. Dependencies mean some things have to happen in sequence.

This is where a lot of strategy work falls apart. The workshop produces energy and clarity, then six weeks later nobody has done anything because it was not clear who was responsible for what.

Assign specific owners to each action. Not departments, individuals. Someone whose job it is to make sure that thing happens. Give them deadlines. Put review points in the diary before you leave the workshop.
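
If a concrete form helps, here is one hypothetical way to hold canvas actions as structured records rather than loose notes: each action carries one named owner, one deadline, and its dot colour from the workshop, so "who is responsible for what" is never ambiguous. The items echo the earlier worked example, but the dates are invented for illustration; a shared spreadsheet with the same columns works just as well.

```python
# Illustrative sketch: canvas actions as structured records so ownership,
# deadlines, and workshop priorities survive the journey from post-it to plan.
# All dates below are invented; the items echo the worked example in the text.
from dataclasses import dataclass
from datetime import date

@dataclass
class Action:
    description: str
    owner: str              # an individual, never a department
    deadline: date
    dot: str                # "green" quick win, "red" blocker, "orange" dependency
    critical: bool = False  # starred on the canvas

plan = [
    Action("Initiate security approval for AI deployment", "Sarah", date(2025, 6, 6), "red", critical=True),
    Action("Draft proposal quality checklist", "James", date(2025, 6, 13), "green"),
    Action("Get quotes for integration developer", "Michael", date(2025, 6, 9), "orange"),
]

quick_wins = [a for a in plan if a.dot == "green"]  # do these first for momentum
blockers = [a for a in plan if a.dot == "red"]      # these need an unblocking plan
print(f"{len(quick_wins)} quick wins, {len(blockers)} blockers to unblock")

for a in sorted(plan, key=lambda a: a.deadline):
    flag = " [CRITICAL]" if a.critical else ""
    print(f"{a.deadline}  {a.owner:<8} {a.description}{flag}")
```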

Address the culture changes explicitly. The pink post-its do not happen by accident. Changing how people work requires deliberate effort.

If the culture change is "partners must use proposal templates," that needs a plan. Who creates the templates? How do you train people to use them? What happens if people ignore them? Who enforces it, and what authority do they have?

Culture change requires visible leadership support. If the managing partner says templates are important but then bypasses them for an important client, everyone notices. The fastest way to kill a culture change initiative is for senior people to publicly ignore it.

Be prepared for this to take time. You can implement a new system in weeks. Changing entrenched behaviours takes months, sometimes longer. Track it, measure it, talk about it regularly.

Track progress against the goal. You defined a specific, measurable goal at the start. Now measure whether you are making progress towards it.

Put monthly reviews in the diary. Look at the metrics. Are you on track? If not, why not? What is blocking progress? Do you need to adjust the plan?
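
Checking whether you are on track is simple arithmetic, not a dashboard project. The sketch below is a hedged illustration using the earlier proposal-turnaround example (15 working days down to 5); the six-month timeframe and the monthly readings are invented assumptions, not figures from the text.

```python
# Hedged illustration: a linear "are we on track?" check against a measurable goal.
# Goal from the worked example: proposal turnaround from 15 working days to 5.
# The 6-month timeframe and the readings below are invented assumptions.
baseline, target, months_total = 15.0, 5.0, 6

def expected(month: int) -> float:
    """Where the metric should sit after `month` months of steady progress."""
    return baseline + (target - baseline) * (month / months_total)

readings = {1: 13.0, 2: 12.5}  # hypothetical figures from the monthly reviews

for month, actual in sorted(readings.items()):
    exp = expected(month)
    status = "on track" if actual <= exp else "OFF TRACK - find the blocker"
    print(f"Month {month}: {actual:.1f} days vs expected {exp:.1f} -> {status}")
# Month 1 passes (13.0 vs 13.3); month 2 fails (12.5 vs 11.7) - time to ask why.
```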

Strategy is not a one-off exercise. It is an ongoing process of learning and adjusting. You will discover things as you go. Some actions will work better than expected. Some will be harder than you thought. Some will turn out to be the wrong thing entirely. That is fine, as long as you notice and adjust.

Know when to get help. You can run this process yourself. But there are situations where external facilitation or implementation support makes sense.

If your leadership team has significant political dynamics or trust issues, an external facilitator can ask questions and challenge assumptions in ways an insider cannot. If you lack specific technical capabilities (building AI systems, integrating APIs, designing workflows), bringing in people who have done it before will be faster and cheaper than learning by trial and error.

If you are using this canvas and realise you need help, that is what Bloch AI does. We facilitate these workshops as standalone engagements, or we help you execute what comes out of them. We build automation and AI solutions using open-source tools. We do not sell you software licences or lock you into proprietary platforms. We build things that work and hand them over.

Either way, you now have a method for thinking clearly about strategy. Use it honestly, and you will get clarity on what needs to happen. That clarity is worth more than another set of slides about transformation and innovation.
