

Blind Spots: The Reason Teams Make Bad Decisions
Most bad decisions aren’t stupidity. They’re blind spots.
The meeting ends. Everyone feels aligned. The decision looks “obvious.” Then reality shows up and humiliates the plan.
Blind spots are why smart teams don’t see what’s coming — not because they don’t care, but because the process wasn’t designed to catch what the room couldn’t see.
This piece breaks down the biggest blind spots (including Juliet Bourke’s three), how cognitive drift quietly manufactures them over time, and the practical tools to spot blind spots in team decisions before they cost you.
What is a blind spot in a team decision?
A blind spot is a systematic gap in a decision process that makes a team feel certain… while being wrong.
Not because the team didn’t care. Because the team didn’t see what they weren’t set up to see.
Blind spots show up when:
the “wrong” voices dominate (or the right voices stay quiet)
the room over-trusts one kind of evidence
the team simplifies complexity too early
the decision is shaped by incentives and politics, not reality
everyone confuses agreement with quality
And the tricky part: blind spots usually feel like clarity.
Dr Juliet Bourke’s three decision blind spots (the ones that quietly ruin outcomes)
Juliet’s work is useful because it frames blind spots as structural — not personal.
1) Social bias: who gets heard (and who doesn’t)
Social bias is the invisible filter that decides:
whose ideas land
who gets interrupted
who gets invited next time
who gets labelled “difficult” for raising the wrong question at the wrong moment
This is also where “overconfidence” hides. A room doesn’t become confident because it’s correct — it becomes confident because dissent becomes socially expensive.
How it shows up
“Let’s keep the group small.” (Translation: familiar.)
“We’ll loop them in later.” (Translation: after it’s already decided.)
“We don’t need ops/comms/frontline for this.” (Translation: we’ll discover the risk in production.)
Tool to spot it
A voice audit: Who spoke? Who didn’t? Who shifted the decision? Who got ignored? If you can’t answer that cleanly, you’re guessing about whether your process was smart or just polite.
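If you want the audit to be mechanical rather than a vibe, a few lines of code can keep the tally. A minimal sketch, assuming a hypothetical Turn record per speaking turn; the field names are illustrative, not part of Bourke’s framework:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    interrupted: bool = False       # was this person cut off?
    shifted_decision: bool = False  # did this contribution change the outcome?

def voice_audit(invitees: list[str], turns: list[Turn]) -> dict:
    # Tally who spoke, who stayed silent, and who actually moved the decision.
    spoke = Counter(t.speaker for t in turns)
    return {
        "spoke": dict(spoke),
        "silent": [p for p in invitees if p not in spoke],
        "interrupted": sorted({t.speaker for t in turns if t.interrupted}),
        "shifted_decision": sorted({t.speaker for t in turns if t.shifted_decision}),
    }

turns = [Turn("Ana", shifted_decision=True), Turn("Ben", interrupted=True), Turn("Ana")]
print(voice_audit(["Ana", "Ben", "Chloe", "Dev"], turns))
# 'silent': ['Chloe', 'Dev'] is the line worth staring at.
```

If the silent list keeps showing the same names meeting after meeting, the room is filtering, not deciding.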
Cognitive drift: how organisations quietly train out diverse thinking
This is the part that makes blind spots feel inevitable.
Dr Bourke describes a pattern you see in a lot of organisations: there’s often more diversity of thinking lower down — but as people move up, the thinking narrows.
Not because people become less capable. Because two forces kick in:
Selection pressure: leaders promote people who look like them — and “look like” includes how they think.
Adaptation pressure: people learn what gets rewarded in senior rooms and start to edit themselves.
Over time, you get cognitive drift — the slow pull toward a narrower way of thinking. The room feels aligned, but it’s often just converged.
Two short lines from Juliet that capture it:
“Leaders were selecting people who looked just like them… also in the way that people think.”
“As people progressed, they… shed that skin of what wasn’t being valued.”
What cognitive drift creates
fewer dissenting views (because dissent becomes risky)
confidence masked as “decisiveness”
decisions optimised for speed and outcomes while skipping risk, evidence, and stakeholder impact
“we didn’t see it coming” moments that were visible — just not welcome in the room
How to spot drift early
Ask one uncomfortable question:
Which thinking lens gets praised here… and which one gets punished?
If the answer is obvious, drift is already happening.
2) Information bias: what counts as “evidence”
Information bias is what happens when a room overvalues some inputs and undervalues others.
Common patterns:
spreadsheets beat lived experience
anecdotes beat data
internal opinions beat external signals
last week’s loudest story beats base rates and history
How it shows up
“We don’t have enough data yet.” (Translation: we don’t like the data we have.)
“Let’s run a survey.” (Translation: we want comfort, not clarity.)
“We already know what customers think.” (Translation: we’ve mistaken familiarity for insight.)
A tool to spot it
An “evidence map” with three columns:
What we know (validated)
What we think (assumptions)
What we’re ignoring (missing evidence)
Most teams never write column three down. That’s where the blind spot lives.
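“Written down” can literally mean written down in code. A minimal sketch of the map, with one deliberately strict rule: it refuses to pass while column three is empty. The field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceMap:
    know: list[str] = field(default_factory=list)      # validated
    think: list[str] = field(default_factory=list)     # assumptions
    ignoring: list[str] = field(default_factory=list)  # missing evidence

    def check(self) -> None:
        # The rule that matters: column three may never be empty.
        if not self.ignoring:
            raise ValueError("Column three is empty. That's where the blind spot lives.")

m = EvidenceMap(
    know=["Churn rose 4% last quarter"],
    think=["Customers will accept the price change"],
    ignoring=["Frontline complaint logs", "Competitor pricing moves"],
)
m.check()  # passes only because the team actually wrote column three down
```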
3) Capacity bias: how much complexity the room can hold
Capacity bias is a cognitive-load problem. Under pressure, humans simplify.
That means teams:
cut options too early
skip second-order impacts
choose the clean narrative
default to speed because it feels like competence
How it shows up
“We’re overthinking this.” (Translation: we’re overloaded.)
“We need a decision by Friday.” (Translation: urgency is driving the process.)
“Let’s go with the safe option.” (Translation: risk hasn’t been understood — just avoided.)
Tool to spot it
A second-order checkpoint:
If we do this, what breaks?
Who pays for it later?
What gets harder downstream?
Capacity bias gets worse when cognitive drift has already narrowed the room — because under pressure, teams fall back to the few lenses they still value.
The blind spots teams don’t name (but feel later)
Dr Bourke's three cover the core mechanics. Here are additional repeat offenders you can call out explicitly.
The overconfidence blind spot
The room is certain because:
the story is neat
the leader is decisive
nobody wants to be the person who slows it down
Fix: appoint a “disconfirming lead” whose job is to find what would make the decision wrong.
The incentives blind spot
Teams say they’re deciding for “impact” — but behave as if they’re deciding for:
optics
politics
budget protection
not getting blamed
Fix: ask “What does each stakeholder win if this is approved?” Then write it down.
The “we asked people” blind spot
Consultation theatre: lots of inputs, no proof they shaped anything.
Fix: an audit trail that shows:
who contributed
what themes emerged
what changed in the decision as a result

So where do you find tools to spot blind spots in team decisions?
Here’s the practical stack. Not “better collaboration.” Actual mechanisms.
Tool 1: A panel strength check (before you ask a question)
Before gathering input, you need to know whether the room is strong enough to trust the result.
A strong decision panel has balance across:
decision styles (how people frame decisions)
experience lenses (strategy, delivery, commercial, engagement, policy, etc.)
diversity indicators (so you can spot homogeneity and conformity risk early)
If you can’t see imbalance up front, you’ll only discover it after the backlash, the delivery failure, or the “why didn’t anyone think of that?” moment.
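As a sketch, the check can be as simple as counting distinct values per dimension. The dimensions, labels, and the “fewer than two values means homogeneous” rule below are illustrative assumptions, not Wizer’s actual scoring:

```python
from collections import Counter

panel = [
    {"name": "Ana",   "style": "momentum", "lens": "strategy"},
    {"name": "Ben",   "style": "momentum", "lens": "delivery"},
    {"name": "Chloe", "style": "momentum", "lens": "strategy"},
]

def strength_report(panel: list[dict], dimensions=("style", "lens")) -> dict:
    report = {}
    for dim in dimensions:
        counts = Counter(p[dim] for p in panel)
        # Flag any dimension where the panel collapses to a single value.
        report[dim] = {"counts": dict(counts), "homogeneous": len(counts) < 2}
    return report

print(strength_report(panel))
# style is all 'momentum': the imbalance is visible before a question is asked.
```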

Tool 2: A recommendation engine for “who’s missing”
Most teams don’t need “more people.” They need the right missing people.
A recommendation engine should answer:
who should be involved for this decision type
what’s missing (style, experience, diversity)
who inside the organisation fits that gap
This is exactly the kind of blind spot that causes expensive rework — because the people who would have spotted the risk were never invited.
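A toy version of that engine, assuming each decision type has a required set of lenses and each person carries a known set. All the names, decision types, and lenses here are hypothetical:

```python
REQUIRED_LENSES = {
    "pricing_change": {"commercial", "frontline", "policy"},
}

PEOPLE = {
    "Ana": {"strategy", "commercial"},
    "Ben": {"delivery"},
    "Dev": {"frontline", "policy"},
}

def recommend_missing(decision_type: str, panel: list[str]) -> dict:
    covered = set().union(*(PEOPLE[p] for p in panel)) if panel else set()
    gaps = REQUIRED_LENSES[decision_type] - covered
    # Suggest colleagues who cover at least one missing lens.
    candidates = sorted(n for n, lenses in PEOPLE.items()
                        if n not in panel and lenses & gaps)
    return {"gaps": sorted(gaps), "candidates": candidates}

print(recommend_missing("pricing_change", ["Ana", "Ben"]))
# {'gaps': ['frontline', 'policy'], 'candidates': ['Dev']}
```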
Tool 3: Decision profile mapping (so you can predict your default bias)
If a leadership group over-indexes on one decision style, the blind spots are predictable:
lots of momentum → weak evidence checks
lots of risk control → slow decisions, innovation stalls
lots of harmony → unresolved trade-offs and “nice agreement”
Mapping decision styles makes the pattern visible before it becomes culture.
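A minimal over-index check makes the point. The 50% threshold and the style labels are assumptions for illustration:

```python
from collections import Counter

styles = ["momentum", "momentum", "momentum", "harmony", "risk-control"]

counts = Counter(styles)
dominant, n = counts.most_common(1)[0]
if n / len(styles) > 0.5:
    # One style owns the room; its blind spots are now predictable.
    print(f"Over-indexed on '{dominant}': its predictable blind spots apply.")
```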
Tool 4: An audit trail that proves input shaped the outcome
This is the anti “consultation theatre” mechanism:
who was heard
what themes emerged
what changed in the decision as a result
where the gaps were
If you can’t show this, you’re relying on trust. And trust is fragile when people feel ignored.
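In sketch form, the trail is just an append-only log where every input records what it changed; a null there is the tell. The schema below is an assumption for illustration, not Wizer’s data model:

```python
import datetime
import json

trail = []

def record_input(contributor: str, theme: str, changed_in_decision: str | None) -> None:
    trail.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "contributor": contributor,
        "theme": theme,
        # What actually moved in the decision because of this input.
        "changed_in_decision": changed_in_decision,
    })

record_input("Dev (frontline)", "rollout risk", "Phased launch instead of big-bang")
record_input("Ben (delivery)", "timeline", None)  # heard, but changed nothing: say so

print(json.dumps(trail, indent=2))
# If changed_in_decision is null everywhere, that was consultation theatre.
```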
The bridge: from manual checklists to instrumentation
If you’re relying on the checklist alone, you’re depending on people to:
notice the blind spot
name it in public
slow the group down
push back against hierarchy
That’s fragile — especially once cognitive drift has narrowed what’s considered “acceptable” thinking.
Instrumentation flips it. It makes missing lenses visible by default.
That’s the category Wizer Technologies sits in.
Wizer is built to surface decision blind spots before the decision locks:
it maps decision styles (Decision Profiles)
it shows panel strength (strength indicator)
it recommends who’s missing (recommendation engine)
it helps teams evidence how input influenced outcomes
Not to include everyone. To include the right people — early enough for it to matter.
A simple way to use this tomorrow
If you want a lightweight ritual (no new software required), do this in your next decision meeting:
Run the checklist (10 questions)
Do the evidence map (know / think / ignoring)
Do the second-order checkpoint (what breaks, who pays later)
Run the incentives scan (who wins/loses, who carries risk)
If that feels like a lot: good. That’s the point. Manual decision quality is a heavy lift. Systems scale it.
The Blind Spot Checklist (steal this for your next meeting)
Before you lock a decision, run these ten questions:
Who will be impacted but isn’t represented in the room?
Who has to deliver this and hasn’t been asked?
What evidence are we over-trusting?
What evidence are we ignoring because it’s inconvenient?
What assumption are we treating as a fact?
What would make this decision fail in the real world?
What breaks downstream if we’re wrong?
What incentive is shaping this conversation?
Who disagrees privately but isn’t saying it out loud?
If we had to defend this in 6 months, what would we wish we’d done differently?
If you can’t answer half of these, you’re not “aligned.” You’re under-informed.
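If you want the “half of these” test to be more than a feeling, a lightweight runner works. The questions come straight from the list above; the lock rule is the article’s own threshold, and everything else is illustrative:

```python
QUESTIONS = [
    "Who will be impacted but isn't represented in the room?",
    "Who has to deliver this and hasn't been asked?",
    "What evidence are we over-trusting?",
    "What evidence are we ignoring because it's inconvenient?",
    "What assumption are we treating as a fact?",
    "What would make this decision fail in the real world?",
    "What breaks downstream if we're wrong?",
    "What incentive is shaping this conversation?",
    "Who disagrees privately but isn't saying it out loud?",
    "If we had to defend this in 6 months, what would we wish we'd done differently?",
]

def ready_to_lock(answers: dict[str, str]) -> bool:
    # A decision "locks" only when at least half the questions have real answers.
    answered = sum(1 for q in QUESTIONS if answers.get(q, "").strip())
    return answered >= len(QUESTIONS) / 2

print(ready_to_lock({QUESTIONS[0]: "Frontline staff in two regions"}))  # False
```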
Our suggestion: start by mapping Decision Profiles across your organisation.
FAQs
What are blind spots in decision-making?
Blind spots are predictable gaps in a decision process — missing voices, skewed evidence, or oversimplified complexity — that make teams feel confident while increasing the chance of error.
What are the main causes of blind spots in team decisions?
The biggest causes are social bias (who gets heard), information bias (what counts as evidence), and capacity bias (how much complexity the team can handle under pressure).
What tools help spot blind spots in team decisions?
The most useful tools are panel strength assessments, decision profile mapping, and recommendation engines that identify who is missing from the decision group.
How do you prevent blind spots before making a decision?
Design the decision group intentionally: balance decision styles, include required experience lenses, check for homogeneity, and force an evidence + second-order impacts checkpoint before locking the decision.





