AI Can Simulate Your Customers. Your Decision Room Is a Different Matter Entirely.
- Kylee Ingram
- Mar 18
- 4 min read

There's a striking development in market research right now. Companies are replacing human survey respondents with AI-generated digital replicas — trained on real behavioural data — and getting results that match known findings with up to 95% accuracy. CVS Health is already using it. Gallup is piloting the approach for policy research. What once took months now takes minutes.
That's a reasonable use of AI. Simulating consumer preferences is essentially a pattern recognition problem at scale, and pattern recognition is what these systems do well.
But somewhere in the enthusiasm, a category error is creeping in. Companies are now applying the same tools, from AI synthesis to AI-generated stakeholder perspectives, to the decisions that shape their organisations. Those decisions are not the same kind of problem.
Two Very Different Types of Question
Asking what customers want from a product is a research question. The answer exists somewhere in behavioural data, and AI can surface it faster and cheaper than traditional methods.
Deciding whether to enter a new market, restructure a business unit, or respond to a competitor is a different category of question entirely. The answer doesn't exist in data. It has to be constructed — through negotiation, through judgement about what matters and what doesn't, through an understanding of relationships, history, and future consequences that no model has access to.
Human decisions of consequence require human rooms. Not because humans are infallible, but because the people in that room will carry the decision forward. They need to have genuinely wrestled with it. They need to own it. Accountability, commitment, and the ability to adapt as circumstances change are properties of people, not outputs. An AI can tell you what the data suggests. It cannot bear the weight of what happens next.

The Structure of the Decision Room Is the Problem
The biggest risk in high-stakes organisational decisions isn't bad data. It's the structure of the room where the decision is made.
Dr Juliet Bourke's research on cognitive diversity demonstrates that when decision groups lack range in how people frame problems, assess risk, weigh evidence, and consider the impact on people, error rates climb by around 30% compared with groups that have genuine cognitive diversity. That gap doesn't close by giving a narrow group better information. It closes by changing the composition of the group itself.
Decision rooms narrow over time through entirely predictable mechanisms. Social bias means people gravitate toward those who think like them. Under pressure, organisations default to fast, outcome-focused thinking. The Guardian who would have asked the hard risk question, the Analyzer who would have interrogated the evidence, the Collaborator who would have mapped the stakeholder consequences — they either aren't in the room, or they read the dynamic and stay quiet.
This is the structural failure that sits beneath a surprising number of organisational missteps. Not missing data. Missing perspectives, and a room that wasn't designed to surface them.
What AI Cannot Compensate For
AI can help generate perspectives that are absent from a room. A well-constructed prompt can surface a risk lens or a process lens that nobody on the panel naturally brings. Dr Bourke has acknowledged this directly — AI can stand in for some of what's cognitively missing.
But it cannot replace what happens when a room of humans negotiates a decision. The senior leader who pushes back and needs to be heard. The frontline voice that reframes the whole problem. The person who spots the implementation flaw that everyone else has missed because they've actually lived it. The relationship dynamics that will determine whether the decision sticks or quietly dies in execution.
Human decisions about people, strategy, and the future of an organisation need human integration. The synthesis, the trade-offs, the accountability — these cannot be delegated to a system that will not be present when the consequences arrive.
The Question Executives Should Be Asking
Organisations investing heavily in AI capability will gain real advantages in speed and synthesis. The question is whether they're investing equally in the decision infrastructure that sits on top of that capability.
Decision infrastructure means knowing, concretely, who is in your critical decision rooms, what cognitive and experiential range they collectively bring, and where the gaps are — before those gaps become errors. It means treating the composition of decision panels as something that can be designed, not just defaulted into based on availability or hierarchy.
Wizer's Decision Profiles take around four minutes per person and give each individual a map of their decision-making strengths and blind spots. Panel Strength scores any group in real time — showing where it's strong, where it's exposed, and how well the mix matches the decision at hand. The Recommendation Engine surfaces specific people across the organisation who would close the gaps, before the decision is locked in.
The tools companies are building to generate insight are impressive. The decisions those tools inform will still be made by people, in rooms, under pressure. How those rooms are structured is the variable that determines whether the insight gets used well — or gets filtered through the same narrow lens it always has.
That's the question worth asking before the next important decision gets made.
Wizer is a decision science platform. Explore your organisation's decision DNA at wizer.business.