When AI Should Stay Out of the Driver's Seat: Why We Chose People Over Algorithms

Aug 17



The pushback was immediate and passionate.

When we announced at Wizer that we'd be keeping critical decisions in human hands rather than automating them with AI, the reaction from some corners was swift: "You're being left behind. Every business is automating decisions now. Why aren't you?"

It's a fair question. But we know we made the right call, and we enjoyed exploring why in this week's conversation with Jing Hu: Humans v AI - Why Human Judgement Still Belongs to Humans.


The Seductive Promise of AI Decision-Making

The appeal is obvious. AI can process vast amounts of data instantly, identify patterns humans might miss, and make decisions without fatigue or emotional bias. For many business processes, this is transformative. But there's a critical distinction between tasks that can be automated and decisions that should be.

As Jing Hu points out in our conversation, "AI is a great brainstorming partner — but a poor decision-maker." The reason isn't just about current technology limitations; it's about the fundamental nature of business decisions themselves.

Interview with Jing Hu from 2ndOrderThinkers


What Happens When Machines Make Human Decisions

Recent research reveals some troubling patterns when we hand over decision-making to AI:

The Originality Paradox: Studies show that while AI can generate numerous ideas quickly, they tend to cluster around similar themes. The breakthrough innovations — the ideas that truly matter — typically come from the edges, from human intuition and cultural context that algorithms can't access.

The Efficiency Trap: Yes, AI makes processes faster. But speed without judgment often leads to a race to the bottom. When efficiency becomes the primary metric, we lose the nuanced thinking that separates good decisions from great ones.

The Flattening Effect: AI tends to converge on "optimal" solutions based on existing patterns. This flattening effect can erode the diversity of thought and cultural perspectives that drive innovation.

Perhaps most concerning is emerging research showing that AI systems increasingly prefer content generated by other AI systems. When decision-making AI starts optimising for what other AI systems produce, we create closed loops that move further and further away from human needs and values.

Why We Never Went Down That Path

At Wizer, we have believed in people's nuance, imagination, and need to make decisions together from the very start. While others were rushing to automate decision-making, we were asking different questions: What makes human judgment irreplaceable? How do teams actually make great decisions together?

We see this human-centred approach validated daily in our work:

  • The client conversations where reading between the lines reveals the real challenge isn't what was initially described

  • The market opportunities that emerge from cultural insights no algorithm could detect

  • The team dynamics where collective wisdom produces solutions none of us could have reached individually


This isn't just philosophy — it's practical business reality.

Why Human Judgment Still Wins

This isn't about being anti-technology. We use AI extensively at Wizer — for research, content generation, data analysis, and brainstorming. But we've learned to recognise where the technology adds value and where it creates risk.

Human judgment excels in three critical areas that AI currently cannot replicate:

Contextual Understanding: Humans can read between the lines, understanding not just what is said but what isn't said. We can factor in cultural nuances, timing, and relationships that don't appear in datasets.

Values-Based Decisions: Every business decision reflects values. Which trade-offs are acceptable? What risks are worth taking? These aren't optimisation problems — they're reflections of what we believe matters.

Long-Term Thinking: AI optimises based on patterns from the past. Humans can imagine futures that have never existed and make decisions that create entirely new possibilities.

The Path Forward: AI Literacy, Not AI Dependence

The goal isn't to avoid AI but to use it wisely. As Jing Hu emphasises, real AI literacy for leaders means understanding not just what AI can do, but what it shouldn't do.

At Wizer, we've developed three principles for AI integration:

  1. Use AI to enhance human capabilities, not replace human judgment

  2. Keep humans in the loop for any decision that affects people

  3. Regularly audit AI tools for bias, drift, and unintended consequences


The Competitive Advantage of Human-Centred Decisions

Here's what surprised us most: choosing people over algorithms hasn't slowed us down — it's given us a competitive edge. While our competitors automate everything possible, we're making more thoughtful decisions, building stronger relationships, and creating solutions that genuinely serve human needs.


The pushback we received was actually a signal that we were onto something important. In a world rushing toward full automation, the companies that thoughtfully blend human judgment with AI capabilities will stand out.


Watch the Full Conversation

Our complete discussion with Jing Hu dives deeper into the research behind these insights, explores specific examples of AI's limitations in decision-making, and offers practical frameworks for AI literacy in business.


The conversation challenges both AI panic and AI hype, offering instead a grounded perspective on how to navigate this technological moment thoughtfully.

Watch the full episode of the Higher Business Series featuring Jing Hu on our YouTube channel, and subscribe to her newsletter, 2nd Order Thinkers, for research-backed insights on AI's real impacts.

