Humans Are Terrible at Driving: Why Strategy Leaders Must Rethink the Path to Automation and the Trust Gap
- Matthew Bosworth
- Jul 8
- 5 min read

By Matthew Bosworth, Managerial Economist & Strategy Advisor
When Perception Becomes the Problem
Having recently acquired an EV equipped with 'hands off' driving capabilities, and having heard RJ Scaringe's promise that Rivian will be a market competitor with the introduction of full 'eyes off' driving before 2025 is over, I got to thinking not just about the actual safety behind the wheel, but also about the public's perception of it.
Scenario: a drunk driver veers off the road and kills three people. A bit heavy for a Tuesday afternoon, and though it’s tragic, it happens all the time around the world. Maybe it will make a local headline. But if a Tesla on Full Self-Driving mode scrapes a curb or a Waymo minivan pauses at an intersection, the world lights up with moral panic.
This asymmetry in public perception and reaction isn’t just psychological: it’s strategic. It matters to founders, regulators, insurers, investors, and indeed any operator thinking about the next wave of automation.
There’s an uncomfortable truth we all must acknowledge: statistically speaking, humans are terrible drivers, and automation is already a far safer alternative. The question, then, is why public sentiment, regulatory momentum, and media coverage do not align with the mathematical reality. As strategists, we need to understand the 'why' as much as the 'how', and build frameworks to navigate and capitalize on this gap between perception and empirical truth.
The Data Behind Humans Being Awful at Driving
In the U.S. alone, roughly 41,000 people lost their lives in vehicle-related accidents in 2023, and that figure does not appear to be dropping dramatically any time soon. For context, that is comparable to annual deaths from gun violence and several orders of magnitude more than deaths in commercial aviation.
Contrast that frequency and volume of accidents with autonomous driving:
Waymo, a relatively new entrant to the autonomous market, has reported zero at-fault fatalities across more than 8 million fully autonomous miles. Further, Tesla’s quarterly Vehicle Safety Report shows accidents occurring up to 90% less frequently when Autopilot is engaged than when a human drives unassisted.
The human-driver fatality rate is roughly 1.35 deaths per 100 million miles driven, and an estimated 94% of those crashes involve human error. Questions for another day, perhaps:
"Would more people get behind the wheel fully intoxicated if they then leaned into the statistic that a self-driving car is much safer? "
"Would inexperienced drivers actually benefit from automation or would that simply kick the can of the issue down the road as it takes away from actual driver experiece?"
The conclusion is clear: based on averages and growing empirical data, the machine is already outperforming the human by a wide margin.
So, why the resistance?
The Trust Gap – Why Human Behavior Leans Toward Trusting Humans More Than Code
Availability Heuristic
People overweight what they see and recall most easily, especially sensationalist stories highlighting results that are out of the ordinary.
One AI-related fatality, like the Uber autonomous test crash in 2018, dominates headlines for weeks. The 110+ deaths caused by human drivers every single day in the U.S.? Almost entirely invisible in the media.
Bias of Moral Intuition
We instinctively accept “predictable human failure” but recoil at failure from a machine, especially one designed and built by a corporation with a logo, data, and a CEO to point fingers at. Our moral calculus is far more stringent when a system is engineered: we are far more likely to blame human error in the design architecture than human error behind the wheel.
The Illusion of Control
Drivers feel in control behind the wheel, even if they are objectively not safer.
Handing over that control, even to an ostensibly safer system, triggers anxiety and a feeling of lost control, even though, factually, a competent human can take over the wheel at any time. This is the “autonomy paradox”: people fear the loss of control far more than they fear being wrong while in control.
How Firms and Governments Can Adapt
Managing Narrative Risk
For automation-focused companies, investment in narrative risk management is now as critical as investment in the safety of the automated systems they build.
Data visualization can reframe public understanding and perspective through compelling visual storytelling, breaking complex results down into easily digestible and manageable infographics. Companies like Waymo are pioneering public ride-along programs not just to collect data, but to build emotional familiarity.
Strategy: take control of the optics of rare events. The first fatality involving a robotaxi has to be handled in a similar way to a NASA or SpaceX mission failure: with transparency, humility, and an over-delivery of the case-by-case facts. Just as airline safety evolved through iterative improvement (and early disasters), so too must autonomous mobility. Acceptable imperfection is the bridge to societal benefit.
Mathematically Modelling 'Counts and Amounts' to Quantify Risk
Insurance companies are likely to adapt too, by changing the very way in which they quantify risk. Risk, in this sense at least, is usually modelled by looking at two components (sketched in code just below the list):
Binomial distributions for frequency (claim counts)
Gamma or Log-normal distributions for severity (claim amounts)
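To make 'counts and amounts' concrete, here is a minimal frequency-severity Monte Carlo sketch in Python. Every parameter (the policy count, the per-policy claim probability, the Gamma severity shape and scale) is an illustrative assumption, not a figure from any real insurer’s book:

```python
import numpy as np

rng = np.random.default_rng(42)

n_policies = 10_000                   # hypothetical book of AV policies
p_claim = 0.03                        # assumed per-policy annual claim probability
sev_shape, sev_scale = 2.0, 4_000.0   # assumed Gamma severity parameters (USD)
n_years = 5_000                       # number of simulated policy years

total_losses = np.empty(n_years)
for i in range(n_years):
    # Frequency: a Binomial claim count across the book, as in the list above.
    n_claims = rng.binomial(n_policies, p_claim)
    # Severity: Gamma-distributed claim amounts, summed into an aggregate loss.
    total_losses[i] = rng.gamma(sev_shape, sev_scale, size=n_claims).sum()

print(f"Expected annual loss:  ${total_losses.mean():,.0f}")
print(f"99th-percentile loss:  ${np.percentile(total_losses, 99):,.0f}")
```

Swapping the Binomial count for a Poisson, or the Gamma severity for a log-normal, is a one-line change; the point is that the full aggregate-loss distribution, not a single average, is what drives pricing and reserves.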

Or, even more fun, my main man Thomas Bayes, famous for being the mathematician who wrote the equation for quantifying risk: 'what is the probability of x in light of y?'
(who, coincidentally, worked on that equation back in the 1700s in a house next door to my own in Royal Tunbridge Wells, UK!)

Bayes’ theorem, in its simplest form P(x | y) = P(y | x) × P(x) / P(y), provides the mathematical backbone for updating insurance risk assessments as new information comes in and patterns emerge. Traditional insurance risk models lean on regression analytics (and goodness-of-fit measures like R²), though modern risk pricing often builds Bayesian layers into those models to make them more adaptive and realistic.
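As a minimal sketch of that updating step, here is a conjugate Beta-Binomial update in Python. Everything here is an assumption for illustration: the Beta prior parameters, the exposure unit, and the incident counts are invented, not real fleet or insurer data.

```python
from scipy import stats

# Prior belief about incidents per exposure unit (say, per 1,000 autonomous
# miles -- a hypothetical unit). Parameters chosen so the prior mean is 0.002.
alpha_prior, beta_prior = 2.0, 998.0

# New evidence arrives: an assumed 3 incidents over 5,000 new exposure units.
incidents, exposures = 3, 5_000

# Beta-Binomial conjugacy: the posterior is again a Beta distribution,
# so "the probability of x in light of y" is available in closed form.
alpha_post = alpha_prior + incidents
beta_post = beta_prior + (exposures - incidents)

prior_mean = alpha_prior / (alpha_prior + beta_prior)
post_mean = alpha_post / (alpha_post + beta_post)
ci_lo, ci_hi = stats.beta.ppf([0.025, 0.975], alpha_post, beta_post)

print(f"Prior incident-rate estimate:     {prior_mean:.5f}")
print(f"Posterior incident-rate estimate: {post_mean:.5f}")
print(f"95% credible interval:            ({ci_lo:.5f}, {ci_hi:.5f})")
```

The insurer whose posterior tightens fastest can price new risk pools most confidently, which is exactly the market-share race described next.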
Essentially, whoever gets comfortable with the lower levels of risk first will see a huge increase in market share. The obvious caveat is the fallout from the recently passed 'Big Beautiful Bill', and how it affects, in real terms, the USA / China race for EV automation supremacy.
Winning the future, and safeguarding the corporations who are the main players, isn’t just about building the best-performing system. It’s about building the most trusted one, and having the world around it adapt accordingly.
The first wave of winners won’t be the best coders; they will be the best translators between machine intelligence and human emotion. Whoever wins that race will take the competitive advantage, the market share, and the strategic pricing edge.
Matthew Bosworth is a managerial economist and strategy advisor specializing in behavioral economics, technology adoption, and future-of-work transformation.
His primary work focuses on building strategic advantage that is sustainable on an ongoing basis, looking at the intersection of innovation and economic value creation in the strategies he builds.