AI vs. Human Judgment: Decision-Making When Navigation Data Clashes with Theory-Based Reasoning
Modern navigation technology has transformed the way we drive, with AI-powered systems like Waze providing real-time route optimization based on dynamic traffic data. Traffic alerts from police departments and news outlets add another layer of information by warning of unexpected road hazards. However, AI-driven recommendations are only as good as the data they have at the moment, and sometimes making the best decision requires human intervention. This article explores my recent experience with AI vs. human judgment in decision-making: a case where navigation data clashed with my theory-based reasoning. In this scenario, my mental model of traffic patterns and route availability enabled me to make a more robust decision than Waze’s AI-driven recommendation would have.
When Navigation Data and Human Judgment Collide
Earlier this week, my wife and I were returning home from an event. Soon after I merged onto the freeway, my wife told me that the local police department had sent a text message warning of an accident near our house: “Eastbound is closed at [the last controlled intersection before our house] due to a collision. Please use alternate routes.” Seeking more data, we checked Waze for further instructions. Waze showed the accident, yet, surprisingly, the community-driven GPS navigation app still instructed us to stay the course. My wife and I disagreed on how to act on this recommendation.
Making the Call: A Decision Tree for Navigating Around an Accident
My wife wanted to stick with Waze. I was sympathetic to this position because I have suffered in the past from ignoring Waze’s commands to change my route. In this case, however, Waze was telling us to stay the course when I feared that doing so could land us in a huge traffic mess. I made the case for the conservative decision: take a different route that I knew would cost about five extra minutes. Given the risks I saw in driving into the scene of the accident, I concluded that five extra minutes was a small price to pay to avoid a potentially massive delay, first from sitting in the traffic jam and then from taking a circuitous route around the accident to get home. In my head, I carried the following (approximate) decision tree:

The expected values of my decision tree summarize the conclusions in my head: staying the course would “cost” us an extra 11 to 12 minutes. I actually felt a LOT more certain at the time, but I added probabilities to the decision tree in deference to my late Decision Analysis professor Ron Howard, who always warned against building certainty into a decision. The range of potential outcomes implied by those probabilities also demonstrates why I felt so much more comfortable diverting than taking a chance on Waze’s recommendation to stay the course. (Note that the 5% chance that the diversion would cost more than 5 minutes accounts for some outlier case that I did not and could not consider at the time; I aggregated that cost to an “average” of 10 minutes.)
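For readers who want to see the arithmetic, here is a minimal sketch of the expected-value calculation. The branch probabilities and minute costs for staying the course are illustrative assumptions chosen only to land near the rough 11-to-12-minute figure described above, not the numbers from my actual tree; the diversion branch uses the roughly five-minute detour and the 5% outlier averaging 10 minutes mentioned in the text.

```python
# Illustrative expected-value calculation for the two routing options.
# The "stay" branch numbers are assumptions for the sake of the sketch;
# only the diversion figures (≈5 minutes, with a 5% outlier at 10) come
# directly from the discussion above.

def expected_value(branches):
    """Expected extra travel time, given (probability, extra_minutes) branches."""
    return sum(p * minutes for p, minutes in branches)

# Stay the course: assume a high chance of getting stuck behind the accident.
stay = [
    (0.75, 15),  # stuck in the jam, then rerouted around the closure (assumed)
    (0.25, 0),   # accident cleared or passable by the time we arrive (assumed)
]

# Divert now: a nearly certain ~5-minute detour, with a small outlier case.
divert = [
    (0.95, 5),   # the known alternate route
    (0.05, 10),  # "average" cost of some unforeseen outlier
]

print(f"Stay the course: {expected_value(stay):.1f} extra minutes expected")
print(f"Divert:          {expected_value(divert):.1f} extra minutes expected")
```

With these assumed numbers, staying the course works out to roughly 11 extra minutes in expectation versus just over 5 for diverting, which is the kind of gap that made the detour feel like an easy call.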
Why My Theory-Based Reasoning Led to a Better Decision
My wife deferred to my logic, and, according to Waze, we did indeed drive about 5 minutes longer than our regular route would have taken. The next day, I saw a warning on NextDoor that the accident tragically involved a fatality. Thus, I have to assume that the accident caused a complete shutdown of the intersection leading from the freeway to our house. While the outcome does not prove the quality of my decision, it did validate my theory and mental model. The police alert was a very important driver of the quality of my theory-based decision.
Waze can take time to properly assess the implications of an accident. Until enough cars running Waze (or Google Maps?) pile up near the scene, the app does not know that expected travel times have soared enough to recommend an alternate route. I was able to act faster because I knew Waze’s limitations and all the roads leading home.
Limitations of AI in Real-Life Decision-Making
This experience reminded me of an article I recently read on the limitations of AI-based computation for forward reasoning when beliefs diverge from the available data. In “Theory Is All You Need: AI, Human Cognition, and Causal Reasoning,” authors Teppo Felin and Matthias Holweg convincingly explain that “a theory-based view of cognition allows humans to intervene in the world beyond the given data—not just to process, represent, or extrapolate from existing data. Theories enable the identification or generation of nonobvious data and new knowledge through experimentation.” In this case, I had a theory about Waze’s limitations and the potential costs of getting stuck in an accident-driven traffic jam (as opposed to a jam driven purely by congestion, with no obstruction). Waze’s extrapolation from existing data was asymmetric with my beliefs. While Waze needed more data to improve its recommendation, I was able to act immediately on my theory. My experiment reinforced my belief in running Waze’s recommendations through the filter of my human brain.
Conclusion: A Case for Theory-Based Decision-Making
My experience navigating around the unfortunate accident highlights a critical lesson in decision-making: while AI and data-driven models like navigation apps provide powerful tools, they have limitations. Their effectiveness depends on the data available at the moment, often making them reactive rather than proactive. In contrast, human decision-making—grounded in theory-based reasoning—allows us to anticipate challenges, account for unseen risks, and intervene when necessary.
As AI continues to shape decision-making across industries, this case serves as a reminder that data alone is insufficient. The best decisions emerge when we blend algorithmic insights with human intuition, experience, and structured reasoning. Understanding the limitations of AI, recognizing when to challenge its recommendations, and applying sound decision analysis can lead to smarter, more reliable outcomes—whether on the road or in high-stakes environments.
So basically…go with your gut?? 🙂