My AI Coach Made Me Run in Circles: What a Marathon Taught Me About Trusting Algorithms
Have you ever blindly trusted your GPS, only to find yourself on a bizarre detour that somehow, miraculously, gets you to your destination faster? It’s a strange feeling, isn’t it? A mix of skepticism and begrudging respect for the machine. Now, imagine that feeling stretched over 18 weeks of grueling marathon training. That’s the journey economist Tim Harford takes us on in his recent Financial Times piece, and it’s a fascinating look into our increasingly complex relationship with technology.
Harford set out to run a marathon, but he didn’t hire a coach or buy a book. Instead, he outsourced the entire process to an algorithm on his Garmin watch. What followed was a masterclass in the weird, wonderful, and sometimes unsettling power of automated decision-making. He didn’t just run far; he behaved in ways that would look utterly bizarre to an outside observer—all at the behest of a piece of software.
This isn’t just a story about running. It’s a perfect microcosm for how we interact with artificial intelligence in our daily lives, from the products we build to the services we use. It’s a tale for developers, entrepreneurs, and anyone curious about the subtle ways AI is already shaping human behavior.
The Coach in the Cloud: Outsourcing Willpower to a SaaS Platform
At its core, Harford’s Garmin coach is a specialized SaaS (Software as a Service) platform. It lives in the cloud, syncs with his device, and delivers a single, powerful service: a personalized marathon training plan. The goal was simple: run a marathon in under four hours. The method, however, was anything but intuitive.
The algorithm didn’t care about his feelings or his “perceived exertion.” It cared about one thing: data. Specifically, his heart rate. The watch would command him to run at a specific heart rate, a directive that often felt counterintuitive. On some days, it demanded a pace so slow he felt he could walk faster. On others, it pushed him to his absolute limit. The entire training plan was a black box—a series of commands without explanation.
This is where the first major lesson for tech professionals emerges. The Garmin algorithm isn’t a complex, generative machine learning model. It’s likely a much simpler, rules-based system. Yet, its effectiveness hinges on a core principle of good product design: it removes cognitive load. Harford didn’t have to wonder if he was training too hard or too easy. He just had to obey. This is the power of effective automation: simplifying complexity to the point of a single, actionable instruction.
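The core of such a system can be surprisingly small. Here is a minimal, hypothetical sketch of a rules-based pacing directive of the kind described above; Garmin's actual logic is proprietary and not public, and the zone thresholds below are purely illustrative.

```python
# A hypothetical rules-based coach: map one biometric reading to one
# actionable instruction, hiding all thresholds from the runner.

def pacing_command(current_hr: int, zone_low: int, zone_high: int) -> str:
    """Return a single command, the way the watch does."""
    if current_hr > zone_high:
        return "slow down"
    if current_hr < zone_low:
        return "speed up"
    return "hold pace"

# The runner never sees the zone boundaries or the reasoning,
# only the command:
print(pacing_command(162, 130, 145))  # prints "slow down"
```

The design choice is the point: by exposing only the instruction and never the parameters, the system reduces a training decision to a single act of obedience.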
Running in Circles: When Algorithmic Logic Meets Human Reality
The real story begins when the algorithm’s logic collides with the messy reality of the real world. Harford recounts several instances of what he calls “weird” behavior, all driven by his commitment to the algorithm’s commands.
One of the most striking examples came when his watch told him to run for exactly 55 minutes. As he neared home at the 53-minute mark, he faced a choice: stop early, or obey the machine. He chose obedience. As he describes it, for two minutes he ran “back and forth on the pavement outside my own house like a caged lunatic.” Why? Because the algorithm said so.
On another occasion, a “simple” one-hour run was scheduled. Partway through, the algorithm instructed him to slow down dramatically because his heart rate was too high. He ended up walking for a significant portion of the run. To a human coach, this might seem like a failed session. To the algorithm, it was a success—the heart rate parameter was met. This highlights a critical tension between process and outcome that anyone involved in programming or system design will recognize.
To understand the difference in these approaches, let’s compare the algorithmic method to traditional, intuition-based training.
| Training Aspect | Traditional Human Intuition | Algorithmic (Data-First) Approach |
|---|---|---|
| Pacing | Based on “feel,” perceived exertion, and past experience. Often leads to running too fast on easy days. | Strictly dictated by biometric data (e.g., heart rate). Often forces unintuitively slow paces to build aerobic base. |
| Consistency | Can be derailed by mood, weather, or a single bad run. Subjective. | Objective and relentless. The plan adapts based on data, not feelings, demanding adherence to the schedule. |
| Adaptation | Relies on the runner’s self-assessment, which can be flawed or biased. | Dynamically adjusts future workouts based on performance data from previous runs. A closed-loop feedback system. |
| Goal Focus | Focused on the end goal (e.g., marathon time), which can lead to overtraining. | Focused on the immediate process metric (e.g., stay in heart rate zone 2 for 60 mins). The goal is an emergent property of a well-executed process. |
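The “closed-loop feedback system” row above can be sketched in a few lines. This is an illustrative assumption, not Garmin's actual method: the next workout is derived purely from data the previous run generated, and the adherence thresholds and adjustments are invented for the example.

```python
# A hypothetical closed-loop planner: adapt the next session from the
# previous run's zone adherence, not from how the runner felt.

def plan_next_workout(minutes_in_zone: float, planned_minutes: float) -> dict:
    """Derive the next workout from the last run's heart-rate data."""
    adherence = minutes_in_zone / planned_minutes
    if adherence >= 0.9:    # executed as prescribed: progress the load
        return {"duration_min": planned_minutes + 5, "intensity": "same"}
    if adherence >= 0.6:    # partially in zone: repeat the stimulus
        return {"duration_min": planned_minutes, "intensity": "same"}
    return {"duration_min": planned_minutes, "intensity": "easier"}

# A walk-heavy run still counts as a success if the zone data says so:
print(plan_next_workout(minutes_in_zone=55, planned_minutes=60))
# prints {'duration_min': 65, 'intensity': 'same'}
```

Note what the function does not take as input: mood, weather, or the runner's opinion of the session. That omission is exactly the process-over-outcome tension the table describes.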
This table illustrates the fundamental shift: the algorithm doesn’t care about the story you tell yourself. It only cares about the data you generate. And by trusting this process, Harford found success. He completed the marathon in three hours and 56 minutes, beating his goal.
From Marathon Training to Market Disruption: Lessons for Startups and Innovators
So, what does a journalist’s marathon journey have to do with the world of tech, startups, and innovation? Everything.
1. The Power of the “Good Enough” Algorithm
The Garmin coach is a reminder that you don’t always need cutting-edge, generative AI to create a revolutionary product. The real innovation here is not in the complexity of the code, but in the effectiveness of the human-computer feedback loop. The software created a system of trust and accountability that successfully modified human behavior over a long period. For startups, the lesson is clear: focus on solving a user’s core problem with the simplest effective solution. A well-designed system of automation can be more powerful than the most advanced but poorly implemented machine learning model.
2. Building Trust in the Black Box
Harford trusted the algorithm because it delivered small, consistent results. Each completed run, each incremental improvement, built his confidence in the system. He didn’t need to understand *why* he was running in circles; he just needed to believe it was part of a plan that worked. This is crucial for any company developing AI-driven products. Explainability is important, but consistent, reliable performance is the ultimate trust-builder. Your users may not understand the intricacies of your code, but they will understand results.
3. Data Trumps Intuition (Sometimes)
One of the hardest parts of Harford’s journey was overriding his own instincts. As he put it, “My own judgment was hopeless. The algorithm knew better.” This is a paradigm shift sweeping through every industry. From marketing and finance to HR and logistics, data-driven models are consistently outperforming human experts. The challenge for professionals is not to fight this trend, but to learn how to collaborate with these systems. The future belongs to those who can leverage the power of algorithmic insights without completely abdicating their own critical thinking.
The Finish Line: A New Partnership with AI
Tim Harford’s marathon story is more than just an amusing anecdote. It’s a glimpse into our shared future. We are all, in some way, learning to run alongside algorithms that guide, nudge, and occasionally confuse us.
The narrative isn’t one of man versus machine, but of man *with* machine. He outsourced the tedious calculations and the complex planning, freeing up his own mental energy to focus on the simple, difficult act of putting one foot in front of the other. The algorithm provided the map; he still had to run the race.
For those of us building the next generation of software and AI systems, this is our challenge and our opportunity. How can we create tools that don’t just automate tasks, but empower users? How do we build systems that earn trust, even when their logic is opaque? Harford’s story shows that when we get it right, the result isn’t just efficiency, it’s achieving things we never thought possible. Even if it means looking a little crazy in the process.