Tesla’s Former Engineer Says Their AI Could Be Lethal on UK Roads—Inside the Explosive Warning
Traffic on the A406 North Circular moves with its typical tense rhythm on a rainy London afternoon. Vans scuttle ahead, cyclists switch lanes, and pedestrians hover at intersections, all instinctively calculating distance and speed. Lukasz Krupski, a former Tesla employee, worries about this erratic choreography because he believes the company’s AI-powered driving system may not be ready for such roads. Listening to his warning, you get the sense that the threat he describes is not hypothetical but part of daily life.
Krupski did not leave quietly. After working at Tesla, he leaked roughly 100 gigabytes of internal data, including customer complaints about software behavior and abrupt braking. These were not abstract engineering anomalies; they were real drivers recounting genuinely frightening experiences. The software and hardware simply weren’t ready, he later said publicly. As such claims surface, the line between technological ambition and human risk becomes harder to draw.
| Category | Details |
|---|---|
| Company | Tesla |
| Technology | Autopilot and Full Self-Driving (AI-based driver assistance) |
| Whistleblower | Lukasz Krupski, Former Tesla Employee |
| Core Concern | AI and hardware not ready for safe public road deployment |
| Reported Issues | Phantom braking, customer safety complaints |
| Government Context | UK Automated Vehicles Bill under development |
| Investigations | U.S. Department of Justice probe into Tesla Autopilot claims |
| Tesla’s Position | CEO says Tesla has “best real-world AI” |
| Location of Concern | UK public roads and pedestrian environments |
| Reference | https://www.bbc.com/news/technology-67591311 |
Despite its name, Tesla’s Autopilot has never operated entirely on its own. Drivers must keep their hands on the wheel, ready to intervene. But language shapes perception. Inside a Tesla cabin, with its glowing screen and minimalist dashboard, there is a subtle temptation to place more faith in the machine than is perhaps wise. Branding alone may have pushed drivers toward overconfidence.
Krupski’s most unsettling argument, however, had nothing to do with drivers. His focus was everyone else.
He claimed that children and other pedestrians were, in effect, part of an experiment every time they stepped onto the pavement. The phrase sticks in your head. Experiments belong in labs, not on suburban streets with school crossings and bus stops. Who consented to this experiment is a question regulators, investors, and drivers may not yet have fully confronted.
Tesla’s supporters cite statistics. The company claims that cars using Autopilot have fewer accidents per mile than average drivers. But those figures come from Tesla itself, and independent confirmation has been scarce. Investors are betting billions of dollars that the technology is maturing. Belief, however, is not the same as certainty.
One particularly unnerving phenomenon drivers have reported is “phantom braking”: vehicles abruptly slowing or stopping in response to obstacles that aren’t there. On a busy UK motorway, surrounded by lorries and impatient commuters, it adds a layer of uncertainty that software updates may never fully remove.
Meanwhile, the UK government is drafting new laws to legalize autonomous vehicles. Politicians characterize it as modernization, a necessary development in transportation. However, there is a discernible conflict between engineering caution and political optimism. Whether regulatory timelines correspond with technological realities is still up for debate.
Tesla, meanwhile, maintains its public confidence. Elon Musk, the CEO, has stated repeatedly that the company has the most sophisticated real-world AI. Watching Musk pace stages under bright lights, speaking at event after event, one senses sincere conviction. But history suggests that conviction does not eliminate risk.
There has always been a peculiar mixture of excitement and anxiety surrounding autonomous driving. Years ago, videos of early Tesla demonstrations showed cars seemingly cruising roads with ease. According to later testimony, parts of those demonstrations were meticulously staged. The revelation did not halt progress, but it raised questions that lingered beneath every new announcement.
For engineers at companies like Tesla, the ethical tension must be considerable. They are building systems meant to perform better than human drivers while knowing those systems remain imperfect. Krupski himself reported having trouble sleeping once he grasped the consequences of what he had witnessed. That kind of personal turmoil rarely appears in corporate earnings reports.
It becomes evident how difficult driving is when you’re strolling down a busy street and listening to the traffic surge and pause. Humans rely on intuition, subtle body language, and eye contact. Cameras and code are essential to machines. It might take longer than most people think to close that gap.
Yet the momentum continues. Autonomous features expand. Cars are updated overnight. As drivers grow accustomed to the assistance, they progressively relinquish more control. Society, it seems, has already set out on a course that may be difficult to change.
Whether Tesla’s AI will ultimately prove safer than human drivers is one of the defining questions of this technological age. For now, though, the warnings of former insiders reverberate in the background, soft but unabated, like the sound of tires on damp pavement, a reminder that progress and peril frequently go hand in hand.