What happens when a car’s AI decides the speed limit is more like a suggestion? That’s what federal regulators are asking Tesla after the revival of “Mad Max” mode in its Full Self-Driving system, a feature critics say encourages driving that violates traffic laws.

The NHTSA has confirmed it is investigating nearly three million Tesla vehicles equipped with FSD, following dozens of reports of traffic violations and crashes. According to the agency, the technology has “induced vehicle behavior that violated traffic safety laws,” including instances where Teslas ran red lights, entered intersections against signals, and drove against the flow of traffic. Six of these reports involved collisions after a Tesla, with FSD engaged, continued through a red light.
Mad Max mode, first introduced in 2018 for Autopilot and reintroduced this month in FSD Version 14.1.2, is designed to change lanes more quickly and travel at higher speeds than the system’s “Hurry” profile allows. Tesla’s own release notes tout “higher speeds and more frequent lane changes,” while social media clips show vehicles rolling stop signs and accelerating well above posted limits. One driver reported, “We are going 75 in a 50, I feel like we are racing down the street right now.”
From an engineering point of view, Mad Max mode is an aggressive calibration of Tesla’s lane-change and speed-selection algorithms. FSD makes decisions using probabilistic models trained on massive datasets of driving scenarios. Aggressive profiles reduce hesitation thresholds, shorten following distances, and raise permissible acceleration rates. In simulation studies of autonomous driving, aggressive modes increase average travel speed by 40% and cut waiting times by more than half, but they also increase the potential for conflicts with human drivers in mixed traffic.
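To make the idea of “calibration” concrete, here is a minimal sketch of what such a driving profile might look like as a parameter set. The class, parameter names, and all numeric values are invented for illustration; they are not Tesla’s actual tuning.

```python
from dataclasses import dataclass

@dataclass
class DrivingProfile:
    """Hypothetical tuning knobs for a lane-change/speed policy."""
    hesitation_threshold: float  # minimum confidence required before committing to a maneuver
    following_distance_s: float  # target time gap to the lead vehicle, in seconds
    max_accel_ms2: float         # permitted longitudinal acceleration, m/s^2
    speed_margin: float          # multiplier applied to the posted speed limit

# Illustrative values only.
HURRY = DrivingProfile(hesitation_threshold=0.8, following_distance_s=1.5,
                       max_accel_ms2=2.0, speed_margin=1.05)
MAD_MAX = DrivingProfile(hesitation_threshold=0.6, following_distance_s=1.0,
                         max_accel_ms2=3.5, speed_margin=1.15)

def target_speed(profile: DrivingProfile, posted_limit_mph: float) -> float:
    """Speed the planner aims for under a given profile."""
    return posted_limit_mph * profile.speed_margin
```

The point of the sketch is that an “aggressive mode” need not be a different algorithm at all: the same planner, fed a lower confidence bar and a larger speed margin, will commit to maneuvers sooner and cruise above the posted limit.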
The AI that underlies FSD works much like a large language model, using statistical reasoning to predict the next driving action from recent inputs. While this can optimize flow in predictable conditions, it struggles with judgment under uncertainty. As documented in other autonomous systems, aggressive profiles can misinterpret cues, such as assuming an oncoming car will turn when it won’t, leading to abrupt braking or unsafe maneuvers. These are not theoretical risks: incidents involving other AVs, including Cruise and Waymo vehicles, have shown how misjudgments at high speed or in dense traffic can end in collisions.
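The next-action analogy can be sketched in a few lines: a learned policy scores candidate maneuvers, and the profile’s confidence threshold decides whether to act on an uncertain prediction. The actions, scores, and thresholds below are invented for illustration and do not reflect any real system.

```python
import math

def softmax(scores):
    """Convert raw preference scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a policy might assign to candidate maneuvers
# given recent sensor inputs (higher = more preferred).
actions = ["yield", "proceed", "change_lane"]
scores = [0.4, 1.2, 1.0]

probs = softmax(scores)
best_prob = max(probs)
best_action = actions[probs.index(best_prob)]

# A cautious profile refuses to commit when the model is uncertain;
# an aggressive profile lowers the bar and acts anyway.
CAUTIOUS_THRESHOLD = 0.6
AGGRESSIVE_THRESHOLD = 0.35

cautious_choice = best_action if best_prob >= CAUTIOUS_THRESHOLD else "hesitate"
aggressive_choice = best_action if best_prob >= AGGRESSIVE_THRESHOLD else "hesitate"
```

Here the top prediction carries only about 44% probability: the cautious profile hesitates and defers to the driver, while the aggressive profile proceeds anyway. That gap, acting on a marginal prediction, is exactly the failure mode the incident reports describe.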
Regulators are especially concerned about how these aggressive modes interact with U.S. traffic laws. While Tesla continues to say that FSD is a Level 2 driver-assistance system requiring constant human supervision, the branding of “Full Self-Driving” and “Mad Max” can blur user expectations. Insurance industry experts say that confusion complicates liability assessments in cases where the software’s behavior directly contributes to an accident. “If drivers misunderstand what these systems can do, the line between human error and product liability starts to disappear,” said one senior underwriting executive.
The NHTSA’s investigation also reflects broader safety-oversight concerns. In the U.S., regulation too often comes after incidents have happened, rather than setting boundaries on what semi-automated systems can and cannot do. European regulators, by contrast, have imposed much stricter limits on automated lane changes and speed settings. If the NHTSA finds that Mad Max mode encourages illegal driving, Tesla could face another software recall, as it did in 2022 when the company was forced to eliminate a “rolling stop” feature from 50,000 vehicles.
Behavioral adaptation research adds another layer to the debate. Studies in VR-enabled simulations of Level 4 autonomous driving have found that passengers quickly adapted to aggressive driving modes, showing a greater willingness to let the system operate without intervention. While this can improve traffic efficiency in all-AV environments, in mixed traffic it risks over-reliance on automation and reduced situational awareness. In practice, drivers may be slower to correct the system when it errs, an especially dangerous prospect at elevated speeds.
For Tesla, Mad Max mode’s reintroduction represents confidence in FSD development and an answer to driver requests for faster, more decisive movement through traffic. For regulators and safety experts, it raises urgent questions about how to balance innovation against public safety, the limits of AI judgment, and how current oversight frameworks can keep up in a world where software updates can change how cars behave overnight.

