
Uber’s new Tesla app integration may make it easier for the company to spot when drivers lean on “Full Self-Driving” during paid rides, while leaving all of the legal risk on the person behind the wheel.
Story Snapshot
- Uber drivers are increasingly using Tesla’s “Full Self-Driving” (FSD) during shifts, even though it remains a Level 2 driver-assistance system requiring constant human supervision.
- Uber’s integration with Tesla’s in-dash navigation streamlines pickups and drop-offs, but it also makes FSD use during rides more seamless and potentially more detectable.
- Crashes and public incidents, including a reported Las Vegas-area crash involving an Uber passenger, are renewing scrutiny of semi-automated driving in commercial passenger service.
- Regulators continue to probe FSD behavior issues, while drivers face the real-world dilemma of fatigue relief versus liability and deactivation risk.
Uber’s Tesla Integration Makes FSD Use Harder to Hide
Uber’s recent integration with Tesla’s navigation display puts ride details directly on the vehicle’s screen, reducing the friction between accepting a trip and letting the car handle more of the driving. Drivers describe a simple flow: load the Uber route into Tesla’s navigation, then engage FSD. That convenience is the productivity draw, but it also supports the headline claim that “your boss knows”: the trip now runs through the car’s own interface.
Uber and Lyft policies still put responsibility where it has always been: the driver must remain in control, keep hands on the wheel, and be ready to intervene. The practical problem is that integration can normalize automation as part of a routine shift. If a ride ends badly (an abrupt maneuver, a missed turn, a crash), the platform can point back to its policy language while the driver is left answering to passengers, insurers, and possibly law enforcement.
FSD Is Still Level 2, and That Legal Reality Matters
As of 2026, Tesla’s FSD offers advanced features and driver monitoring, but it remains classified as SAE Level 2 automation, meaning the human must supervise the system at all times. That detail is not technical trivia; it shapes liability. Commercial driving with passengers is no sandbox for “learning moments,” especially in dense pickup zones, airport loops, construction corridors, or anywhere a wrong assumption can turn into a hard brake or a sudden swerve.
Regulatory attention has not gone away. Federal scrutiny has focused on reported behaviors such as wrong-side driving and red-light violations, and Tesla has faced major legal exposure from prior crashes. These facts fuel the public tension: if a system needs constant supervision, why are busy rideshare drivers, often working long hours, tempted to outsource attention to software? The answer appears to be fatigue relief and efficiency, but the risk tradeoff remains unresolved.
Rideshare Drivers Say FSD Helps Fatigue—But Errors Make Passenger Use Risky
Drivers report real benefits: less fatigue, a calmer pace, and fewer exhausting micro-decisions during long shifts. Some also describe FSD as a conversation starter with riders, which can defuse awkward silence and even boost tips. One widely cited estimate claims a significant share of Tesla rideshare drivers use FSD regularly, yet the same driver commentary notes many avoid using it with passengers specifically because of unpredictable mistakes in complicated environments.
Those conflicting behaviors (regular use, but selective use around passengers) signal the core issue: trust is conditional. FSD may handle highways well, but rideshare work is heavy on messy edge cases: curbside pickups, sudden lane changes by other drivers, pedestrians, and last-second reroutes. When errors happen, the driver’s split attention becomes a serious concern. The platform gets a completed trip; the driver absorbs the stress, the rating hit, and the liability exposure.
Crashes and “Overtrust” Warnings Intensify the Accountability Question
A reported April 2026 crash in the Las Vegas suburbs involving a Tesla carrying an Uber passenger renewed scrutiny of how semi-automated driving is being used in paid rides. Separately, former Uber self-driving executive Anthony Levandowski described a Tesla FSD crash in San Francisco that left him concussed. He used the incident to argue that near-perfect performance can condition drivers into a “passenger” mindset that is hard to snap out of when a sudden intervention is required.
That warning matters because it cuts across politics: conservatives have watched institutions sell “safe” systems and then shift blame downward for decades. Whether it’s bureaucrats dodging accountability, corporations burying risk in fine print, or regulators reacting after the fact, the pattern looks familiar. Here, Uber’s position is straightforward (drivers are responsible) while Tesla’s system is marketed as highly capable. When those two realities collide, the concern is not abstract: due process and a fair assignment of liability require clarity, not ambiguity.
For now, the best-supported conclusion from available reporting is limited: there is no clear evidence that Uber is actively “monitoring” FSD usage in real time beyond what its integration, data trails, and safety policies might allow after an incident. What is clear is that integration reduces friction, encourages routine use, and increases the odds that questions will be asked when something goes wrong. Until rules catch up, the person in the driver’s seat remains the legal and financial shock absorber.
Sources:
Uber Drivers Turn Self-Driving Tesla Into Robotaxis
Tales from an Uber driver using FSD
Former Uber exec describes Tesla FSD crash and self-driving AI risk (2026)