Tesla Cybercab Manager Leaves, Boasts of ‘Pushing Safety’; xAI Exits

The program manager for Tesla’s Cybercab left the company days before the first vehicle was shown to the world. His LinkedIn farewell praised a team that had “pushed the boundaries of efficiency, safety, and affordability,” a phrase that landed like a small, strange stone in a still pond. I read that line and felt the room tilt—because when safety is framed as an experiment, you and I both start to squint.

Victor Nechita’s LinkedIn post is public, and you can read it yourself: after nearly six years at Tesla, he said he was leaving to “start a new chapter back on the east coast.” He traces a climb from a Model 3 production-line internship in 2017 to leading the Cybercab program—Tesla’s purpose-built, pedal- and steering-wheel-free vehicle that only functions if full autonomy works perfectly.

Giga Texas pushed a photo into public view, and the internet asked questions

The first Cybercab photo posted on X on Feb. 17 was oddly shrouded; parts of the vehicle were hidden in shadow. That single image made a lot of observers pause. Electrek and other outlets highlighted the timing: Nechita’s exit notice appeared three days earlier, and Tesla’s social post came right after.

You should know the facts: the Cybercab was revealed as a concept in late 2024 and promised in a two-to-three-year window. Tesla says the first car rolled off a production line at Giga Texas, but the image and the silence around hardware readiness leave a wide gap between a staged moment and a product you can buy or trust on the road.

Why did the Tesla Cybercab program manager leave?

He cited six years and a move to the east coast in his LinkedIn post, but there’s context beyond the post. Tesla has seen several program managers depart recently—including leads on Cybertruck and Model Y—and Elon Musk’s xAI has also lost many early staffers. Public exits like these raise questions for any moonshot project. I read those departures as signals, not proofs: a signal that internal pressure or strategic reshuffles are at play, and a signal that programs so dependent on software and hardware integration can be fragile when people leave.

A hardware timeline reads like a calendar of hurdles

Tesla is building the Cybercab on AI4 hardware today, while AI5 is scheduled for mid-2027. That delay matters. I track AI chips the way others follow earnings calls—because a car that must drive itself without a wheel is only as good as the compute and software behind it.

AI4 hasn’t reliably produced unsupervised full self-driving in public tests; AI5 is promised to help, but it isn’t shipping yet. That gap is a lot like a lighthouse whose light flickers—you might see a beam, but you wouldn’t steer your ship by it overnight. You should also factor in regulatory friction: state and city rules vary, and a Cybercab that’s legal in Austin may not be street-legal in a neighboring state.

Is the Cybercab safe to operate without a steering wheel?

Safety depends on software maturity, sensor fusion, testing, and regulation. Right now, Tesla’s robotaxi crash rate in public-sourced studies appears higher than human drivers in similar conditions—roughly four times higher in some analyses. Those figures came from cars with human monitors; they don’t prove the Cybercab’s concept will work without manual controls. If you’re thinking about safety, the hardware timeline, test data, and certification process matter more than a staged rollout photo.

Seats, stewards, and the politics of permission

Local transit boards are pushing back in places like San Diego, where the MTS unanimously opposed Waymo expansions. That kind of political resistance is as real as any engineering problem.

You can see how the story fragments: the technical questions (AI4 vs AI5), the human questions (program managers leaving), and the political questions (local agencies, the U.S. Department of Transportation trying to build uniform rules). Each of those layers can delay or derail a product more effectively than a single missing bolt.

Tesla’s public narrative leans on authority cues—Elon Musk’s promises, dramatic product reveals, and rolling-off-the-line moments. Media sites like Electrek and mainstream outlets have pieced together departures and photos and repeatedly returned to the same stubborn question: can Tesla deliver a car that refuses human control while keeping people safe?

Small exits, large perception effects

Siddhant Awasthi, Emmanuel Lamacchia, and now Victor Nechita—when program leads leave, the optics are harsh. I pay attention to how departures cluster, because they change the probability you assign to on-time, on-target launches.

People notice patterns. Musk said xAI’s departures were a restructuring; that’s plausible. But when you’re selling a mission that depends on solving one of tech’s hardest problems—full autonomy—recurrent exits make customers, regulators, and investors ask whether the ship’s crew has the experience and stamina to finish the voyage. The Cybercab’s promise rests on hardware, software, regulation, and above all, trust.

The Cybercab is framed as a product that only works if Tesla has solved complete autonomy. That’s a high bar. The public rollout needs not just a certified vehicle but a legal framework, reliable compute, and a track record of tested outcomes. Right now, the timeline, the photo, and the departures create more questions than answers.

Elon Musk, LinkedIn, X, Electrek, Waymo, the U.S. Department of Transportation, and local transit boards are all active players in this story. I’ll keep watching the hardware cadence, the hires and exits, and the regulatory filings—because those are the things that actually change probabilities, not press photos or rosy headlines.

I’ll say this plainly: you should treat a single staged photo and a LinkedIn goodbye as early signals, not as proof that a radically different kind of vehicle is ready for your street. The Cybercab needs months—maybe years—of software validation, hardware rollouts, and political permission before it becomes something you’d invite into your neighborhood.

The program manager’s phrase about “pushing the boundaries of safety” reads to me like a tightrope walker without a net—brave, but also a reminder that bravery and reliability are not the same thing. Will Musk and Tesla turn this experiment into a dependable reality, or will the promise of a driverless future remain a series of dramatic moments and delayed timelines?