Friday, March 13, 2026

Raquel Urtasun on Level-4 Autonomous Trucks

Raquel Urtasun has spent 16 years in the self-driving space, long enough to ride every glorious hill and plunging valley of the hype cycle. She has made the trip from the early “pipe dream” dismissals to the “we’re this close” certainty, and back again.

The industry is now riding a new wave of optimism and investment, including at Waabi Innovation Inc., the autonomous trucking company that Urtasun founded in 2021. The Spanish-Canadian professor at the University of Toronto, and former chief scientist of Uber’s Advanced Technologies Group, has helped make Waabi a key player. Since fall 2023, the Toronto-based startup has been running geofenced cargo routes from Dallas to Houston in a fleet of retrofitted Peterbilt semis, navigating even residential streets in loaded, 36,000-kilogram (80,000-pound) behemoths with no human aboard.

In October, the company reached a milestone by integrating its “Waabi Driver” physical-AI system into Volvo’s new VNL Autonomous truck, which the Swedish automaker is building in Virginia. That self-driving solution uses Nvidia’s Drive AGX Thor, an AI-based platform for autonomous and software-defined vehicles.

In January, the company raised $750 million in its latest funding round to expand its self-driving system into the fiercely competitive robotaxi space. Backers include Khosla Ventures, Nvidia, and Volvo.

Urtasun says the Waabi Driver can scale across a full range of vehicles, geographies, and environments — although snowstorms can still create a no-go zone for now. It’s powered by what Urtasun calls the industry’s most advanced neural simulator, allowing a “shared brain” that partners can transplant into cars, trucks, and pretty much anything on wheels. The idea is to grab a chunk of a global autonomous trucking business that McKinsey estimates could be worth more than $600 billion a year by 2035, with autonomous haulers responsible for 15 percent of total U.S. trucking miles as early as 2030.

Backed by an additional $250 million from Uber, Waabi plans to deploy at least 25,000 autonomous taxis through Uber’s ride-hailing service, whose world-dominating reach encompasses 70 countries, about 15,000 cities and more than 200 million monthly users.

Urtasun spoke with IEEE Spectrum about how Waabi is counting on sensors and simulation to prove real-world safety, and why she sees the move to autonomy as a moral imperative that outweighs the disruption for human drivers—whether they’re driving trucks or family sedans. Our conversation was edited for length and clarity.

The Shift to Next-Gen Autonomous Vehicles

IEEE Spectrum: Until quite recently, autonomous tech seemed to have hit a wall, at least in the public’s mind. Now investors are flooding the zone again, and companies are all-in. What happened?

Raquel Urtasun: There were a lot of empty promises, or [people] not realizing the complexity of the problem. There was a realization that actually, this problem is harder than people anticipated. It’s also because of the type of technology that was developed at the time, what we call “AV 1.0”. These are hand-engineered systems that need to be brute-forced by humans. You need lots of capital and a massive amount of miles on the road just to get to the first deployment.

What you see with the next generation—AV 2.0 and systems that can reason—is that you finally have a solution that scales. When we started the company, this was a very contrarian view. But today, the breakthroughs in AI have made it clear that this is the next big revolution. It’s not just about more compute; it’s about building a brain that can generalize. That is the “aha moment” the industry is having now.

Even for someone who believes in the tech, seeing a driverless semi-trailer in your rear-view mirror might be unsettling. Now you’ve integrated your tech into the aerodynamic, diesel-powered Volvo VNL Autonomous truck. How do you convince regulators and the public that these trucks belong on the street?

Urtasun: Safety, when you think about carrying 80,000 pounds on this massive rig, is definitely top of mind. We believe the only way to do this safely is with a redundant platform that is fully developed and validated by the OEM, not with a retrofit. The OEM builds a special type of truck that has all the redundant steering, power, and braking, so that no matter what happens, there is always a way we can interface with and activate that truck in a safe manner. Then we are responsible for the sensors, the compute, and obviously the brain that drives those trucks.

AI’s Impact on Trucking Jobs

One of the biggest points of contention is the displacement of human drivers. As AI disrupts a range of workplaces, how do you respond to people who say this will eliminate good-paying, blue-collar jobs?

Urtasun: The way we see this is that everybody who’s a truck driver today, and wants to retire as a truck driver, will be able to do so. This is physical AI; this is not like the digital world, where you can suddenly switch to a technology immediately. That adoption and scaling is going to take time. There will also be many jobs created with this technology: remote operations, terminal operations, and other things. You have time to change the form of labor from being on the road for weeks at a time—and it’s a really difficult and dehumanizing job, let’s be honest—to something you can do locally. There was an interesting [U.S.] Department of Transportation study showing that, because of this gradual adoption, more jobs will be created than removed.

You’ve spoken about a personal motivation behind this. Why do you believe the advantages of autonomy outweigh any growing pains, including the potential for unexpected accidents or even deaths?

Urtasun: There are 2 million deaths on the road globally per year, and nobody’s questioning that. That’s the status quo. If you think the machines have to be perfect before you deploy, you are actually sacrificing many humans along the way that you could have saved. Human error is a factor in 90 to 96 percent of accidents. Those could be preventable accidents. Some accidents will always be unavoidable; a tire could blow for a machine the same as it could for a human. But the important comparison is how much safer we are. This technology is the answer to many, many things.

Most of the industry is focused on “hub-to-hub” highway driving. But you’ve argued that Waabi’s AI can handle the complexity of local streets.

Urtasun: The rest of the industry has gone with this business model where you need hubs next to the highway. This adds a lot of friction and cost. Thanks to our verifiable end-to-end AI system, we can drive in surface [local] streets. We can do unprotected lefts, traffic lights, and tight turns. These core capabilities enable us to drive all the way to the end customer. We are already hauling commercial loads for customers like Samsung through our Uber Freight partnership.

You’ve mentioned that Waabi doesn’t like to talk about “number of miles” driven as a metric. For an engineering audience, that sounds counterintuitive. How does your “simulation-first” approach replace the need for real-world road time?

Urtasun: In the industry, miles have been used as a proxy for advancement. How many miles does Tesla need to drive to see any of these situations? But we are a simulation-first company. Waabi World can simulate all the sensors, the behaviors of humans, everything. It is the only simulator where you can mathematically prove that testing and driving in simulation is the same as driving in the real world. You can expose the system to billions of simulations in the cloud. This is what allows us to be so capital efficient and fast.

Verifiable AI vs. Black Box Systems

What is the difference between your “interpretable” AI and the “black box” systems we see elsewhere?

Urtasun: We’ve seen an evolution in passenger cars from level-2+ systems to end-to-end, black-box architectures. But those are not verifiable. You cannot validate and verify those systems, which is a massive problem when you think about regulators and OEMs trusting that technology.

What Waabi has built is end-to-end, but fully verifiable. The system is forced to interpret what it is perceiving and to use those interpretations for reasoning, so that it can understand the consequences of every action. It is much more akin to how our brains actually work: your “Type 2” thinking, where you start thinking about cause and effect and consequences, and then you typically make a much better choice in your maneuver.

Tesla is famously, and controversially, relying on camera data almost exclusively to run and improve its self-driving systems. You’re not a fan of that approach?

Urtasun: We use multiple sensors: lidar, camera, and radar. That’s very important because the failure modes of those sensors are very different and they’re very complementary. We don’t compromise safety to reduce the bill-of-materials cost today.

Those [passenger car] level-2+ systems are not architected for level 4, where there’s no human on board. People don’t necessarily realize there is a huge difference in terms of the bar when there is no human to rely on. It’s not, “Well, if I don’t have a lot of system interventions, I’m almost there.” That’s not a metric. We are native level 4. We decide which areas the system can drive in, and in what conditions. We are building technology that can drive different form factors—trucks or robotaxis—with the same brain.

Reference: https://ift.tt/7x3wu9B
