Why Runway is eyeing the robotics industry for future revenue growth


Runway has spent the past seven years building visual-generating tools for the creative industry. Now, it sees a new opportunity for its technology: robotics.

New York-based Runway is known for its video and photo generation AI world models, which create a simulated version of the real world. Most recently, the company released Gen-4, its video-generating model, in March and Runway Aleph, its video-editing model, in July.

As Runway’s world models started to improve — and get more realistic — the company began to receive inbound interest from robotics and self-driving car companies looking to use the tech, Anastasis Germanidis, Runway co-founder and CTO, told TechCrunch in an interview.

“We think that this ability to simulate the world is broadly useful beyond entertainment, even though entertainment is an ever increasing and big area for us,” Germanidis said. “It makes it much more scalable and cost effective to train [robotic] policies that interact with the real world whether that’s in robotics or in self driving.”

Germanidis said working with robotics and self-driving car companies was not something Runway initially envisioned when it launched in 2018. It wasn't until robotics companies and firms in other industries reached out that Runway realized its models had much broader use cases than originally thought, he said.

Robotics companies are using Runway's tech for training simulations, Germanidis said. Training robots and self-driving cars solely in real-world scenarios is costly, time-consuming, and hard to scale, he added.

While Runway knows its models aren't going to replace real-world training, Germanidis said companies can get a lot of value from running simulations on them because the simulations can get incredibly specific.


Unlike in real-world training, using these models makes it easier to test for specific variables and situations without changing anything else in the scenario, he added.

“You can take a step back and then simulate the effect of different actions,” he said. “If the car took this turn, or performed this action, what will be the outcome of that? Creating those rollouts from the same context is a really difficult thing to do in the physical world, to basically keep all the other aspects of the environment the same and only test the effect of the specific action you want to take.”

Runway isn’t the only company looking to tackle this. For instance, Nvidia released the latest version of its Cosmos world models, in addition to other robot training infrastructure, earlier this month.

The company doesn't anticipate releasing a "completely separate line of models" for its robotics and self-driving car customers, Germanidis said. Instead, Runway will fine-tune its existing models to better serve these industries. The company is also building a dedicated robotics team.

Germanidis added that while these industries weren't part of the company's initial pitches to investors, those investors are on board with the expansion. Runway has raised more than $500 million from backers including Nvidia, Google, and General Atlantic at a $3 billion valuation.

“The way we think of the company, is really built on a principle, rather than being on the market,” Germanidis said. “That principle is this idea of simulation, of being able to build a better and better representation of the world. Once you have those really powerful models, then you can use them for a wide variety of different markets, a variety of different industries. [The] industries we expect are there already, and they’re going to change even more as a result of the power of generative models.”
