Runway releases its first world model, adds native audio to latest video model



The race to release world models is on as AI image and video generation company Runway joins an increasing number of startups and big tech companies by launching its first one. Dubbed GWM-1, the model works through frame-by-frame prediction, creating a simulation with an understanding of physics and how the world actually behaves over time, the company said.

A world model is an AI system that learns an internal simulation of how the world works so it can reason, plan, and act without needing to be trained on every scenario possible in real life.

Runway, which earlier this month launched its Gen 4.5 video model that surpassed both Google and OpenAI on the Video Arena leaderboard, said its GWM-1 world model is more “general” than Google’s Genie-3 and other competitors. The firm is pitching it as a model that can create simulations to train agents in different domains like robotics and life sciences.

Runway released the new world model in three variants: GWM-Worlds, GWM-Robotics, and GWM-Avatars.

Image Credits: Runway

GWM-Worlds is an app for the model that lets users create an interactive project. Users set a scene through a prompt, and as they explore the space, the model generates the world with an understanding of geometry, physics, and lighting. Runway said that while Worlds could be useful for gaming, it is also well positioned to teach agents how to navigate and behave in the physical world.

With GWM-Robotics, the company aims to train robots on synthetic data enriched with new parameters, such as changing weather conditions or obstacles. Runway says this method could also reveal when and how robots might violate policies and instructions in different scenarios.

Runway is also building realistic avatars under GWM-Avatars to simulate human behavior. Companies like D-ID, Synthesia, Soul Machines, and even Google have worked on creating human avatars that look real and work in areas like communication and training.


Besides releasing a new world model, the company is also updating its foundational Gen 4.5 model released earlier in the month. The new update brings native audio and long-form, multi-shot generation capabilities to the model. The company said that with this model, users can generate one-minute videos with character consistency, native dialogue, background audio, and complex shots from various angles.

The Gen 4.5 update nudges Runway closer to competitor Kling’s all-in-one video suite, which also launched earlier this month, particularly around native audio and multi-shot storytelling. It also signals that video generation models are moving from prototype to production-ready tools.

Runway’s updated Gen 4.5 model will be available to enterprise customers first and then to all paid plan users in the coming weeks.

Image Credits: Runway

The company said it will make GWM-Robotics available through an SDK and added that it is in active conversations with several robotics firms and enterprises about using GWM-Robotics and GWM-Avatars.



