Runway claims that its latest text-to-video model produces even more accurate visuals than its previous models. In a blog post on Monday, Runway said its Gen-4.5 model can produce “cinematic and highly realistic output,” potentially making it even more difficult to distinguish real footage from AI-generated video.
“Gen-4.5 achieves unprecedented anatomical precision and visual accuracy,” Runway’s announcement said. The company says the new AI model is better at following prompts, allowing it to produce detailed scenes without compromising video quality. Runway says AI-generated objects “move with realistic weight, speed, and force,” while fluids “flow with appropriate dynamics.”
According to Runway, the Gen-4.5 model is gradually rolling out to all users and offers the same speed and efficiency as its predecessor. There are still some limitations, though: the model may struggle with object permanence and causal logic, meaning an effect can occur before its cause, such as a door opening before someone turns the handle.
Runway isn’t the only company working to make AI-generated videos more lifelike. OpenAI highlighted upgrades in physics with the release of its Sora 2 text-to-video model in September, with Sora head Bill Peebles saying, “You can accurately backflip on top of a paddleboard over a body of water, and all the fluid dynamics and buoyancy are accurately modeled.”
Runway says its Gen-4.5 model is also better at handling different visual styles, allowing it to produce more consistent photorealistic, stylized, and cinematic scenes. The startup claims that photorealistic scenes created with Gen-4.5 can be “indistinguishable from real-world footage with lifelike detail and accuracy.”