Runway AI Inc. released its most advanced AI video generation model today, entering the next phase of the race to build tools that could transform film production. The new Gen-4 system introduces character and scene consistency across multiple shots, a capability that has eluded most AI video generators until now.
The New York-based startup, backed by Google, Nvidia and Salesforce, is rolling out Gen-4 to all paid subscribers and enterprise customers, with more features planned for later this week. Users can generate five- and ten-second clips at 720p resolution.
The release comes just days after OpenAI launched a new image generation feature that also enables character consistency across images. That release became a cultural phenomenon, with millions of users requesting Studio Ghibli-style images through ChatGPT. It was partly the consistency of the Ghibli style across chats that fueled the frenzy.
The viral trend became so popular that it soon crashed OpenAI’s servers, with CEO Sam Altman tweeting that “our GPUs are melting” due to unprecedented demand. The Ghibli-style images also sparked heated debates about copyright, with many questioning whether AI companies can legally mimic distinctive artistic styles.
Visual continuity: The missing piece in AI filmmaking until now
So if character consistency drove massive viral growth for OpenAI’s image feature, could the same happen for Runway in video?
Character and scene consistency, meaning maintaining the same visual elements across multiple shots and angles, has been the Achilles’ heel of AI video generation. When a character’s face subtly changes between cuts, or a background element disappears without explanation, the artificial nature of the content becomes immediately apparent to viewers.
The challenge stems from how these models work at a fundamental level. Earlier AI generators treated every frame as a separate creative task, with only loose connections between them. Imagine asking a room full of artists to each draw one frame of a film without seeing what came before or after; the result would be visually disjointed.
Runway’s Gen-4 appears to have tackled this problem by building what amounts to a persistent memory of visual elements. Once a character, object, or setting is established, the system can render it from different angles while maintaining its core attributes. This isn’t just a technical improvement; it’s the difference between creating interesting visual snippets and telling actual stories.
Using visual references, combined with instructions, Gen-4 allows you to create new images and videos with consistent styles, subjects, locations and more. Allowing for continuity and control within your stories.
To test the model’s narrative capabilities, we have put together… pic.twitter.com/IYz2BaeW2U
— Runway (@runwayml) March 31, 2025
According to Runway’s documentation, Gen-4 lets users provide reference images of subjects and describe the composition they want, with the AI producing consistent outputs from different angles. The company claims the model can render videos with realistic motion while maintaining subject, object, and style consistency.
To showcase the model’s capabilities, Runway released several short films created entirely with Gen-4. One film, “New York is a Zoo,” demonstrates the model’s visual effects by placing realistic animals in cinematic New York settings. Another, titled “The Retrieval,” follows explorers searching for a mysterious flower and was produced in less than a week.
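Runway has not published a request schema for this workflow, but as a purely hypothetical sketch, the reference-plus-prompt pattern described above might be modeled as a small request object. Every field and method name here is invented for illustration; only the constraints (five- or ten-second clips, 720p at launch, at least one reference image for consistency) come from the article.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Gen4Request:
    """Illustrative shape of a reference-conditioned video request.

    This is NOT Runway's actual API; field names are hypothetical.
    """
    prompt: str                  # text describing the desired composition
    reference_images: List[str]  # stills that establish the character/scene
    duration_s: int = 5          # Gen-4 clips run 5 or 10 seconds
    resolution: str = "720p"     # launch resolution

    def validate(self) -> None:
        # Enforce the constraints reported at launch.
        if self.duration_s not in (5, 10):
            raise ValueError("clip length must be 5 or 10 seconds")
        if self.resolution != "720p":
            raise ValueError("only 720p is available at launch")
        if not self.reference_images:
            raise ValueError(
                "at least one reference image is required "
                "for character/scene consistency"
            )


# Example: the same character rendered from a new angle.
req = Gen4Request(
    prompt="the same explorer, seen from a low angle in a jungle clearing",
    reference_images=["explorer_front.png"],
    duration_s=10,
)
req.validate()  # passes: 10-second clip, 720p, one reference still
```

The point of the sketch is the pairing: a reference image pins down who or what stays consistent, while the prompt varies the angle and composition from shot to shot.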
From facial animation to world models: Runway’s AI filmmaking evolution
Gen-4 builds on Runway’s earlier tools. In October, the company released Act-One, a feature that allows filmmakers to capture facial expressions from smartphone video and transfer them to AI-generated characters. The following month, Runway added advanced 3D-like camera controls to its Gen-3 Alpha Turbo model, letting users zoom in and out of scenes while preserving character styles.
This trajectory reveals Runway’s strategic vision. While competitors focus on creating ever more realistic single images or clips, Runway has been assembling the components of a complete digital production pipeline. The approach feels closer to how actual filmmakers work, addressing problems of performance, coverage, and visual continuity as interconnected challenges rather than isolated technical hurdles.
The evolution from facial animation tools to consistent world models suggests Runway understands that AI-assisted filmmaking needs to follow the logic of traditional production to be truly useful. It’s the difference between building a tech demo and building tools professionals can actually incorporate into their workflows.
AI video’s billion-dollar battle heats up
The financial stakes are substantial for Runway, which is reportedly raising a new funding round that would value the company at $4 billion. According to financial reports, the startup aims to reach $300 million in annualized revenue this year following the launch of new products and an API for its video-generating models.
Runway has pursued Hollywood partnerships, securing a deal with Lionsgate to create a custom AI video generation model based on the studio’s catalog of more than 20,000 titles. The company has also established the Hundred Film Fund, offering filmmakers up to $1 million to produce movies using AI.
“We believe that the best stories are yet to be told, but that traditional funding mechanisms often overlook new and emerging visions within the larger industry ecosystem,” Runway explains on its fund’s website.
However, the technology raises concerns for film industry professionals. A 2024 study commissioned by the Animation Guild found that 75% of film production companies that have adopted AI have reduced, consolidated, or eliminated jobs. The study projects that more than 100,000 U.S. entertainment jobs will be affected by generative AI by 2026.
Copyright questions follow AI’s creative explosion
Like other AI companies, Runway faces legal scrutiny over its training data. The company is currently defending itself in a lawsuit brought by artists who allege their copyrighted work was used to train AI models without permission. Runway has cited the fair use doctrine in its defense, though courts have yet to definitively rule on this application of copyright law.
The copyright debate intensified last week with OpenAI’s Studio Ghibli feature, which allowed users to generate images in the distinctive style of Hayao Miyazaki’s animation studio without explicit permission. Unlike OpenAI, which declines to generate images in the style of living artists but permits studio styles, Runway has not publicly detailed its policies on style mimicry.
This distinction feels increasingly arbitrary as AI models grow more sophisticated. The line between learning from broad artistic traditions and copying specific creators’ styles has blurred to near invisibility. When an AI can convincingly mimic the visual language that took Miyazaki decades to develop, does it matter whether we ask it to copy the studio or the artist himself?
When questioned about its training data sources, Runway has declined to provide specifics, citing competitive concerns. This opacity has become standard practice among AI developers but remains a point of contention for creators.
As marketing agencies, educational content creators, and corporate communications teams explore how tools like Gen-4 could streamline video production, the question shifts from technical capability to creative application.
For filmmakers, the technology represents both opportunity and disruption. Independent creators gain access to visual effects capabilities previously available only to major studios, while traditional VFX and animation professionals face an uncertain future.
The uncomfortable truth is that technical limitations have never been what prevents most people from making compelling films. The ability to maintain visual continuity won’t suddenly create a generation of storytelling geniuses. What it will do, however, is remove enough friction from the process that more people can experiment with visual narrative without needing specialized training or expensive equipment.
Perhaps the most profound aspect of Gen-4 isn’t what it can create, but what it suggests about our relationship with visual media going forward. We’re entering an era where the bottleneck in production isn’t technical skill or budget, but imagination and purpose. In a world where anyone can create any image they can describe, the important question becomes: what’s worth showing?
As we enter an era where making a film requires little more than a reference image and a prompt, the most pressing question isn’t whether AI can make compelling videos, but whether we can find something meaningful to say when the tools to say anything are at our fingertips.