NEW YORK: Meta Platforms, the parent company of Facebook, has announced the development of a new artificial intelligence model called Movie Gen, which can generate realistic video clips with synchronized sound in response to text-based user prompts.
The announcement positions Meta’s technology as a potential rival to tools from other AI innovators like OpenAI, ElevenLabs, and Runway.
According to Meta, Movie Gen can produce videos that are up to 16 seconds long, with accompanying audio that can extend up to 45 seconds. The model can create background music and sound effects that are in sync with the generated visuals. Meta also shared that the model allows users to edit existing video clips.
In a series of demonstrations, Meta showcased the capabilities of Movie Gen with a variety of examples. One video showed animals engaging in activities like swimming and surfing, while others used real photographs of people to depict them performing tasks such as painting on a canvas.
Another example showed Movie Gen editing a video to insert pom-poms into the hands of a man running through the desert, while a further clip transformed a dry parking lot into a waterlogged area where a man skateboarded through splashing puddles.
Meta emphasized Movie Gen's competitive performance, saying blind tests indicated that its outputs were comparable to those of rival offerings from OpenAI, ElevenLabs, and Runway.
Earlier this year, OpenAI showcased its Sora AI tool, capable of creating feature film-like videos based on text prompts.
Meta’s latest model is not yet available for open use by developers. Instead, the company has signaled that it is collaborating directly with content creators and the entertainment industry to explore practical applications of Movie Gen.