Use words and images to generate new videos out of existing ones

About Gen-1

With Gen-1, Runway is launching Video to Video, a form of generative AI that uses words and images to generate new videos out of existing ones. Runway is also the original startup behind Stable Diffusion.

AI systems for image and video synthesis are quickly becoming more precise, realistic and controllable. Runway Research is at the forefront of these developments and is dedicated to ensuring the future of creativity is accessible, controllable and empowering for all.

The Gen-1 explainer video shows the five initial use cases:

  • Mode 01 Stylization: Transfer the style of any image or prompt to every frame of your video.
  • Mode 02 Storyboard: Turn mockups into fully stylized and animated renders.
  • Mode 03 Mask: Isolate subjects in your video and modify them with simple text prompts.
  • Mode 04 Render: Turn untextured renders into realistic outputs by applying an input image or prompt.
  • Mode 05 Customization: Unleash the full power of Gen-1 by customizing the model for even higher fidelity results.

Gen-1 screenshots
