Memes Come Alive with Stable Video Diffusion: The AI that Turns Simple Images into Captivating Animations


Artificial intelligence is appearing in more and more programs and applications, making it an ever greater part of our daily lives. Generative AI in particular has attracted the most interest, while also stirring fear. This type of AI is trained to create content; well-known examples include Stable Diffusion and Midjourney for generating images. Now a new AI model called Stable Video Diffusion (SVD) has emerged, which lets us create videos from images like these.

Most people have used generative AI at some point to create content such as images or artwork, with free tools like Bing AI, DALL-E, Stable Diffusion, and Midjourney (in its free tier). Many other AI models exist to choose from, but these are among the best known on PCs, especially Stable Diffusion, which sets itself apart from the rest.

Stable Video Diffusion enables users to create short videos from AI-generated images

Stable Diffusion uses the resources of your own PC, specifically your graphics card, to generate AI images. This has advantages and drawbacks: Midjourney and similar services handle the processing externally, but you depend on their service. With Stable Diffusion, everything runs locally on your PC without a subscription plan. That makes it free, and the results depend entirely on you and your hardware.

Stable Diffusion has improved rapidly since its inception, leading to the release of a new AI model capable of creating animations. Stable Video Diffusion is a free tool that converts images into videos using AI.

An NVIDIA RTX 3060 took 30 minutes to generate 14 animated frames

Before you get too excited and try it out, you should know that an NVIDIA GPU is required for this process. The same was true of earlier versions of Stable Diffusion, though methods were later found to make it run reliably on AMD GPUs. Returning to Stable Video Diffusion, it can transform any static image into a short video clip. It uses two AI models: one that converts the image into a 14-frame video, called SVD, and another that produces a 25-frame video, named SVD-XT.

These models can be set to run at 3 to 30 frames per second and produce an MP4 video lasting 2 to 4 seconds at a 576 x 1,024-pixel resolution. Ars Technica ran tests showing two-second example videos, albeit at a low frame rate, since higher rates take longer to generate. On an NVIDIA RTX 3060, the process took 30 minutes to generate 14 frames. To train the Stable Video Diffusion model, approximately 580 million video clips, or 212 years' worth of content, were used.
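The figures above are easy to sanity-check. A clip's duration is simply its frame count divided by its frame rate, and the reported training-set totals imply an average clip length:

```python
# Sanity-check the figures reported for Stable Video Diffusion.

FRAMES_SVD = 14      # frames produced by the base SVD model
FRAMES_SVD_XT = 25   # frames produced by SVD-XT

# Duration = frames / fps. At the supported 3-30 fps range,
# a 14-frame clip spans roughly half a second to nearly five seconds.
shortest = FRAMES_SVD / 30
longest = FRAMES_SVD / 3
print(f"14 frames last {shortest:.2f}-{longest:.2f} s")

# Training set: ~580 million clips totalling ~212 years of footage,
# which works out to an average clip length of about 11.5 seconds.
seconds_per_year = 365.25 * 24 * 3600
avg_clip = 212 * seconds_per_year / 580e6
print(f"average training clip: {avg_clip:.1f} s")
```

At the 7 fps Ars Technica used, 14 frames come to exactly the two-second clips they showed.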

If you want to try Stable Video Diffusion, the AI model is available in its GitHub repository, and the remaining necessary files can be found on the Hugging Face website. If the frame rate seems too low, you can always use another AI, such as DAIN, to interpolate additional frames.
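For illustration, here is one way the SVD-XT weights on Hugging Face can be run. This is a sketch using Hugging Face's `diffusers` library rather than the scripts in the official repository, and it assumes an NVIDIA GPU with enough VRAM and the `stabilityai/stable-video-diffusion-img2vid-xt` model ID; the input filename is a placeholder.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Requires an NVIDIA GPU, as the article notes.
if torch.cuda.is_available():
    # Load the 25-frame SVD-XT weights in half precision to save VRAM.
    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt",
        torch_dtype=torch.float16,
        variant="fp16",
    )
    pipe.to("cuda")

    # "meme.png" is a placeholder; resize to SVD's native 1024x576 resolution.
    image = load_image("meme.png").resize((1024, 576))

    # decode_chunk_size trades VRAM for speed when decoding frames.
    frames = pipe(image, decode_chunk_size=4).frames[0]
    export_to_video(frames, "meme.mp4", fps=7)
```

At 7 fps, the 25 generated frames yield a clip of around 3.5 seconds, within the 2-to-4-second range mentioned above.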

This article first appeared on El Chapuzas Informático.
