
Getting Creative with AI: Mastering the Wav2Lip Model

Learn how to harness the power of AI for lip-syncing videos with the Wav2Lip model
Make a picture of a person "speak" whatever you want with the Wav2Lip model.

Whether you're working on a dubbed movie project, producing a music video, or creating engaging educational content, matching lip movements with audio can be a daunting task. This is where the Wav2Lip AI model comes into play. It provides a sophisticated solution that uses audio input to generate a lip-synced video, making it a game-changer in the realm of content creation. Just upload a picture of your desired speaker and the audio recording you want them to "speak" - the model will give you a video that shows them lip-syncing to the audio!

Subscribe or follow me on Twitter for more content like this!

This guide will walk you through the nuances of using the Wav2Lip model, developed by the creator devxpy and currently ranking #35 in popularity on AIModels.fyi. We'll dive deep into its features, understand its inputs and outputs, and learn, step by step, how to use it to produce lip-synced videos. We'll also explore how to use AIModels.fyi to discover similar models and choose the one that best fits your needs. So, let's get started.

About the Wav2Lip Model

The Wav2Lip model, created by devxpy, offers a unique solution for creating lip-synced videos from an audio source. You can upload an image and an audio file, and the model will turn the two into a lip-synced video, with the subject of the picture appearing to speak the words of the audio file.

You can view an example in this video here (also embedded below).

As you'll see on the model's detail page, Wav2Lip is an Audio-to-Video model that runs on the powerful Nvidia A100 (40GB) GPU hardware. With an average runtime of just 7 seconds and a cost per run at a mere $0.0161 USD, it offers quick and cost-effective solutions for content creators.

The model enjoys considerable popularity with 576,015 runs, making it the 35th most-run model on AIModels.fyi, while devxpy holds the 25th position in the creator rank.

Understanding the Inputs and Outputs of the Wav2Lip Model

Before we dive into how to use the Wav2Lip model, let's explore the inputs it requires and the outputs it generates.


The Wav2Lip model requires the following inputs:

  1. face: A video or image file containing the face to use.
  2. audio: A video or audio file to use as the raw audio source.
  3. pads: A string specifying padding for the detected face bounding box, in the format "top bottom left right". You may need to adjust this so the box includes at least the chin.
  4. smooth: A boolean that controls whether face detections are smoothed over a short temporal window.
  5. fps: Frames per second; this can be specified only if the input is a static image.
  6. resize_factor: An integer that reduces the resolution by the given factor. Sometimes the best results are obtained at 480p or 720p.
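Putting these together, an input payload might look like the following sketch. Note that the file URLs and pad values below are placeholder examples, not values taken from the model's documentation:

```javascript
// Hypothetical input payload for the Wav2Lip model.
// The URLs are placeholders -- substitute your own hosted files.
const input = {
  face: "https://example.com/speaker.jpg",  // image or video containing the face
  audio: "https://example.com/speech.mp3",  // audio (or video) to lip-sync to
  pads: "0 10 0 0",                         // "top bottom left right" -- extend bottom to cover the chin
  smooth: true,                             // smooth detections over a short temporal window
  fps: 25,                                  // only meaningful for a static image input
  resize_factor: 1,                         // divide resolution by this factor (try 2 for large sources)
};

console.log(Object.keys(input).length);
```

Only `face` and `audio` are essential; the rest tune detection and output quality.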


The model's output follows a specific schema:

  {
    "type": "string",
    "title": "Output",
    "format": "uri"
  }
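In other words, the output is simply a URI string pointing at the generated video, which you can treat like any other file URL. A minimal sketch (the URL below is a made-up example, not real model output):

```javascript
// Hypothetical output value -- the model returns a URI pointing at the result video.
const output = "https://replicate.delivery/some-bucket/result.mp4";

// Parse it like any URL and derive a local filename to save it under.
const url = new URL(output);
const filename = url.pathname.split("/").pop();

console.log(filename);
```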

With these inputs and outputs defined, we are now ready to get our hands on the model and create a lip-synced video.

Using the Wav2Lip Model

Whether you're a coding enthusiast or prefer a more visual approach, the Wav2Lip model has you covered. For those who shy away from coding, the model provides a user-friendly interface on Replicate. You can use the demo link to interact directly with the model, play with its parameters, and get immediate feedback.

For those who want to dive into the code, follow the steps below to use the Wav2Lip model.

Step 1: Install the Node.js client

First, install the Node.js client by running npm install replicate in your terminal.

Step 2: Authenticate with your API token

Next, copy your API token and authenticate by setting it as an environment variable in your terminal with export REPLICATE_API_TOKEN=your_api_token.

Step 3: Run the Model

With the Node.js client installed and authenticated, you can now run the Wav2Lip model with the following code:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

const output = await replicate.run(
  "devxpy/cog-wav2lip:8d65e3f4f4298520e079198b493c25adfc43c058ffec924f2aefc8010ed25eef",
  {
    input: {
      face: "face_input",
      audio: "audio_input",
      // Other parameters as needed
    },
  }
);

Step 4: Set up a Webhook (Optional)

You can also specify a webhook URL to be called when the prediction is complete. This can be done as follows:

const prediction = await replicate.predictions.create({
  version: "8d65e3f4f4298520e079198b493c25adfc43c058ffec924f2aefc8010ed25eef",
  input: {
    face: "face_input",
    audio: "audio_input",
    // Other parameters as needed
  },
  webhook: "https://example.com/your-webhook",
  webhook_events_filter: ["completed"],
});
Setting up a webhook allows you to receive a notification when the prediction is complete, which can be particularly useful for long-running tasks.
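On your server, the webhook arrives as a JSON POST body describing the prediction. Here's a minimal sketch of how you might process that payload, assuming it carries the standard `status`, `output`, and `error` fields of a Replicate prediction object:

```javascript
// Sketch of webhook-payload handling. Replicate POSTs the prediction object
// as JSON; we assume the standard `status`, `output`, and `error` fields.
function handleWebhook(payload) {
  if (payload.status === "succeeded") {
    // `output` is the URI of the finished lip-synced video
    return { done: true, videoUrl: payload.output };
  }
  // "failed" or "canceled" predictions carry an `error` field instead
  return { done: false, error: payload.error ?? null };
}

// Example payload, as it might arrive on the "completed" event:
const result = handleWebhook({
  status: "succeeded",
  output: "https://replicate.delivery/some-bucket/result.mp4",
});
console.log(result.videoUrl);
```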

Taking it Further - Finding Other Audio-to-Video Models with AIModels.fyi

AIModels.fyi is a fantastic resource for discovering AI models that cater to various creative needs. It's a fully searchable, filterable, tagged database of all the models on Replicate, allowing you to compare models, sort by price, or explore by creator.

If you're interested in finding models similar to Wav2Lip, follow these steps:

Step 1: Visit AIModels.fyi

Head over to AIModels.fyi to begin your search for similar models.

Step 2: Search for Audio-to-Video Models

Use the search bar at the top of the page to search for models with specific keywords, such as "Audio-to-Video". This will show you a list of models related to your search query.

Step 3: Filter the Results

On the left side of the search results page, you'll find several filters that can help you narrow down the list of models. You can filter and sort by model type (Image-to-Image, Text-to-Image, etc.), cost, popularity, or even specific creators.


Conclusion

In this guide, we explored the remarkable capabilities of the Wav2Lip model. We dove into its features, learned about its inputs and outputs, and walked through the process of using it to create lip-synced videos. We also discussed how to leverage the search and filter features in AIModels.fyi to find similar models and compare their outputs.

This guide should inspire you to explore the creative possibilities of AI and bring your imagination to life. Don't forget to subscribe to AIModels.fyi's notes for more tutorials, updates on new and improved AI models, and a wealth of inspiration for your next creative project.

You can also follow me on Twitter for regular updates and insights into the world of AI.

Keep creating, keep exploring, and enjoy the journey through the world of AI with AIModels.fyi!
