Change poses in images with AI: A complete guide to DragGAN

Let's use AI to change the way subjects pose in pictures

An example DragGAN output: making a cat behave with AI

Ever wanted to precisely manipulate an image, controlling each pixel with utmost precision? Wanted to experiment with the pose, shape, expression, and layout of generated objects, or people from existing images? Wanted to use AI to make your friends hug you so you don't feel so lonely?

Subscribe or follow me on Twitter for more content like this!

This guide walks you through using DragGAN, a transformative AI model that lets you interactively "drag" points of an image to target positions.

In this guide, you'll learn how to leverage DragGAN with Replicate (a model hosting service) to bring your imagination to life. With an average run time of just a few seconds, the DragGAN model we'll work with ranks 581st in popularity on AIModels.fyi. This guide will also illustrate how to find similar models using the site's search and filter features.

So, let's dive in and explore the realm of DragGAN.

About DragGAN

The DragGAN model we'll work with is a creation of zsxkib. It's a remarkable model that lets users deform an image with precise control over where pixels go, manipulating the pose, shape, expression, and layout of diverse categories of subjects. These manipulations are performed on the learned generative image manifold of a GAN, producing realistic outputs even in challenging scenarios.

One of the most striking aspects of DragGAN is its ability to hallucinate occluded content and deform shapes in ways that consistently follow the object's rigidity. In other words, the model can plausibly infer what objects hidden behind other objects might look like. For more in-depth details about the model, you can check out its details page.

The paper for the original model claims that the initial DragGAN implementation outperforms prior approaches on the tasks of image manipulation and point tracking. The model also supports impressive manipulation of real images through GAN inversion. It's an exciting development: you can now manipulate images in far less time and get higher-quality outputs.

Understanding the Inputs and Outputs of DragGAN

Before we delve into using DragGAN, it's essential to understand its inputs and outputs. The Replicate implementation of the model expects several input parameters.


  • only_render_first_frame: A boolean value. If true, only the first frame will be rendered, providing a preview of the initial and final positions.
  • show_points_and_arrows: Another boolean that toggles the display of arrows and points denoting the interpolation path in the generated video.
  • stylegan2_model: The chosen StyleGAN2 model to perform the operation. The model on Replicate has a predefined list of models.
  • source_x_percentage and source_y_percentage: These percentages define the starting x and y-coordinates, respectively.
  • target_x_percentage and target_y_percentage: Similar to the source coordinates, these percentages define the final x and y-coordinates.
  • learning_rate: Controls how quickly the model learns to drag the path from the initial to the final position.
  • maximum_n_iterations: Defines the maximum number of iterations allowed for the operation, limiting how long the path dragging can continue.
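
To make these parameters concrete, here's an illustrative input object. Every value below is an assumption for demonstration only; check the model's details page on Replicate for the accepted StyleGAN2 model names and sensible defaults.

```javascript
// Illustrative DragGAN input payload -- values are demonstration
// assumptions, not documented defaults.
const input = {
  only_render_first_frame: false, // render the full interpolation video
  show_points_and_arrows: true,   // overlay the drag path on the output
  stylegan2_model: "...",         // one of the model's predefined options
  source_x_percentage: 25,  // starting x-coordinate, as % of image width
  source_y_percentage: 50,  // starting y-coordinate, as % of image height
  target_x_percentage: 75,  // final x-coordinate, as % of image width
  target_y_percentage: 50,  // final y-coordinate, as % of image height
  learning_rate: 0.002,     // how aggressively points are dragged per step
  maximum_n_iterations: 200 // cap on optimization steps
};
```

This drags a point from 25% across the image to 75% across, at the same vertical position.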


The output of the DragGAN model is a string formatted as a URI. It represents the manipulated image after running the model.
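
Because the output is just a URI string, saving the result locally is straightforward. Here's a minimal sketch assuming Node 18+ (built-in fetch); `saveOutput` is a hypothetical helper name, not part of the Replicate client.

```javascript
// Download whatever the returned URI points at and write it to disk.
import { writeFile } from "node:fs/promises";

async function saveOutput(outputUri, filename) {
  const res = await fetch(outputUri);
  if (!res.ok) throw new Error(`Download failed with status ${res.status}`);
  await writeFile(filename, Buffer.from(await res.arrayBuffer()));
}
```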

Now that we have understood the basics of inputs and outputs, let's explore using the model.

A Step-by-Step Guide to Using DragGAN

Whether you are a code enthusiast or prefer a UI, we've got you covered. You can interact directly with the DragGAN "demo" on Replicate via the web UI, tweaking the parameters and validating the output. However, if you prefer a more code-based approach, or if you intend to build a product based on DragGAN, follow along as we guide you through setting up the model with Node.js.

Step 1: Install Node.js client and Set API Token

First, you will need to install the Node.js client. Using the command prompt, run the following command:

npm install replicate

Next, copy your API token and authenticate it by setting it as an environment variable:

export REPLICATE_API_TOKEN=r8_*************************************

Step 2: Run the Model

Now, it's time to run the model. Use the following code:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

// Use the full model identifier from the model's page on Replicate
const output = await replicate.run(
  "zsxkib/...:7e2c9c3440593761e924ce2a87ba52ae986238c760b069e48eb66ac08695eccf",
  {
    input: {
      only_render_first_frame: "...",
      // ...remaining input parameters described above
    },
  }
);

Remember, you can also set a webhook URL to be called when the prediction is complete. This can be done as follows:

const prediction = await replicate.predictions.create({
  version: "7e2c9c3440593761e924ce2a87ba52ae986238c760b069e48eb66ac08695eccf",
  input: {
    only_render_first_frame: "...",
    // ...remaining input parameters described above
  },
  webhook: "", // your publicly reachable webhook URL
  webhook_events_filter: ["completed"],
});
Taking it Further - Finding Other Image-to-Video Models with AIModels.fyi

AIModels.fyi is an incredible resource for discovering AI models catering to various creative needs, including image generation, image-to-image conversion, and much more. It allows you to compare models, explore by creator, and sort by price.

Interested in finding similar models to DragGAN? Here's how.

Step 1: Visit AIModels.fyi

Head over to AIModels.fyi to begin your search for similar models.

Step 2: Search for Models

Use the search bar at the top of the page to search for models with specific keywords, such as "Image-to-Video". This will show you a list of models related to your search query.

Step 3: Filter the Results

On the left side of the search results page, you'll find several filters that can help you narrow down the list of models. You can filter and sort models by type (Image-to-Image, Text-to-Image, etc.), cost, popularity, or even specific creators.

By applying these filters, you can find the models that best suit your specific needs and preferences.


Conclusion

In this guide, we introduced the remarkable DragGAN model and explained its potential for transforming images by dragging points interactively. We walked through using the model both via code and through Replicate's web UI. We also discussed how to leverage the search and filter features on AIModels.fyi to find similar models and compare their outputs, broadening our horizons in the world of AI-powered image manipulation.

I hope this guide has inspired you to explore the creative possibilities of AI and bring your imagination to life. For more tutorials, updates on new and improved AI models, and a wealth of inspiration for your next creative project, don't forget to subscribe to AIModels.fyi and follow me on Twitter. Enjoy exploring the world of AI!

Subscribe or follow me on Twitter for more content like this!