The Queue API makes it easy to manage inference jobs for your deployed ComfyUI workflows. You can use it to submit new jobs, track their progress, retrieve results, and cancel them whenever necessary.

Queue Endpoints

Base URL: https://api.runcomfy.net
Endpoint | Method | Description
/prod/v1/deployments/{deployment_id}/inference | POST | Run workflow
/prod/v1/deployments/{deployment_id}/requests/{request_id}/status | GET | Check job status
/prod/v1/deployments/{deployment_id}/requests/{request_id}/result | GET | Get job result
/prod/v1/deployments/{deployment_id}/requests/{request_id}/cancel | POST | Cancel job

Common Path Parameters

  • deployment_id: String (required). The unique ID of your deployed workflow, returned when you create a deployment. It tells the server which workflow to run.
  • request_id: String (required for all endpoints except submit). The unique ID of a specific job, returned when you submit a request.

Submit a Request

POST /prod/v1/deployments/{deployment_id}/inference
Use this endpoint to start a new inference job. When you deploy a workflow on RunComfy, the platform stores its workflow_api.json as the base configuration for that deployment. By default, each POST to this endpoint runs that cloud-saved workflow and lets you tweak inputs via overrides. For advanced cases, you can instead include a completely different workflow_api.json inline and skip the cloud-saved workflow entirely.

Use Cloud-Saved Workflow

In this mode, the server uses the deployment’s cloud-saved workflow_api.json and applies any per-request changes you send in overrides. Unspecified values fall back to the saved defaults. Overrides reference the workflow_api.json, target specific nodes by ID, and update only their inputs. For guidance on valid ranges, types, and defaults, refer to object_info.json. Read more about workflow_api.json and object_info.json here. The following snippet shows two nodes from a sample flux_workflow_api.json file, illustrating the typical structure with inputs such as text prompts and seeds. View the complete example here: runcomfy-flux-workflow-api.json.
{
  "6": {
    "inputs": {
      "text": "Add ASCII style text only the single word \"Kontext\" no additional letters to the display",
      "speak_and_recognation": {
        "__value__": [
          false,
          true
        ]
      },
      "clip": [
        "38",
        0
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Positive Prompt)"
    }
  },
  "31": {
    "inputs": {
      "seed": 736220757721744,
      "steps": 20,
      "cfg": 1,
      "sampler_name": "euler",
      "scheduler": "simple",
      "denoise": 1,
      "model": [
        "37",
        0
      ],
      "positive": [
        "35",
        0
      ],
      "negative": [
        "135",
        0
      ],
      "latent_image": [
        "124",
        0
      ]
    },
    "class_type": "KSampler",
    "_meta": {
      "title": "KSampler"
    }
  }
}
Based on the above snippet, overrides can be written as follows to update the prompt and seed without altering other parts of the workflow:
{
  "overrides": {
    "6": {
      "inputs": {
        "text": "futuristic cityscape"
      }
    },
    "31": {
      "inputs": {
        "seed": 987654321
      }
    }
  }
}

Request Example - Basic

For example, this request overrides the seed in KSampler (ID 31) and the text in CLIPTextEncode (ID 6) from the flux_workflow_api.json snippet above.
curl --request POST \
  --url https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/inference \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <token>" \
  --data '{
    "overrides": {
      "31": {
        "inputs": {
          "seed": 987654321
        }
      },
      "6": {
        "inputs": {
          "text": "futuristic cityscape"
        }
      }
    }
  }'
The overrides object lets you replace specific node input values in the deployed workflow; a minimal Python sketch of the same request follows the list below.
  • Keys: Node IDs as strings (e.g., "31"), matching your workflow’s API JSON exactly.
  • Values: Objects with an inputs field containing input names and updated values (e.g., { "inputs": { "seed": 987654321 } }).
  • Behavior & validation: Only the provided inputs are changed (all others keep their saved values); keys, input names, and value types must conform to your workflow’s API JSON schema.
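If you are calling the API from code rather than curl, the same request can be expressed in a few lines. The following is a minimal sketch, assuming the third-party Python requests library; DEPLOYMENT_ID and API_TOKEN are placeholders, and the node IDs "6" and "31" come from the sample workflow above.
# Minimal sketch: submit an inference job with overrides.
# Assumes the third-party "requests" library; DEPLOYMENT_ID and API_TOKEN are placeholders.
import requests

API_BASE = "https://api.runcomfy.net/prod/v1"
DEPLOYMENT_ID = "<deployment_id>"
API_TOKEN = "<token>"

payload = {
    "overrides": {
        "6": {"inputs": {"text": "futuristic cityscape"}},  # CLIPTextEncode prompt
        "31": {"inputs": {"seed": 987654321}},               # KSampler seed
    }
}

resp = requests.post(
    f"{API_BASE}/deployments/{DEPLOYMENT_ID}/inference",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()
job = resp.json()
print(job["request_id"], job["status_url"])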

Request Example - Image/Video

You can provide an image or video input in two ways: via URL or via Base64 data URI. Via URL: override the image/video input with a public URL. Recommended limits: images ≤50MB (around 4K); videos ≤100MB (about 2–5 minutes at 720p).
curl --request POST \
  --url https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/inference \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <token>" \
  --data '{
    "overrides": {
      "189": {
        "inputs": {
          "image": "https://example.com/new-image.jpg"
        }
      }
    }
  }'
Via Base64 Data URI: override the image/video input with a Base64-encoded data URI. Large files may slow down requests. Recommended limits: images ≤256KB (around 512x512); videos ≤1MB for short clips (around 480p).
curl --request POST \
  --url https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/inference \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <token>" \
  --data '{
    "overrides": {
      "189": {
        "inputs": {
          "image": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD..."
        }
      }
    }
  }'
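To build the data URI programmatically, one approach is to Base64-encode a local file and place the result in the override. The following is a minimal sketch, assuming the third-party Python requests library; the node ID "189" and the image input name follow the curl example above and will differ per workflow.
# Minimal sketch: encode a local image as a Base64 data URI and submit it as an override.
# Assumes "requests"; node ID "189" and "input.jpg" are illustrative placeholders.
import base64
import requests

API_BASE = "https://api.runcomfy.net/prod/v1"
DEPLOYMENT_ID = "<deployment_id>"
API_TOKEN = "<token>"

with open("input.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")
data_uri = f"data:image/jpeg;base64,{encoded}"

payload = {"overrides": {"189": {"inputs": {"image": data_uri}}}}

resp = requests.post(
    f"{API_BASE}/deployments/{DEPLOYMENT_ID}/inference",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["request_id"])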

Request Example - API Nodes

If your workflow uses ComfyUI API Nodes that require an API key, include it in the request body as extra_data.api_key_comfy_org. The key is passed to the node at runtime, used only for the duration of the request, and discarded upon completion. Note: extra_data and overrides are top-level sibling fields in the JSON payload; keep them side by side.
curl --request POST \
  --url https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/inference \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <token>" \
  --data '{
    "overrides": {
      "10": {
        "inputs": {
          "prompt": "a golden retriever playing in a park"
        }
      }
    },
    "extra_data": {
      "api_key_comfy_org": "<comfy_org_api_key>"
    }
  }'
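As an illustration of the payload shape, the following minimal Python sketch builds the same request with overrides and extra_data as siblings; it assumes the third-party requests library and reads the key from an environment variable named COMFY_ORG_API_KEY, which is just an example name.
# Minimal sketch: overrides and extra_data as top-level siblings.
# Assumes "requests"; COMFY_ORG_API_KEY is an illustrative env var name.
import os
import requests

payload = {
    "overrides": {
        "10": {"inputs": {"prompt": "a golden retriever playing in a park"}}
    },
    "extra_data": {
        "api_key_comfy_org": os.environ["COMFY_ORG_API_KEY"]
    },
}

resp = requests.post(
    "https://api.runcomfy.net/prod/v1/deployments/<deployment_id>/inference",
    json=payload,
    headers={"Authorization": "Bearer <token>"},
)
print(resp.json())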

Response Example

{
  "request_id": "{request_id}",
  "status_url": "https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/status",
  "result_url": "https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/result",
  "cancel_url": "https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/cancel"
}
Successful requests return a 200 OK status with a JSON object providing job tracking details.
  • request_id (string): Unique identifier for the job.
  • status_url (string): URL to poll for job progress.
  • result_url (string): URL to fetch outputs once the job completes.
  • cancel_url (string): URL to cancel the job.

Send a Dynamic Workflow or Any Workflow JSON

If you need to run a workflow other than the cloud-saved one, provide the full ComfyUI Workflow API JSON in the top-level workflow_api_json field and omit overrides. In this mode, the underlying ComfyUI instance executes the inline workflow and ignores the cloud-saved one for that request. This allows you to send any workflow to your deployment endpoint, for instance, one that dynamically inserts an upscaler node based on user input. The only requirement is that the deployed machine template already contains the nodes, models, and ComfyUI version needed by the new workflow.

Request Example

curl --request POST \
  --url https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/inference \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <token>" \
  --data '{
    "workflow_api_json": {your-full-workflow-api-json-here}
  }'
Other request semantics are unchanged: image/video uploads and API node credentials behave the same as in the cloud-saved mode.
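For example, a client might load a full Workflow API JSON from disk and send it inline. The following is a minimal sketch, assuming the third-party Python requests library; my_workflow_api.json is a placeholder file name.
# Minimal sketch: run an inline workflow instead of the cloud-saved one.
# Assumes "requests"; "my_workflow_api.json" is a placeholder file name.
import json
import requests

with open("my_workflow_api.json") as f:
    workflow = json.load(f)

resp = requests.post(
    "https://api.runcomfy.net/prod/v1/deployments/<deployment_id>/inference",
    json={"workflow_api_json": workflow},
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()
print(resp.json()["request_id"])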

Response Example

The response format is identical to the cloud-saved workflow mode:
{
  "request_id": "{request_id}",
  "status_url": "https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/status",
  "result_url": "https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/result",
  "cancel_url": "https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/cancel"
}

Monitor Request Status

GET /prod/v1/deployments/{deployment_id}/requests/{request_id}/status
Use this endpoint to check the current state of a queued or running job. Poll this endpoint periodically for updates on progress.

Request Example

curl --request GET \
  --url https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/status \
  --header "Authorization: Bearer <token>"

Response Example

{
  "request_id": "{request_id}",
  "status": "in_queue",
  "queue_position": 0,
  "result_url": "https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/result",
  "status_url": "https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/status",
  "instance_id": "{instance_id}"
}
Successful requests return a 200 OK status with a JSON object describing the job’s state; a minimal polling sketch in Python follows the list below.
  • status (string): Current state (e.g., in_queue, in_progress, completed, or cancelled).
  • For in_queue:
    • queue_position (integer): Your position in the queue.
    • result_url (string): URL where results will be available.
  • For in_progress, completed, or cancelled:
    • result_url (string): The URL to get the final outputs (available only when completed or cancelled).
    • instance_id (string): Identifier of the live instance that is processing (or has processed) this job. Use it with the Instance Proxy to call ComfyUI native endpoints. See Instance Proxy Endpoints.
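The loop below is a minimal polling sketch, assuming the third-party Python requests library and the status_url returned by the submit call; production code would add timeouts and error handling.
# Minimal sketch: poll the status URL until the job leaves the queued/running states.
# Assumes "requests"; status_url comes from the submit response.
import time
import requests

headers = {"Authorization": "Bearer <token>"}
status_url = "https://api.runcomfy.net/prod/v1/deployments/<deployment_id>/requests/<request_id>/status"

while True:
    status = requests.get(status_url, headers=headers).json()
    print(status["status"], status.get("queue_position"))
    if status["status"] not in ("in_queue", "in_progress"):
        break
    time.sleep(5)  # polling interval is a client-side choice, not an API requirement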

Retrieve Request Results

GET /prod/v1/deployments/{deployment_id}/requests/{request_id}/result
Use this endpoint to fetch outputs once the job status is completed. Note: Output media are retained for 7 days; after that they are no longer available.

Request Example

curl --request GET \
  --url https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/result \
  --header "Authorization: Bearer <token>"

Response Example

{
  "request_id": "{request_id}",
  "status": "succeeded",
  "outputs": {
    "136": {
      "images": [
        {
          "url": "https://example.com/ComfyUI_00001_.png",
          "filename": "ComfyUI_00001_.png",
          "subfolder": "",
          "type": "output"
        }
      ]
    }
  },
  "created_at": "2025-07-22T13:05:16.143086",
  "finished_at": "2025-07-22T13:13:03.624471",
  "instance_id": "{instance_id}"
}
Successful requests return a 200 OK status with a JSON object containing the job’s final details; a minimal Python sketch for downloading the outputs follows the list below.
  • status (string): Outcome (e.g., succeeded, failed, in_queue, in_progress, cancelled).
  • For succeeded:
    • outputs (object): Generated files, keyed by output node ID; each file includes:
      • url (string): Direct link to the output file (e.g., https://example.com/outputs/ComfyUI_00001_.png).
      • filename (string): Name of the output file.
    • created_at (string): When the request was created.
    • finished_at (string): When the request completed.
  • For failed:
    • error: Details on why the job failed.
    • created_at (string): When the request was created.
    • finished_at (string): When the request failed or was cancelled.
  • For in_queue, in_progress, or cancelled:
    • created_at (string): When the request was created.
    • finished_at (string, only for cancelled): When the request was cancelled.
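Once the job has succeeded, a client can fetch the result and download each output file. The following is a minimal sketch, assuming the third-party Python requests library and the result_url returned earlier; it walks the outputs object keyed by node ID, as in the response above.
# Minimal sketch: fetch the result and save each output image locally.
# Assumes "requests"; result_url comes from the submit or status response.
import requests

headers = {"Authorization": "Bearer <token>"}
result_url = "https://api.runcomfy.net/prod/v1/deployments/<deployment_id>/requests/<request_id>/result"

result = requests.get(result_url, headers=headers).json()
if result["status"] == "succeeded":
    for node_id, node_output in result["outputs"].items():
        for image in node_output.get("images", []):
            data = requests.get(image["url"]).content
            with open(image["filename"], "wb") as f:
                f.write(data)
            print(f"node {node_id}: saved {image['filename']}")
else:
    print(result["status"], result.get("error"))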

Cancel a Request

POST /prod/v1/deployments/{deployment_id}/requests/{request_id}/cancel
Use this endpoint to cancel a queued or running job.

Request Example

curl --request POST \
  --url https://api.runcomfy.net/prod/v1/deployments/{deployment_id}/requests/{request_id}/cancel \
  --header "Authorization: Bearer <token>"

Response Example

{
  "request_id": "{request_id}",
  "status": "completed",
  "outcome": "cancelled"
}
Successful requests return a 202 Accepted status with a JSON object containing the cancellation outcome; a minimal Python sketch follows below.
  • outcome (string): Outcome (e.g., cancelled if accepted, not_cancellable if the job is completed or otherwise cannot be cancelled).
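The following minimal sketch cancels a job from Python, assuming the third-party requests library and the cancel_url returned when the job was submitted.
# Minimal sketch: cancel a queued or running job.
# Assumes "requests"; cancel_url comes from the submit response.
import requests

cancel_url = "https://api.runcomfy.net/prod/v1/deployments/<deployment_id>/requests/<request_id>/cancel"
resp = requests.post(cancel_url, headers={"Authorization": "Bearer <token>"})
print(resp.status_code, resp.json().get("outcome"))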
By using these endpoints, you can fully manage the lifecycle of your ComfyUI inference jobs, from submission and monitoring to retrieving results and canceling tasks, directly through the API.