Detailed reference for all 10 tools exposed by the RunComfy MCP server. Each tool maps directly to a Serverless API (ComfyUI) endpoint.

Deployment management

list_deployments

List all Serverless API deployments in your account. Backs: GET /prod/v2/deployments
| Parameter | Type | Required | Description |
|---|---|---|---|
| ids | string[] | No | Filter to specific deployment IDs |
| include_payload | boolean | No | Include workflow_api_json, overrides, and object_info_url |
| include_readme | boolean | No | Include the deployment’s README markdown |
Example arguments:
{}
Example response (structuredContent):
{
  "ok": true,
  "deployments": [
    {
      "id": "a1b2c3d4-...",
      "name": "text-to-image",
      "workflow_id": "00000000-...",
      "workflow_version": "v1",
      "hardware": ["AMPERE_48"],
      "min_instances": 0,
      "max_instances": 1,
      "status": "standby",
      "is_enabled": true
    }
  ]
}

get_deployment

Get one deployment by ID, optionally including its full workflow graph. Backs: GET /prod/v2/deployments/{deployment_id}
| Parameter | Type | Required | Description |
|---|---|---|---|
| deployment_id | string | Yes | The deployment’s UUID |
| include_payload | boolean | No | Include workflow_api_json and node schemas; use this to discover node IDs for submit_request |
| include_readme | boolean | No | Include the deployment’s README |
Tip: Call with include_payload=true to see every node’s ID and input names. Use those to build the overrides object for submit_request.
Example arguments:
{
  "deployment_id": "a1b2c3d4-...",
  "include_payload": true
}
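Once the payload is in hand, finding override targets is a matter of walking the workflow graph. A minimal sketch, assuming the standard ComfyUI API format (a dict keyed by node ID, each node carrying an "inputs" dict); the helper name is ours, not part of the MCP server:

```python
# Sketch: list override targets from a workflow_api_json payload returned by
# get_deployment with include_payload=true. Assumed shape: {node_id: {"inputs": {...}}}.

def list_override_targets(workflow_api_json: dict) -> dict:
    """Map each node ID to the input names you could override."""
    return {
        node_id: sorted(node.get("inputs", {}).keys())
        for node_id, node in workflow_api_json.items()
    }

# Example with a two-node graph:
workflow = {
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "a cat", "clip": ["4", 1]}},
    "31": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}},
}
print(list_override_targets(workflow))
# {'6': ['clip', 'text'], '31': ['seed', 'steps']}
```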

create_deployment

Create a new Serverless API (ComfyUI) deployment from a cloud-saved workflow. Backs: POST /prod/v2/deployments
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | Yes | – | Human-readable name |
| workflow_id | string | Yes | – | UUID of the ComfyUI workflow |
| workflow_version | string | Yes | – | Version label (e.g., "v1") |
| hardware | string | No | "AMPERE_48" | GPU SKU (see hardware table below) |
| min_instances | integer | No | 0 | Warm instance floor (0–30). Billable if > 0. |
| max_instances | integer | No | 1 | Concurrency ceiling (1–60) |
| queue_size | integer | No | 1 | Pending requests per instance before scaling |
| keep_warm_duration_in_seconds | integer | No | 60 | Idle timeout before scale-down |
Hardware SKUs:
TURING_16  |  AMPERE_24  |  AMPERE_48  |  ADA_48_PLUS
AMPERE_80  |  ADA_80_PLUS  |  HOPPER_141
Example arguments:
{
  "name": "my-flux-deployment",
  "workflow_id": "00000000-0000-0000-0000-000000001234",
  "workflow_version": "v1",
  "hardware": "AMPERE_48",
  "min_instances": 0,
  "max_instances": 2
}
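Client-side, the arguments above can be assembled with the documented defaults and range checks applied before calling the tool. A sketch under those assumptions; build_create_args is a hypothetical helper of ours, not part of the MCP server:

```python
# Sketch: build a create_deployment argument object, applying the documented
# defaults and validating the documented ranges (min_instances 0-30,
# max_instances 1-60) before the request ever leaves the client.

VALID_HARDWARE = {
    "TURING_16", "AMPERE_24", "AMPERE_48", "ADA_48_PLUS",
    "AMPERE_80", "ADA_80_PLUS", "HOPPER_141",
}

def build_create_args(name, workflow_id, workflow_version,
                      hardware="AMPERE_48", min_instances=0, max_instances=1,
                      queue_size=1, keep_warm_duration_in_seconds=60):
    if hardware not in VALID_HARDWARE:
        raise ValueError(f"unknown hardware SKU: {hardware}")
    if not 0 <= min_instances <= 30:
        raise ValueError("min_instances must be 0-30")
    if not 1 <= max_instances <= 60:
        raise ValueError("max_instances must be 1-60")
    return {
        "name": name,
        "workflow_id": workflow_id,
        "workflow_version": workflow_version,
        "hardware": hardware,
        "min_instances": min_instances,
        "max_instances": max_instances,
        "queue_size": queue_size,
        "keep_warm_duration_in_seconds": keep_warm_duration_in_seconds,
    }

# Mirrors the example arguments above:
args = build_create_args("my-flux-deployment",
                         "00000000-0000-0000-0000-000000001234", "v1",
                         max_instances=2)
```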
For LoRA deployments, create via the RunComfy UI (Trainer > LoRA Assets > Deploy), then use list_deployments to get the deployment_id.

update_deployment

Partially update a deployment. Only pass the fields you want to change. Backs: PATCH /prod/v2/deployments/{deployment_id}
| Parameter | Type | Required | Description |
|---|---|---|---|
| deployment_id | string | Yes | The deployment’s UUID |
| name | string | No | New name |
| workflow_version | string | No | New version label |
| hardware | string | No | New GPU SKU |
| min_instances | integer | No | New warm floor |
| max_instances | integer | No | New concurrency ceiling |
| queue_size | integer | No | New queue threshold |
| keep_warm_duration_in_seconds | integer | No | New idle timeout |
| is_enabled | boolean | No | false to pause, true to resume |
Example — pause a deployment:
{
  "deployment_id": "a1b2c3d4-...",
  "is_enabled": false
}

delete_deployment

Permanently delete a deployment. This cannot be undone. Backs: DELETE /prod/v2/deployments/{deployment_id}
| Parameter | Type | Required | Description |
|---|---|---|---|
| deployment_id | string | Yes | The deployment’s UUID |
Consider update_deployment with is_enabled=false to pause instead of deleting.

Inference

submit_request

Submit an async inference request to a deployment. Backs: POST /prod/v1/deployments/{deployment_id}/inference
| Parameter | Type | Required | Description |
|---|---|---|---|
| deployment_id | string | Yes | Target deployment |
| overrides | object | No | Partial workflow graph keyed by node ID (see example below) |
| workflow_api_json | object | No | Advanced: run a different workflow inline without updating the deployment |
| extra_data | object | No | E.g., {"api_key_comfy_org": "comfyui-..."} for ComfyUI Core API nodes |
| webhook_url | string | No | URL for push-based status updates |
| webhook_intermediate_status | boolean | No | Fire webhooks on every status change, not just terminal |
| wait_for_completion | boolean | No | If true, poll until done and return the result inline |
| timeout_seconds | integer | No | Max wait (default 300) when wait_for_completion=true |
Example — text-to-image with overrides:
{
  "deployment_id": "a1b2c3d4-...",
  "overrides": {
    "6": {
      "inputs": {
        "text": "a futuristic cityscape at sunset"
      }
    },
    "31": {
      "inputs": {
        "seed": 42
      }
    }
  }
}
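Programmatically, an overrides object like the one above can be composed from plain (node_id, input_name, value) triples. A small sketch; make_overrides is our own helper, and the node IDs match the example above:

```python
# Sketch: build the submit_request overrides object from flat triples.
# Each value is nested under that node's "inputs" dict, as in the example.

def make_overrides(*edits):
    """Each edit is (node_id, input_name, value)."""
    overrides = {}
    for node_id, input_name, value in edits:
        overrides.setdefault(node_id, {"inputs": {}})["inputs"][input_name] = value
    return overrides

payload = {
    "deployment_id": "a1b2c3d4-...",  # placeholder ID from the example above
    "overrides": make_overrides(
        ("6", "text", "a futuristic cityscape at sunset"),
        ("31", "seed", 42),
    ),
}
```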
File inputs — pass a public URL or Base64 data URI directly in the overrides value:
{
  "deployment_id": "a1b2c3d4-...",
  "overrides": {
    "189": {
      "inputs": {
        "image": "https://example.com/input-photo.jpg"
      }
    }
  }
}
Or using Base64:
{
  "deployment_id": "a1b2c3d4-...",
  "overrides": {
    "189": {
      "inputs": {
        "image": "data:image/jpeg;base64,/9j/4AAQ..."
      }
    }
  }
}
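When the input lives on disk rather than at a public URL, the data-URI form can be produced with the standard library alone. A sketch; to_data_uri is our name, and the MIME type must match your file's actual format:

```python
# Sketch: encode raw image bytes as a Base64 data URI for use in overrides.
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    return f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")

# Typical use (path is illustrative):
# with open("input-photo.jpg", "rb") as f:
#     overrides = {"189": {"inputs": {"image": to_data_uri(f.read())}}}
```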
Use get_deployment with include_payload=true to discover the node IDs and input names for your workflow.

get_request_status

Poll a request’s current status. Backs: GET /prod/v1/deployments/{deployment_id}/requests/{request_id}/status
| Parameter | Type | Required | Description |
|---|---|---|---|
| deployment_id | string | Yes | The deployment that owns this request |
| request_id | string | Yes | The request ID returned by submit_request |
Status lifecycle: in_queue > in_progress > completed (or cancelled). Example response:
{
  "ok": true,
  "status": {
    "request_id": "5f1ba692-...",
    "status": "in_progress",
    "queue_position": null,
    "instance_id": "1697cb1a-..."
  }
}
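For clients that poll manually instead of setting wait_for_completion, the lifecycle above suggests a simple loop. A sketch; fetch_status stands in for a real get_request_status call, and the terminal states follow the lifecycle above:

```python
# Sketch: poll until the request leaves the in_queue/in_progress states.
import time

def wait_for_terminal(fetch_status, interval_seconds=2.0, timeout_seconds=300.0):
    """fetch_status() should return a dict like {"status": "in_progress", ...}."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status["status"] in ("completed", "cancelled"):
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("request did not finish in time")
```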

get_request_result

Fetch the final outputs of a completed request. Backs: GET /prod/v1/deployments/{deployment_id}/requests/{request_id}/result
| Parameter | Type | Required | Description |
|---|---|---|---|
| deployment_id | string | Yes | The deployment that owns this request |
| request_id | string | Yes | The request ID |
Output URLs are hosted for 7 days after success. Download or copy them to your own storage for longer retention. Example response:
{
  "ok": true,
  "result": {
    "request_id": "5f1ba692-...",
    "status": "succeeded",
    "outputs": {
      "9": {
        "images": [
          {
            "url": "https://serverless-api-storage.runcomfy.net/.../ComfyUI_00001_.png",
            "filename": "ComfyUI_00001_.png"
          }
        ]
      }
    },
    "created_at": "2026-04-16T03:40:52.093Z",
    "finished_at": "2026-04-16T03:44:18.401Z"
  },
  "output_urls": [
    {
      "node_id": "9",
      "channel": "images",
      "url": "https://serverless-api-storage.runcomfy.net/.../ComfyUI_00001_.png",
      "filename": "ComfyUI_00001_.png"
    }
  ]
}
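The flat output_urls array mirrors result.outputs, so a client that only keeps the nested form can derive the flat one itself. A sketch assuming the node → channel → file-list shape shown in the example response:

```python
# Sketch: flatten result.outputs into the output_urls shape from the example.

def flatten_outputs(outputs: dict) -> list:
    urls = []
    for node_id, channels in outputs.items():
        for channel, files in channels.items():
            for f in files:
                urls.append({"node_id": node_id, "channel": channel,
                             "url": f["url"], "filename": f["filename"]})
    return urls
```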

cancel_request

Cancel a queued or running request. Backs: POST /prod/v1/deployments/{deployment_id}/requests/{request_id}/cancel
| Parameter | Type | Required | Description |
|---|---|---|---|
| deployment_id | string | Yes | The deployment that owns this request |
| request_id | string | Yes | The request ID |
Returns cancelled if accepted, or not_cancellable if the request has already completed. Example response:
{
  "ok": true,
  "cancel": {
    "request_id": "5f1ba692-...",
    "status": "completed",
    "outcome": "cancelled"
  }
}

Advanced

call_instance_proxy

Call a ComfyUI backend endpoint on a live instance. Backs: POST /prod/v2/deployments/{deployment_id}/instances/{instance_id}/proxy/{comfy_backend_path}
| Parameter | Type | Required | Description |
|---|---|---|---|
| deployment_id | string | Yes | The deployment |
| instance_id | string | Yes | The running instance (from get_request_status when status is in_progress) |
| comfy_backend_path | string | Yes | The ComfyUI backend route, e.g., api/free |
| request_body | object | No | JSON body to send to the ComfyUI endpoint |
Example — unload models to free GPU memory:
{
  "deployment_id": "a1b2c3d4-...",
  "instance_id": "1697cb1a-...",
  "comfy_backend_path": "api/free",
  "request_body": {
    "unload_models": true,
    "free_memory": true
  }
}
Instance IDs are ephemeral — they are only valid while the instance is running. If the instance shuts down, submit a new request to get a fresh instance.
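For reference, the REST path behind this tool can be assembled from its template. A purely illustrative sketch; proxy_path is our helper, and the base URL and authentication are not shown here:

```python
# Sketch: compose the proxy endpoint path
# /prod/v2/deployments/{deployment_id}/instances/{instance_id}/proxy/{comfy_backend_path}

def proxy_path(deployment_id: str, instance_id: str, comfy_backend_path: str) -> str:
    return (f"/prod/v2/deployments/{deployment_id}"
            f"/instances/{instance_id}/proxy/{comfy_backend_path.lstrip('/')}")

print(proxy_path("a1b2c3d4", "1697cb1a", "api/free"))
# /prod/v2/deployments/a1b2c3d4/instances/1697cb1a/proxy/api/free
```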