https://mcp.runcomfy.com/mcp
Transport: Streamable HTTP
What you get
With the RunComfy MCP server, your AI assistant can:
- List and inspect deployments in your account, including workflow graphs and node schemas
- Create, update, and delete deployments with full control over hardware and autoscaling
- Run inference on any deployment using the async queue (submit, poll, fetch results)
- Cancel queued or running requests to stop unnecessary GPU usage
- Call ComfyUI backend endpoints on live instances via the instance proxy (e.g., free memory, unload models)
Available tools
The MCP server exposes 10 tools across three categories:
Deployment management
| Tool | Description |
|---|---|
| `list_deployments` | List all deployments in your account |
| `get_deployment` | Get a deployment’s details, including its workflow graph and node schemas |
| `create_deployment` | Create a new deployment from a cloud-saved ComfyUI workflow |
| `update_deployment` | Update a deployment’s hardware, scaling, or enabled status |
| `delete_deployment` | Permanently delete a deployment |
Inference
| Tool | Description |
|---|---|
| `submit_request` | Submit an async inference request to a deployment |
| `get_request_status` | Poll a request’s current status (`in_queue`, `in_progress`, `completed`) |
| `get_request_result` | Fetch the final outputs (hosted URLs) of a completed request |
| `cancel_request` | Cancel a queued or running request |
Advanced
| Tool | Description |
|---|---|
| `call_instance_proxy` | Call a ComfyUI backend endpoint on a live instance (e.g., `api/free` to unload models) |
Examples
Here are typical workflows an AI assistant performs with the RunComfy MCP:
Generate an image
“Generate an image of a mountain landscape using my Flux deployment”
- The assistant calls `list_deployments` to find your deployments
- It calls `get_deployment` with `include_payload=true` to inspect the workflow’s node IDs and input names
- It calls `submit_request` with the appropriate `overrides` (e.g., `{"6": {"inputs": {"text": "a mountain landscape at sunset"}}}`)
- It calls `get_request_result` to fetch the output image URL
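The submit-poll-fetch loop behind these steps can be sketched in Python. This is a minimal illustration, not the server's implementation: `call_tool` stands in for whatever MCP client your assistant uses, and the stubbed responses below are invented for the demo; only the tool names and status values come from the tables above.

```python
import time

def poll_until_complete(call_tool, deployment_id, request_id,
                        interval=2.0, timeout=300.0):
    """Poll get_request_status until the request completes, then fetch results."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = call_tool("get_request_status",
                           {"deployment_id": deployment_id,
                            "request_id": request_id})["status"]
        if status == "completed":
            return call_tool("get_request_result",
                             {"deployment_id": deployment_id,
                              "request_id": request_id})
        if status not in ("in_queue", "in_progress"):
            raise RuntimeError(f"request ended in unexpected state {status!r}")
        time.sleep(interval)
    raise TimeoutError("request did not complete within the timeout")

# Stubbed client for illustration: reports in_progress once, then completed.
_states = iter(["in_progress", "completed"])
def fake_call_tool(name, args):
    if name == "get_request_status":
        return {"status": next(_states)}
    return {"outputs": ["https://files.example.com/out.png"]}

result = poll_until_complete(fake_call_tool, "dep_123", "req_456", interval=0.01)
```

With a real MCP client, `fake_call_tool` would be replaced by an actual tool-call function and the result would carry the hosted output URLs described above.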
Create and run a new deployment
“Deploy my upscaler workflow and run it on this image”
- The assistant calls `create_deployment` with your `workflow_id`, `workflow_version`, and hardware choice
- It calls `submit_request` on the new deployment with the image input as a public URL in `overrides`
- It polls `get_request_status` until the job completes
- It calls `get_request_result` to return the upscaled image URL
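A hedged sketch of the arguments this flow passes. The parameter names follow the steps above; every value (IDs, version number, hardware name, node ID) is an illustrative placeholder, not a real option from the API.

```python
# Arguments for create_deployment; all values are placeholders.
create_args = {
    "workflow_id": "wf_abc123",      # illustrative cloud-saved workflow ID
    "workflow_version": 3,           # illustrative version
    "hardware": "a100",              # assumed hardware option name
}

# Arguments for submit_request on the new deployment; node ID "3" and the
# input name "image" depend on your workflow graph.
submit_args = {
    "deployment_id": "dep_new",
    "overrides": {"3": {"inputs": {"image": "https://example.com/input.png"}}},
}
```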
Check and cancel a running job
“What’s the status of my last request? Cancel it if it’s still queued.”
- The assistant calls `get_request_status` with the `deployment_id` and `request_id`
- If the status is `in_queue`, it calls `cancel_request`
- The cancel response confirms `cancelled` or `not_cancellable` (if already running)
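The conditional cancel above reduces to a small helper once a tool-call function exists. As before, `call_tool` and the stub responses are stand-ins for a real MCP client; only the tool names and status strings are taken from this page.

```python
def cancel_if_queued(call_tool, deployment_id, request_id):
    """Cancel a request only if it is still waiting in the queue."""
    args = {"deployment_id": deployment_id, "request_id": request_id}
    status = call_tool("get_request_status", args)["status"]
    if status == "in_queue":
        return call_tool("cancel_request", args)
    # Already running or finished; skip the cancel call entirely.
    return {"status": "skipped", "reason": status}

# Stubbed client: the request is still queued, so cancel succeeds.
def fake_call_tool(name, args):
    if name == "get_request_status":
        return {"status": "in_queue"}
    return {"status": "cancelled"}

outcome = cancel_if_queued(fake_call_tool, "dep_123", "req_456")
```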
How it works
- Your AI assistant sends MCP tool calls to `https://mcp.runcomfy.com/mcp` with your API token in the `Authorization: Bearer` header.
- The MCP server translates tool calls into RunComfy Serverless API requests (`api.runcomfy.net`) using your token — so you see only your deployments and billing is attributed to your account.
- Results flow back to the assistant as structured JSON with output URLs, status fields, and metadata.
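On the wire, an MCP tool call over Streamable HTTP is a JSON-RPC 2.0 POST with method `tools/call`. The sketch below builds such a request body; the tool name comes from this page, but the argument values and the token are placeholders, and your MCP client normally constructs all of this for you.

```python
import json

token = "YOUR_API_TOKEN"  # placeholder; use your real RunComfy API token
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}
body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "submit_request",
        "arguments": {
            "deployment_id": "dep_123",  # illustrative deployment ID
            "overrides": {"6": {"inputs": {"text": "a mountain landscape"}}},
        },
    },
}
payload = json.dumps(body)
# e.g. requests.post("https://mcp.runcomfy.com/mcp", headers=headers, data=payload)
```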
File inputs
When a workflow requires image, video, or audio inputs, pass them directly in the `overrides`:
- Public URL: `"image": "https://example.com/photo.jpg"`
- Base64 data URI: `"image": "data:image/jpeg;base64,/9j/4AAQ..."`
Next steps
- Quickstart — Set up the MCP server in your AI assistant
- Tool Reference — Detailed parameters and examples for all 10 tools
- FAQ — Common questions about the MCP server
