The Status endpoint returns an instance_id once the instance is active. With that ID, you can send authenticated requests through a proxy path to perform operational tasks such as unloading models or freeing GPU memory.
When to use: Start proxy calls once the job status becomes in_progress (after cold start) or completed, and you can read the instance_id from the Status endpoint (see Async Queue Endpoints: Monitoring Request Status).
Instance Proxy Endpoint
Base URL: https://api.runcomfy.net
Path Parameters
- deployment_id: string (required)
- instance_id: string (required)
- comfy_backend_path: string (required). The target ComfyUI backend route, e.g., api/free.
Using the Proxy
The proxy forwards your request to the live ComfyUI instance. You can call:
- ComfyUI Backend endpoints (e.g., GET /object_info, POST /api/prompt)
- ComfyUI Manager endpoints (e.g., POST /api/free)
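As a minimal sketch of a proxied call: the route shape below is an assumption assembled from the three documented path parameters, and Bearer-token auth is also assumed, so substitute the canonical URL template and auth scheme from the endpoint reference.

```python
import requests

API_BASE = "https://api.runcomfy.net"
API_TOKEN = "<YOUR_API_TOKEN>"     # auth scheme assumed to be a Bearer token
DEPLOYMENT_ID = "<DEPLOYMENT_ID>"
INSTANCE_ID = "<INSTANCE_ID>"      # read from the Status endpoint once active

def proxy_url(comfy_backend_path: str) -> str:
    # Assumed route shape built from the documented path parameters;
    # replace with the canonical template from the endpoint reference.
    return (f"{API_BASE}/deployments/{DEPLOYMENT_ID}"
            f"/instances/{INSTANCE_ID}/{comfy_backend_path}")

headers = {"Authorization": f"Bearer {API_TOKEN}"}

# ComfyUI backend endpoint: list the node definitions on the live instance.
resp = requests.get(proxy_url("object_info"), headers=headers)
resp.raise_for_status()
print(f"{len(resp.json())} node classes available")
```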
Free Memory / Unload Models
You can release GPU memory or unload models via ComfyUI Manager's native POST /api/free endpoint. This is useful between jobs in long-running sessions to ensure the next workflow starts from a clean state.
Request example
Unload models only:
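A sketch using Python's requests; the URL shape and Bearer auth are the same assumptions as above, and the unload_models flag name is taken from ComfyUI's free endpoint (confirm against your backend version).

```python
import requests

# Assumed proxy URL shape; substitute the canonical route template.
url = ("https://api.runcomfy.net/deployments/<DEPLOYMENT_ID>"
       "/instances/<INSTANCE_ID>/api/free")

# unload_models drops loaded model weights without freeing cached memory.
resp = requests.post(url,
                     headers={"Authorization": "Bearer <YOUR_API_TOKEN>"},
                     json={"unload_models": True})
resp.raise_for_status()
```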

Unload models and free memory:
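The same sketch with the additional free_memory flag (also taken from ComfyUI's free endpoint; confirm against your backend version).

```python
import requests

# Assumed proxy URL shape, as above.
url = ("https://api.runcomfy.net/deployments/<DEPLOYMENT_ID>"
       "/instances/<INSTANCE_ID>/api/free")

# free_memory additionally releases cached GPU memory for a clean slate.
resp = requests.post(url,
                     headers={"Authorization": "Bearer <YOUR_API_TOKEN>"},
                     json={"unload_models": True, "free_memory": True})
resp.raise_for_status()
```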

Response example
The proxy returns the backend's response. For the free endpoint, a successful call typically returns HTTP 200 with an empty body (behavior assumed from ComfyUI; confirm against your instance).
Lifecycle Notes and Errors
- An instance_id is valid only while its instance is running. If the instance shuts down due to keep-warm/idle timeout, subsequent proxy calls will fail. Submit a new job to start a fresh instance and obtain a new instance_id.
- Immediately after submitting a request, proxy calls may fail until the job status shows in_progress (after cold start) or completed in the Status endpoint. Poll the status in Async Queue Endpoints: Monitoring Request Status and retry the proxy call once it transitions, as in the sketch below.
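A minimal polling sketch of that retry pattern. Here get_job_status is a hypothetical helper standing in for a call to the Status endpoint documented in Async Queue Endpoints: Monitoring Request Status; the status and instance_id field names follow this section's wording.

```python
import time

def wait_for_instance(get_job_status, timeout_s=600, poll_s=5):
    """Poll until the job reports in_progress or completed, then return its instance_id.

    get_job_status: hypothetical callable returning the Status endpoint's
    payload as a dict, e.g. {"status": "in_progress", "instance_id": "..."}.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        job = get_job_status()
        if job.get("status") in ("in_progress", "completed") and job.get("instance_id"):
            return job["instance_id"]
        time.sleep(poll_s)  # instance may still be cold-starting; retry later
    raise TimeoutError("instance did not become active before the deadline")
```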
