When an API call fails, RunComfy returns an HTTP error status and a JSON body that may include an error_code and message.
This page lists common errors you may see when calling the Serverless API (LoRA).
11007 InferenceServiceConnectionError
Meaning: The gateway could not reach the inference backend for your deployment.
What to try:
- Check the deployment is enabled and has capacity to start.
- If your deployment can scale to zero (min_instances = 0), the first request may need a cold start; retry after a short delay (see the sketch after this list).
- If the problem persists, contact support with your deployment_id and request_id.
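Since scale-to-zero means the first request can arrive before any instance is running, a single delayed retry is usually enough. The sketch below is illustrative only: the submission URL, auth header, and error-body shape are assumptions, so take the real values from your Deployment > API tab.

```python
import time
import requests

# Hypothetical endpoint and auth header -- copy the real values from your
# deployment's API tab; everything here is an illustrative assumption.
SUBMIT_URL = "https://<your-endpoint-from-the-api-tab>/inference"
HEADERS = {"Authorization": "Bearer <api_token>"}

def submit_with_cold_start_retry(payload, wait_seconds=30):
    """Submit once; if the backend is unreachable (e.g. error_code 11007
    while a scaled-to-zero instance starts up), wait and retry one time."""
    resp = requests.post(SUBMIT_URL, json=payload, headers=HEADERS, timeout=60)
    if resp.status_code >= 500:
        is_json = resp.headers.get("content-type", "").startswith("application/json")
        body = resp.json() if is_json else {}
        # The exact type of error_code (int vs. string) may differ; check your response.
        if body.get("error_code") in (11007, "11007"):
            time.sleep(wait_seconds)  # give the cold instance time to come up
            resp = requests.post(SUBMIT_URL, json=payload, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    return resp.json()
```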
11008 InferenceServiceInferenceRequestError
Meaning: Your request was rejected as a client error (4xx).
What to try:
- Validate your payload against the deployment’s input schema (Deployment > API tab).
- For file inputs, make sure URLs are publicly accessible over HTTPS and do not require cookies/auth (a quick way to check this is sketched below).
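A simple way to rule out file-input problems is to fetch each URL the way the backend would: over HTTPS, with no cookies or auth headers attached. A minimal sketch using the Python requests library (not part of the RunComfy API itself):

```python
import requests

def is_publicly_fetchable(url: str) -> bool:
    """Pre-flight check for a file input: it must be HTTPS and reachable
    without cookies or auth, since the deployment fetches it anonymously."""
    if not url.lower().startswith("https://"):
        return False
    try:
        # stream=True avoids downloading the whole file just to verify access.
        with requests.get(url, stream=True, allow_redirects=True, timeout=15) as resp:
            return resp.status_code == 200
    except requests.RequestException:
        return False
```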
11004 InferenceServiceInferenceUnexpectedError
Meaning: The request could not be submitted due to an unexpected server-side error.
What to try:
- Retry with exponential backoff (a sketch follows this list).
- If it repeats, share the full error response with support.
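A minimal backoff sketch, reusing the hypothetical endpoint and headers from the 11007 example above. It retries only 5xx responses, because a 4xx client error will fail the same way on every resend:

```python
import time
import requests

def submit_with_backoff(url, payload, headers, max_attempts=5):
    """Retry a submission with exponential backoff on unexpected server-side
    failures (5xx, e.g. error_code 11004); 4xx errors are surfaced at once."""
    for attempt in range(max_attempts):
        resp = requests.post(url, json=payload, headers=headers, timeout=60)
        if resp.status_code < 500:
            resp.raise_for_status()   # raises on 4xx, which retrying won't fix
            return resp.json()
        if attempt < max_attempts - 1:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, 8s, ...
    resp.raise_for_status()           # still 5xx after the last attempt
```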
11011 InferenceServiceExecutionError
Meaning: The deployment started processing the request, but the run failed at execution time.
What to try:
- Inspect the error details returned by the API (a small helper is sketched below).
- Confirm that parameters (prompt, sizes, steps, etc.) are within the allowed ranges for your deployment.
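When a run fails at execution time, the useful details are in the JSON error body (error_code and message, as noted at the top of this page). A small helper to surface them, assuming the Python requests library:

```python
import requests

def describe_failure(resp: requests.Response) -> str:
    """Extract error_code and message from a failed response so the failure
    can be compared against the deployment's allowed parameter ranges."""
    try:
        body = resp.json()
    except ValueError:
        return f"HTTP {resp.status_code}: {resp.text[:200]}"
    return f"HTTP {resp.status_code}, error_code={body.get('error_code')}: {body.get('message', '')}"
```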
11012 InferenceServiceRequestMissing
Meaning: The request_id could not be found (for example it expired, was never accepted, or tracking state was lost).
What to try:
- Confirm you are polling the correct deployment and request_id (a polling sketch follows this list).
- Resubmit the request if needed.
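A polling sketch for reference. The status URL layout and the terminal status values are assumptions (this page only mentions a succeeded state), so take the real endpoint and field names from your Deployment > API tab:

```python
import time
import requests

# Hypothetical status endpoint -- the real URL and path layout come from
# your deployment's API tab; this shape is an assumption for illustration.
STATUS_URL = "https://<your-endpoint-from-the-api-tab>/requests/{request_id}/status"
HEADERS = {"Authorization": "Bearer <api_token>"}

def poll_status(request_id, interval=5, max_polls=120):
    """Poll the same request_id that the submission call returned; a
    mismatched deployment/request_id pair is a common cause of 11012."""
    url = STATUS_URL.format(request_id=request_id)
    for _ in range(max_polls):
        resp = requests.get(url, headers=HEADERS, timeout=30)
        if resp.status_code == 404:
            raise RuntimeError("request_id not found -- resubmit if needed")
        resp.raise_for_status()
        body = resp.json()
        # "succeeded" appears on this page; other terminal values are assumptions.
        if body.get("status") in ("succeeded", "failed"):
            return body
        time.sleep(interval)
    raise TimeoutError("request did not reach a terminal status in time")
```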
11005 InferenceServicePollingResultUnexpectedError
Meaning: Status polling failed unexpectedly.
What to try:
- Retry after a short delay.
- If it persists, check deployment health in the dashboard.
11006 InferenceServiceResultRetrievalUnexpectedError
Meaning: Result retrieval failed unexpectedly.
What to try:
- Retry after a short delay (outputs may still be uploading).
- If the request status is succeeded but outputs are missing, contact support.
Getting help
If you hit an error not listed here, contact [email protected] with the full error response plus your deployment_id and request_id.