When you deploy a ComfyUI workflow as a Serverless API (ComfyUI), you’ll work with three JSON files:
  • workflow.json — full UI export (graph + layout)
  • workflow_api.json — execution graph optimized for API calls (what deployments run)
  • object_info.json — schema registry for all nodes in a running ComfyUI instance
Examples in this guide use the RunComfy/FLUX workflow.

Quick comparison

| File | Contains | Typical use |
| --- | --- | --- |
| workflow.json | Nodes + links + canvas layout (groups, positions, UI metadata) | Sharing/editing in the ComfyUI UI |
| workflow_api.json | Only what’s required to execute (node inputs + connections) | Referenced by overrides when calling a Deployment |
| object_info.json | Input/output schemas for every node in the running instance | Validating inputs, building tools/UIs, debugging |

workflow.json

workflow.json is the full workflow export. It includes nodes, positions, links, and UI elements like groups. To download workflow.json:
  1. Open your workflow in the ComfyUI interface on RunComfy.
  2. Click the Workflow menu in the top-left.
  3. Select Export.
The file’s main content is in the "nodes" array, which lists each node as an object. Each node object includes keys such as:
  • "id" — a unique numeric identifier for the node
  • "type" — the node’s class, e.g., "SamplerCustomAdvanced"
  • "pos" — canvas position as [x, y]
  • "size" — node dimensions as [width, height]
  • "flags" — node states such as collapsed
  • "order" — execution order index
  • "mode" — node mode (often 0 for active)
  • "inputs" — input objects with "name", "type", and "link" (a connection ID)
  • "outputs" — output objects with "name", "type", "slot_index", and a "links" array of connection IDs
  • "properties" — node-specific settings
  • "widgets_values" — widget values, if any
For example:
{
  "id": 13,
  "type": "SamplerCustomAdvanced",
  "pos": [
    842,
    215
  ],
  "size": [
    355.20001220703125,
    106
  ],
  "flags": {},
  "order": 10,
  "mode": 0,
  "inputs": [
    {
      "name": "noise",
      "type": "NOISE",
      "link": 37
    },
    {
      "name": "guider",
      "type": "GUIDER",
      "link": 30
    },
    {
      "name": "sampler",
      "type": "SAMPLER",
      "link": 19
    },
    {
      "name": "sigmas",
      "type": "SIGMAS",
      "link": 20
    },
    {
      "name": "latent_image",
      "type": "LATENT",
      "link": 23
    }
  ],
  "outputs": [
    {
      "name": "output",
      "type": "LATENT",
      "slot_index": 0,
      "links": [
        24
      ]
    },
    {
      "name": "denoised_output",
      "type": "LATENT",
      "links": null
    }
  ],
  "properties": {
    "Node name for S&R": "SamplerCustomAdvanced"
  },
  "widgets_values": []
}

For the complete example, see flux_workflow.json.
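To see how these pieces fit together, here is a minimal Python sketch that traces a node’s inputs back to their source nodes. It assumes the top-level "links" array follows ComfyUI’s [link_id, from_node, from_slot, to_node, to_slot, type] layout, and uses an abbreviated stand-in for the real export:

```python
import json

# Minimal excerpt of a workflow.json export; the top-level "links" array
# (assumed here to follow ComfyUI's [id, from_node, from_slot, to_node,
# to_slot, type] layout) resolves the "link" IDs used by node inputs.
workflow = json.loads("""
{
  "nodes": [
    {"id": 13, "type": "SamplerCustomAdvanced", "order": 10, "mode": 0,
     "inputs":  [{"name": "noise", "type": "NOISE", "link": 37}],
     "outputs": [{"name": "output", "type": "LATENT", "links": [24]}]}
  ],
  "links": [[37, 12, 0, 13, 0, "NOISE"]]
}
""")

# Index links by their ID so each input's "link" can be traced to its source.
links = {link[0]: link for link in workflow.get("links", [])}

for node in workflow["nodes"]:
    print(f'node {node["id"]} ({node["type"]})')
    for inp in node.get("inputs", []):
        src = links.get(inp.get("link"))
        if src:
            print(f'  {inp["name"]} <- node {src[1]}, output slot {src[2]}')
```

The same traversal works on a full export; only the size of the "nodes" and "links" arrays changes.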

workflow_api.json

workflow_api.json is a streamlined workflow export designed for API execution. It removes UI-related details (node positions, sizes, groups) and keeps only:
  • node types (class_type)
  • node inputs (inputs)
  • connections between nodes
When you deploy a workflow on RunComfy, the platform stores this file internally and uses it as the basis for serverless API calls. During API calls, RunComfy references this stored file and applies your overrides without requiring you to resend the whole workflow. To get workflow_api.json:
  1. Open your workflow in the ComfyUI interface on RunComfy.
  2. Click the Workflow menu in the top-left.
  3. Select Export (API).
The file is a single JSON object where:
  • keys are node IDs (as strings)
  • values are node definitions (inputs, class_type, and optional _meta)
For example:
{
  "5": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {
      "title": "Empty Latent Image"
    }
  },
  "6": {
    "inputs": {
      "text": "an old tv with the word \"FLUX\" on it, sitting in an abandoned workshop environment, created in Unreal Engine 5 with Octane render in the style of ArtStation.",
      "clip": [
        "11",
        0
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Prompt)"
    }
  }
  // other nodes
}

For the complete example, see flux_workflow_api.json.
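Because the keys are node IDs, you can scan the file to find which node and input name to target in an override. A small sketch (the file contents are an abbreviated stand-in for the real export; list-valued inputs like ["11", 0] are connections to other nodes, not literal values you would override):

```python
import json

# Abbreviated workflow_api.json: node IDs as string keys.
workflow_api = json.loads("""
{
  "5": {"inputs": {"width": 1024, "height": 1024, "batch_size": 1},
        "class_type": "EmptyLatentImage"},
  "6": {"inputs": {"text": "an old tv ...", "clip": ["11", 0]},
        "class_type": "CLIPTextEncode",
        "_meta": {"title": "CLIP Text Encode (Prompt)"}}
}
""")

# Collect each node's literal inputs; list values are node connections
# ([source_node_id, output_slot]), so they are filtered out here.
literal_inputs = {}
for node_id, node in workflow_api.items():
    literal_inputs[node_id] = {k: v for k, v in node["inputs"].items()
                               if not isinstance(v, list)}
    print(node_id, node["class_type"], literal_inputs[node_id])
```

Running this against your own export gives you the node ID / input name pairs you can target with overrides.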

object_info.json

object_info.json is a schema catalog for a running ComfyUI instance. It includes each node’s:
  • required/optional inputs
  • accepted types and ranges
  • output types
  • tooltips/metadata
Use this file to validate inputs (for example in your own UI), or to build tools that dynamically generate/modify workflows. Fetch it from a running server:
  1. Launch a ComfyUI instance on RunComfy.
  2. Note the server ID.
  3. Visit https://<server_id>-comfyui.runcomfy.com/object_info in your browser.
For example:
{
  "KSampler": {
    "input": {
      "required": {
        "model": [
          "MODEL",
          {
            "tooltip": "The model used for denoising the input latent."
          }
        ],
        "seed": [
          "INT",
          {
            "default": 0,
            "min": 0,
            "max": 18446744073709551615,
            "control_after_generate": true,
            "tooltip": "The random seed used for creating the noise."
          }
        ],
        "steps": [
          "INT",
          {
            "default": 20,
            "min": 1,
            "max": 10000,
            "tooltip": "The number of steps used in the denoising process."
          }
        ],
        "cfg": [
          "FLOAT",
          {
            "default": 8.0,
            "min": 0.0,
            "max": 100.0,
            "step": 0.1,
            "round": 0.01,
            "tooltip": "The Classifier-Free Guidance scale balances creativity and adherence to the prompt. Higher values result in images more closely matching the prompt however too high values will negatively impact quality."
          }
        ],
        "sampler_name": [
          [
            "euler",
            "euler_cfg_pp",
            "euler_ancestral",
            "euler_ancestral_cfg_pp",
            "heun",
            "heunpp2",
            "dpm_2",
            "dpm_2_ancestral",
            "lms",
            "dpm_fast",
            "dpm_adaptive",
            "dpmpp_2s_ancestral",
            "dpmpp_2s_ancestral_cfg_pp",
            "dpmpp_sde",
            "dpmpp_sde_gpu",
            "dpmpp_2m",
            "dpmpp_2m_cfg_pp",
            "dpmpp_2m_sde",
            "dpmpp_2m_sde_gpu",
            "dpmpp_3m_sde",
            "dpmpp_3m_sde_gpu",
            "ddpm",
            "lcm",
            "ipndm",
            "ipndm_v",
            "deis",
            "res_multistep",
            "res_multistep_cfg_pp",
            "res_multistep_ancestral",
            "res_multistep_ancestral_cfg_pp",
            "gradient_estimation",
            "gradient_estimation_cfg_pp",
            "er_sde",
            "seeds_2",
            "seeds_3",
            "sa_solver",
            "sa_solver_pece",
            "ddim",
            "uni_pc",
            "uni_pc_bh2"
          ],
          {
            "tooltip": "The algorithm used when sampling, this can affect the quality, speed, and style of the generated output."
          }
        ],
        "scheduler": [
          [
            "simple",
            "sgm_uniform",
            "karras",
            "exponential",
            "ddim_uniform",
            "beta",
            "normal",
            "linear_quadratic",
            "kl_optimal"
          ],
          {
            "tooltip": "The scheduler controls how noise is gradually removed to form the image."
          }
        ],
        "positive": [
          "CONDITIONING",
          {
            "tooltip": "The conditioning describing the attributes you want to include in the image."
          }
        ],
        "negative": [
          "CONDITIONING",
          {
            "tooltip": "The conditioning describing the attributes you want to exclude from the image."
          }
        ],
        "latent_image": [
          "LATENT",
          {
            "tooltip": "The latent image to denoise."
          }
        ],
        "denoise": [
          "FLOAT",
          {
            "default": 1.0,
            "min": 0.0,
            "max": 1.0,
            "step": 0.01,
            "tooltip": "The amount of denoising applied, lower values will maintain the structure of the initial image allowing for image to image sampling."
          }
        ]
      }
    },
    "input_order": {
      "required": [
        "model",
        "seed",
        "steps",
        "cfg",
        "sampler_name",
        "scheduler",
        "positive",
        "negative",
        "latent_image",
        "denoise"
      ]
    },
    "output": [
      "LATENT"
    ],
    "output_is_list": [
      false
    ],
    "output_name": [
      "LATENT"
    ],
    "name": "KSampler",
    "display_name": "KSampler",
    "description": "Uses the provided model, positive and negative conditioning to denoise the latent image.",
    "python_module": "nodes",
    "category": "sampling",
    "output_node": false,
    "output_tooltips": [
      "The denoised latent."
    ]
  }
  // other nodes
}

For the complete example, see flux_object_info.json.
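As a minimal sketch of input validation against this schema, the hypothetical `validate` helper below checks a value against a required-input spec: a list in the first position means an enumeration of allowed choices, while "INT"/"FLOAT" specs carry optional "min"/"max" bounds. The `object_info` dict is an abbreviated excerpt of the KSampler entry shown above:

```python
# Abbreviated excerpt of object_info.json (KSampler's required inputs).
object_info = {
    "KSampler": {
        "input": {"required": {
            "steps": ["INT", {"default": 20, "min": 1, "max": 10000}],
            "cfg": ["FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}],
            "sampler_name": [["euler", "dpmpp_2m", "ddim"], {}],
        }}
    }
}

def validate(node_class, input_name, value, info):
    """Hypothetical helper: check one input value against its schema entry."""
    spec = info[node_class]["input"]["required"][input_name]
    type_or_choices = spec[0]
    opts = spec[1] if len(spec) > 1 else {}
    # A list in the first slot is an enumeration of allowed values.
    if isinstance(type_or_choices, list):
        return value in type_or_choices
    # Numeric types may declare min/max bounds.
    if type_or_choices in ("INT", "FLOAT"):
        return opts.get("min", float("-inf")) <= value <= opts.get("max", float("inf"))
    # Other types (MODEL, CONDITIONING, ...) are connections, not literals.
    return True
```

This is the kind of check a custom UI might run before submitting an override, catching out-of-range steps or an unknown sampler name client-side.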

Files in API calls

When making API requests to a deployed workflow:
  • use workflow_api.json to find node IDs and inputs
  • send only the values you want to change under overrides (you don’t include the full file in your request)
This keeps requests efficient and makes it easy to evolve your workflow over time. For exact formatting and examples, refer to Async Queue Endpoints.
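As a sketch, an overrides object keyed by node ID with only the changed inputs might look like the following; the node IDs, input names, and prompt text are illustrative, and the exact request envelope around it is defined in the Async Queue Endpoints reference:

```python
import json

# Hypothetical overrides payload: node IDs ("5", "6") and input names come
# from your workflow_api.json; only the values being changed are included.
overrides = {
    "6": {"inputs": {"text": "a neon sign with the word FLUX on a rainy street"}},
    "5": {"inputs": {"width": 768, "height": 768}},
}

print(json.dumps(overrides, indent=2))
```

Inputs you omit keep the values stored in the deployed workflow_api.json, so a request only needs the handful of fields that actually change between runs.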