RunComfy allows you to build custom ComfyUI workflows and deploy them as scalable, serverless API endpoints. To work with these workflows programmatically, you need to understand three JSON files: workflow.json, workflow_api.json, and object_info.json. These files define your workflow’s structure, inputs, and node details. This guide explains each file, including what it does, how to get it, and how to read its contents. Examples come from the RunComfy/FLUX workflow.

workflow.json

The workflow.json file holds the full layout of your ComfyUI workflow: nodes, positions, links, and UI elements like groups. To download workflow.json, open your workflow in the ComfyUI interface on RunComfy, click the "Workflow" menu in the top-left corner, and select "Export" from the dropdown.

The file's main content is the "nodes" array, which lists each node as an object. Each node object includes keys such as:

- "id": a unique number identifying the node
- "type": the node's class, e.g., "SamplerCustomAdvanced"
- "pos": an [x, y] array for the node's canvas position
- "size": a [width, height] array for the node's dimensions
- "flags": an object for node states such as collapsed
- "order": the node's execution order index
- "mode": the node mode, usually 0 for active
- "inputs": an array of input objects, each with "name", "type", and "link" (a connection ID)
- "outputs": an array of output objects, each with "name", "type", "slot_index", and a "links" array of connection IDs
- "properties": an object for node-specific settings
- "widgets_values": an array of widget values, if any

For example:
{
  "id": 13,
  "type": "SamplerCustomAdvanced",
  "pos": [
    842,
    215
  ],
  "size": [
    355.20001220703125,
    106
  ],
  "flags": {},
  "order": 10,
  "mode": 0,
  "inputs": [
    {
      "name": "noise",
      "type": "NOISE",
      "link": 37
    },
    {
      "name": "guider",
      "type": "GUIDER",
      "link": 30
    },
    {
      "name": "sampler",
      "type": "SAMPLER",
      "link": 19
    },
    {
      "name": "sigmas",
      "type": "SIGMAS",
      "link": 20
    },
    {
      "name": "latent_image",
      "type": "LATENT",
      "link": 23
    }
  ],
  "outputs": [
    {
      "name": "output",
      "type": "LATENT",
      "slot_index": 0,
      "links": [
        24
      ]
    },
    {
      "name": "denoised_output",
      "type": "LATENT",
      "links": null
    }
  ],
  "properties": {
    "Node name for S&R": "SamplerCustomAdvanced"
  },
  "widgets_values": []
}

For the complete example, see flux_workflow.json.
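
To inspect the graph programmatically, a minimal Python sketch like the following works. The filename flux_workflow.json and the link-entry layout noted in the comment are assumptions based on typical ComfyUI exports:

import json

# Load the exported graph (path is an assumption; point it at your own export)
with open("flux_workflow.json") as f:
    workflow = json.load(f)

# List every node in execution order with its id and type
for node in sorted(workflow["nodes"], key=lambda n: n["order"]):
    print(f'{node["order"]:>3}  id={node["id"]:<4}  {node["type"]}')

# The top-level "links" array wires outputs to inputs; in ComfyUI exports each
# entry is typically [link_id, from_node, from_slot, to_node, to_slot, type].
links = {entry[0]: entry for entry in workflow.get("links", [])}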

workflow_api.json

The workflow_api.json file is a streamlined version of your workflow designed specifically for API integration. It removes all UI-related details, such as node positions and sizes, and retains only the node types, inputs, and connections needed for execution, which makes it lightweight and suitable for programmatic use. When you deploy a workflow on RunComfy, the platform stores this file internally and uses it as the foundation for serverless API endpoints. During API calls, RunComfy references the stored file to process requests, letting you override specific inputs without resubmitting the entire workflow. To get workflow_api.json, open your workflow in the ComfyUI interface on RunComfy, click the "Workflow" menu in the top-left corner, and select "Export (API)" from the dropdown.

The file is a single JSON object whose keys are node IDs (as strings); each value is an object describing the node with keys such as:

- "inputs": an object mapping input names to values, either scalars (numbers or strings) or ["other_node_id", output_index] arrays that reference another node's output
- "class_type": the node's type as a string, e.g., "CLIPTextEncode"
- "_meta" (optional): metadata, typically containing a "title" for the node

For example:
{
  "5": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {
      "title": "Empty Latent Image"
    }
  },
  "6": {
    "inputs": {
      "text": "n old tv with the word \\"FLUX\\" on it, sitting in an abandoned workshop environment, created in Unreal Engine 5 with Octane render in the style of ArtStation.",
      "clip": [
        "11",
        0
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Prompt)"
    }
  }
  // other nodes
}

For the complete example, see flux_workflow_api.json.
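
Because inputs are plain key-value pairs, adjusting a prompt before submission is straightforward. Here is a minimal sketch, assuming the export above is saved locally as flux_workflow_api.json (node IDs "5" and "6" match the excerpt):

import json

with open("flux_workflow_api.json") as f:
    prompt = json.load(f)

# Scalar inputs can be replaced directly
prompt["6"]["inputs"]["text"] = "a neon sign with the word FLUX in a rainy alley"
prompt["5"]["inputs"]["width"] = 768
prompt["5"]["inputs"]["height"] = 768

# Connections keep the ["other_node_id", output_index] form
assert prompt["6"]["inputs"]["clip"] == ["11", 0]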

object_info.json

The object_info.json file provides a comprehensive catalog of all nodes available in your running ComfyUI instance. It details each node's input requirements, output types, descriptions, and other metadata, serving as a schema reference for the entire system. Use this file to check node requirements, such as mandatory inputs and their types, or to build custom tools and integrations that dynamically generate or modify workflows. To fetch it from a running server, launch a ComfyUI instance on RunComfy, note the server ID, and visit https://<server_id>-comfyui.runcomfy.com/object_info in your browser to retrieve the JSON directly.

The file is a single JSON object whose keys are node class names (as strings, e.g., "KSampler"); each value is an object describing the node's interface and properties with keys such as:

- "input": an object with "required" and optionally "optional" sections, where each input maps to a [type, options] array; the options object holds defaults, mins, maxes, tooltips, etc.
- "input_order": an object specifying the order of required and optional inputs
- "output": an array of output types, e.g., ["LATENT"]
- "output_is_list": a boolean array indicating whether each output is a list
- "output_name": an array of output names
- "name": the node's internal name
- "display_name": the user-friendly name
- "description": a brief explanation of the node's function
- "python_module": the module where the node is defined
- "category": the node's category for organization
- "output_node": a boolean indicating whether it is an output node
- optionally, other fields such as "api_node" for API-specific nodes

For example:
{
  "KSampler": {
    "input": {
      "required": {
        "model": [
          "MODEL",
          {
            "tooltip": "The model used for denoising the input latent."
          }
        ],
        "seed": [
          "INT",
          {
            "default": 0,
            "min": 0,
            "max": 18446744073709551615,
            "control_after_generate": true,
            "tooltip": "The random seed used for creating the noise."
          }
        ],
        "steps": [
          "INT",
          {
            "default": 20,
            "min": 1,
            "max": 10000,
            "tooltip": "The number of steps used in the denoising process."
          }
        ],
        "cfg": [
          "FLOAT",
          {
            "default": 8.0,
            "min": 0.0,
            "max": 100.0,
            "step": 0.1,
            "round": 0.01,
            "tooltip": "The Classifier-Free Guidance scale balances creativity and adherence to the prompt. Higher values result in images more closely matching the prompt however too high values will negatively impact quality."
          }
        ],
        "sampler_name": [
          [
            "euler",
            "euler_cfg_pp",
            "euler_ancestral",
            "euler_ancestral_cfg_pp",
            "heun",
            "heunpp2",
            "dpm_2",
            "dpm_2_ancestral",
            "lms",
            "dpm_fast",
            "dpm_adaptive",
            "dpmpp_2s_ancestral",
            "dpmpp_2s_ancestral_cfg_pp",
            "dpmpp_sde",
            "dpmpp_sde_gpu",
            "dpmpp_2m",
            "dpmpp_2m_cfg_pp",
            "dpmpp_2m_sde",
            "dpmpp_2m_sde_gpu",
            "dpmpp_3m_sde",
            "dpmpp_3m_sde_gpu",
            "ddpm",
            "lcm",
            "ipndm",
            "ipndm_v",
            "deis",
            "res_multistep",
            "res_multistep_cfg_pp",
            "res_multistep_ancestral",
            "res_multistep_ancestral_cfg_pp",
            "gradient_estimation",
            "gradient_estimation_cfg_pp",
            "er_sde",
            "seeds_2",
            "seeds_3",
            "sa_solver",
            "sa_solver_pece",
            "ddim",
            "uni_pc",
            "uni_pc_bh2"
          ],
          {
            "tooltip": "The algorithm used when sampling, this can affect the quality, speed, and style of the generated output."
          }
        ],
        "scheduler": [
          [
            "simple",
            "sgm_uniform",
            "karras",
            "exponential",
            "ddim_uniform",
            "beta",
            "normal",
            "linear_quadratic",
            "kl_optimal"
          ],
          {
            "tooltip": "The scheduler controls how noise is gradually removed to form the image."
          }
        ],
        "positive": [
          "CONDITIONING",
          {
            "tooltip": "The conditioning describing the attributes you want to include in the image."
          }
        ],
        "negative": [
          "CONDITIONING",
          {
            "tooltip": "The conditioning describing the attributes you want to exclude from the image."
          }
        ],
        "latent_image": [
          "LATENT",
          {
            "tooltip": "The latent image to denoise."
          }
        ],
        "denoise": [
          "FLOAT",
          {
            "default": 1.0,
            "min": 0.0,
            "max": 1.0,
            "step": 0.01,
            "tooltip": "The amount of denoising applied, lower values will maintain the structure of the initial image allowing for image to image sampling."
          }
        ]
      }
    },
    "input_order": {
      "required": [
        "model",
        "seed",
        "steps",
        "cfg",
        "sampler_name",
        "scheduler",
        "positive",
        "negative",
        "latent_image",
        "denoise"
      ]
    },
    "output": [
      "LATENT"
    ],
    "output_is_list": [
      false
    ],
    "output_name": [
      "LATENT"
    ],
    "name": "KSampler",
    "display_name": "KSampler",
    "description": "Uses the provided model, positive and negative conditioning to denoise the latent image.",
    "python_module": "nodes",
    "category": "sampling",
    "output_node": false,
    "output_tooltips": [
      "The denoised latent."
    ]
  }
  // other nodes
}

For the complete example, see flux_object_info.json.
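
A short sketch of fetching the schema and validating a value against it, assuming a running instance (SERVER_ID is a placeholder) and the third-party requests library:

import requests  # third-party: pip install requests

SERVER_ID = "your-server-id"  # placeholder for your RunComfy server ID
url = f"https://{SERVER_ID}-comfyui.runcomfy.com/object_info"
object_info = requests.get(url, timeout=30).json()

# Each input maps to a [type, options] array; pull KSampler's "steps" schema
steps_type, steps_opts = object_info["KSampler"]["input"]["required"]["steps"]

def in_range(value, opts):
    """Check a numeric value against the schema's min/max, when present."""
    return opts.get("min", float("-inf")) <= value <= opts.get("max", float("inf"))

print(steps_type)                # INT
print(in_range(30, steps_opts))  # True  (1 <= 30 <= 10000)
print(in_range(0, steps_opts))   # False (below the min of 1)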

Files in API Calls

When making API requests to a deployed workflow, reference the workflow_api.json file to identify node IDs and their inputs. You don't need to include the full workflow_api JSON in your request; instead, provide only the variable parameters you want to modify using the overrides object in the request body. This keeps your API calls efficient and focused on what changes. For the exact formatting of overrides in requests, refer to the Async Queue Endpoints documentation in the API Reference. Additionally, if you expose parameters externally in your application and need their valid ranges, defaults, types, or constraints, consult object_info.json for the relevant node schemas to ensure accurate validation and user guidance.
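
For orientation, an overrides payload generally mirrors the workflow_api.json structure, keyed by node ID. The exact request body and endpoint are defined in the Async Queue Endpoints documentation, so treat this Python shape as illustrative only:

# Illustrative shape only; confirm field names against the Async Queue
# Endpoints documentation. Node IDs and input names come from workflow_api.json.
overrides = {
    "6": {"inputs": {"text": "a vintage radio with the word FLUX on it"}},
    "5": {"inputs": {"width": 768, "height": 768}},
}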