Install the CLI, sign in, and generate your first image — three commands.

1. Install

The fastest path is npx, which downloads and runs the CLI without a global install:
npx -y @runcomfy/cli --version
For repeat use, install globally or use the curl installer — see Install for all four options.
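
For repeat use, a global install avoids re-downloading the CLI on every invocation. A minimal sketch, assuming the package is published to npm under the same name used by the npx command above:

```shell
# Global install via npm (assumes the @runcomfy/cli package name shown above)
npm install -g @runcomfy/cli

# Confirm the binary is on your PATH
runcomfy --version
```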

2. Sign in

runcomfy login
This prints a verification code in your terminal and opens https://www.runcomfy.com/cli-auth in your browser. Type or paste the code from the terminal into the page, then click Authorize. The CLI saves a token to ~/.config/runcomfy/token.json (mode 0600).

If you already have an API token from your Profile, set RUNCOMFY_TOKEN=<token> and skip runcomfy login entirely. See Authentication for the full flow.

Verify:
runcomfy whoami
# 📛 you@example.com
#    token type: cli
#    user id: ...
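
In CI or other non-interactive environments, the RUNCOMFY_TOKEN path mentioned above avoids the browser flow. A sketch (the token value is a placeholder):

```shell
# Use a pre-generated API token instead of the interactive login
# (token copied from your Profile page)
export RUNCOMFY_TOKEN="<your-api-token>"

# With the variable set, no runcomfy login is needed
runcomfy whoami
```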

3. Generate an image

runcomfy run openai/gpt-image-2/text-to-image \
  --input '{"prompt": "a small purple cat at sunset, photorealistic"}'
What you’ll see:
⏳ Submitting request to openai/gpt-image-2/text-to-image
   request_id: 8a3f...
⏳ Polling status (every 2s)...
   in_queue
   in_progress
   completed
✅ completed
{
  "images": [
    "https://playgrounds-storage-public.runcomfy.net/.../result.png"
  ]
}
📥 Downloading 1 file(s) to .
   ./result.png
The result file is saved to your current directory by default. Override the destination with --output-dir ./out, or skip downloading entirely with --no-download.
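
If you script around the CLI, the completed-run JSON shown above can be parsed with any JSON library. A minimal Python sketch, assuming output shaped like the `{"images": [...]}` payload in the example (the URL here is a placeholder, not a real result):

```python
import json

# JSON as printed by a completed run (shape taken from the example above)
raw = '{"images": ["https://playgrounds-storage-public.runcomfy.net/example/result.png"]}'

result = json.loads(raw)
# Collect the result URLs; a run may return more than one image
urls = result.get("images", [])
for url in urls:
    print(url)
```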

What’s next

  • Browse models at runcomfy.com/models — every model page shows its model_id and Input schema.
  • See runcomfy run for --input-file, --no-wait, and --output-dir.
  • Pipe-friendly mode: runcomfy --output json run ... --no-wait | jq -r .request_id prints just the request id, ready for scripting.
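
The pipe-friendly mode above composes into a small script. A sketch, assuming jq is installed and using the same model and flags shown earlier:

```shell
# Submit without waiting and capture only the request id
request_id=$(runcomfy --output json run openai/gpt-image-2/text-to-image \
  --input '{"prompt": "a small purple cat at sunset, photorealistic"}' \
  --no-wait | jq -r .request_id)

echo "submitted: $request_id"
```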