The Model API is billed per request (on-demand inference): you do not pay for idle GPUs, and you never create or manage deployments. For up-to-date pricing for a specific model or pipeline, check its model page in the Models catalog; the UI shows the current rate and billing unit. For LoRAs (Trainer > Run LoRA), pricing is shown on the corresponding base model's page.
What affects cost
Cost depends on the model/pipeline and the work it performs. Common drivers include:
- model family (some pipelines are heavier than others)
- output size (resolution / frames)
- video length / FPS
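For video models, these drivers compound. The sketch below shows how length, FPS, and resolution combine into total rendered work; the numbers are hypothetical illustrations, not actual rates or billing units (those are on each model's page):

```python
def total_frames(seconds: float, fps: int) -> int:
    """Video length times frame rate gives the frame count a pipeline renders."""
    return int(seconds * fps)

def total_pixels(width: int, height: int, frames: int) -> int:
    """Resolution times frame count approximates raw output work:
    doubling resolution roughly quadruples it."""
    return width * height * frames

# Hypothetical 5-second clip at 24 FPS, 1280x720
frames = total_frames(5, 24)             # 120 frames
work = total_pixels(1280, 720, frames)   # pixels rendered in total
print(frames, work)
```

Doubling either the clip length or the FPS doubles the frame count, so the two multiply rather than add.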
How to estimate
- Look up the model in the Models catalog
- Use the model’s pricing unit as your baseline
- Multiply by expected runtime or output count (depending on the model)
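The steps above reduce to a single multiplication. A minimal sketch, assuming a hypothetical per-second rate; the real billing unit and rate for any given model come from its page in the Models catalog:

```python
def estimate_cost(unit_price: float, units: float) -> float:
    """Estimate on-demand cost: price per billing unit times expected units.

    The billing unit depends on the model (e.g. per output second,
    per image, per request) -- use whatever the model page shows.
    """
    return unit_price * units

# Hypothetical example: a model billed at $0.05 per output second,
# generating a 10-second clip.
print(f"${estimate_cost(0.05, 10):.2f}")
```

For per-image or per-request models, `units` is simply the expected output count instead of runtime.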
