
Comfy MCP Server: Seamless AI Generation & Scalable Visuals

Comfy MCP Server turns text prompts into stunning visuals at scale, using the FastMCP framework to drive remote Comfy UI workflows.


About Comfy MCP Server

What is Comfy MCP Server: Seamless AI Generation & Scalable Visuals?

Comfy MCP Server is a streamlined tool leveraging the FastMCP framework to generate high-quality images from text prompts via remote Comfy UI workflows. Designed for developers and AI artists, it bridges prompt engineering with visual output through a scalable, server-based architecture. Unlike standalone UI tools, it enables programmatic control over complex image generation pipelines.

How to Use Comfy MCP Server: Seamless AI Generation & Scalable Visuals?

  1. Install Dependencies: Use uvx mcp[cli] to set up the required Python environment with the uv package manager.
  2. Configure Environment: Set variables such as COMFY_URL (pointing to your Comfy server) and the path to your workflow's API export JSON.
  3. Launch Server: Run uvx comfy-mcp-server to start the service. For desktop integrations such as Claude, configure JSON with command-line arguments and environment mappings (see the quick-start below).
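
A condensed quick-start combining the three steps (all values are placeholders; the Configuration section below explains each variable):

uvx mcp[cli]
export COMFY_URL=http://your-comfy-server-url:port
export COMFY_WORKFLOW_JSON_FILE=/path/to/the/comfyui_workflow_export.json
export PROMPT_NODE_ID=6
export OUTPUT_NODE_ID=9
export OUTPUT_MODE=file
uvx comfy-mcp-server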

Comfy MCP Server Features

Key Features of Comfy MCP Server: Seamless AI Generation & Scalable Visuals

  • Seamless Workflow Execution: Automates prompt submission, status polling, and image retrieval through standardized API interactions.
  • Flexible Output Control: Supports two output modes (url or file), selected via the OUTPUT_MODE environment variable.
  • AI-Powered Prompting: Integrates with Ollama for locally hosted LLM-based prompt generation, reducing cloud dependency.
  • Enterprise-Ready Config: Environment-variable-driven configuration keeps deployments secure and reproducible.

Use Cases of Comfy MCP Server: Seamless AI Generation & Scalable Visuals

Power creative workflows with:

  • Automated product mockups for e-commerce platforms
  • Real-time style transfer APIs for photo editing apps
  • Data visualization pipelines generating dynamic infographics
  • Interactive educational tools showing concept-to-image transformations

Comfy MCP Server FAQ

FAQ about Comfy MCP Server: Seamless AI Generation & Scalable Visuals

  • Q: How do I troubleshoot connection issues?
    Verify that COMFY_URL is correct and that firewall settings allow access to the Comfy server's host and port.
  • Q: Can I use this with private models?
    Yes. Models are resolved by the remote Comfy server, so any model installed there, including private ones, is available to your exported workflow.
  • Q: What's the difference between output modes?
    url mode returns a link to the generated image on the Comfy server, while file mode returns the image data itself (better for downstream processing).
  • Q: Does it support GPU acceleration?
    Generation runs on the remote Comfy server, so acceleration depends on that server's GPUs and drivers rather than on this MCP server.

Content

Comfy MCP Server


A server using FastMCP framework to generate images based on prompts via a remote Comfy server.

Overview

This script sets up a server using the FastMCP framework to generate images based on prompts using a specified workflow. It interacts with a remote Comfy server to submit prompts and retrieve generated images.

Prerequisites

  • uv package and project manager for Python.
  • Workflow file exported from Comfy UI. This repository includes a sample Flux-Dev-ComfyUI-Workflow.json, provided only as a reference; export your own workflow from Comfy UI and set the environment variables accordingly.

You can install the required packages for local development (in shells such as zsh, quote the brackets: "mcp[cli]"):

uvx mcp[cli]

Configuration

Set the following environment variables:

  • COMFY_URL to point to your Comfy server URL.
  • COMFY_WORKFLOW_JSON_FILE to the absolute path of the API-format JSON export of the Comfy UI workflow.
  • PROMPT_NODE_ID to the ID of the text prompt node.
  • OUTPUT_NODE_ID to the ID of the output node with the final image.
  • OUTPUT_MODE to either url or file to select the desired output format.
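
As a rough illustration, a startup check for these required variables might look like the following sketch (the server performs its own validation; this is not its actual code):

import os

# The five required settings described above.
REQUIRED = [
    "COMFY_URL",
    "COMFY_WORKFLOW_JSON_FILE",
    "PROMPT_NODE_ID",
    "OUTPUT_NODE_ID",
    "OUTPUT_MODE",
]

# Fail fast with a clear message if anything is missing.
missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    raise SystemExit("Missing environment variables: " + ", ".join(missing))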

Optionally, if you have an Ollama server running, you can connect to it for prompt generation.

  • OLLAMA_API_BASE to the URL where Ollama is running.
  • PROMPT_LLM to the name of the model hosted on Ollama for prompt generation.

Example:

export COMFY_URL=http://your-comfy-server-url:port
export COMFY_WORKFLOW_JSON_FILE=/path/to/the/comfyui_workflow_export.json
export PROMPT_NODE_ID=6 # use the correct node id here
export OUTPUT_NODE_ID=9 # use the correct node id here
export OUTPUT_MODE=file
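
If you use the optional Ollama integration, also export the two variables described above (placeholder values; 11434 is Ollama's default port):

export OLLAMA_API_BASE=http://your-ollama-server:11434
export PROMPT_LLM=your-model-name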

Usage

Launch Comfy MCP Server with the following command:

uvx comfy-mcp-server

Example Claude Desktop Config

{
  "mcpServers": {
    "Comfy MCP Server": {
      "command": "/path/to/uvx",
      "args": [
        "comfy-mcp-server"
      ],
      "env": {
        "COMFY_URL": "http://your-comfy-server-url:port",
        "COMFY_WORKFLOW_JSON_FILE": "/path/to/the/comfyui_workflow_export.json",
        "PROMPT_NODE_ID": "6",
        "OUTPUT_NODE_ID": "9",
        "OUTPUT_MODE": "file",
      }
    }
  }
}

Functionality

generate_image(prompt: str, ctx: Context) -> Image | str

This function generates an image from the supplied prompt. It follows these steps (a sketch of the HTTP interaction appears after the list):

  1. Checks that all required environment variables are set.
  2. Loads the workflow template from the JSON export file.
  3. Submits the prompt to the Comfy server.
  4. Polls the server for the status of the prompt processing.
  5. Retrieves and returns the generated image once it is ready.
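
A minimal sketch of steps 3–5, assuming the standard Comfy UI HTTP API (POST /prompt, GET /history/<prompt_id>, GET /view); the names and structure here are illustrative, not the server's actual internals:

import json
import os
import time
import urllib.parse
import urllib.request

def submit_and_fetch(workflow: dict) -> bytes:
    comfy_url = os.environ["COMFY_URL"]
    output_node_id = os.environ["OUTPUT_NODE_ID"]

    # Step 3: submit the workflow; Comfy returns a prompt_id to poll on.
    req = urllib.request.Request(
        f"{comfy_url}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        prompt_id = json.load(resp)["prompt_id"]

    # Step 4: poll history until the finished prompt appears.
    while True:
        with urllib.request.urlopen(f"{comfy_url}/history/{prompt_id}") as resp:
            history = json.load(resp)
        if prompt_id in history:
            break
        time.sleep(1)

    # Step 5: download the first image recorded for the output node.
    image_ref = history[prompt_id]["outputs"][output_node_id]["images"][0]
    query = urllib.parse.urlencode({
        "filename": image_ref["filename"],
        "subfolder": image_ref["subfolder"],
        "type": image_ref["type"],
    })
    with urllib.request.urlopen(f"{comfy_url}/view?{query}") as resp:
        return resp.read()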

generate_prompt(topic: str, ctx: Context) -> str

This function generates a comprehensive image generation prompt from a specified topic.
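
Given the langchain and langchain-ollama dependencies listed under Dependencies below, the chain plausibly resembles this sketch (the template wording and function body are illustrative):

import os

from langchain_core.prompts import PromptTemplate
from langchain_ollama import OllamaLLM

# Connect to the Ollama server named in the optional environment variables.
llm = OllamaLLM(
    model=os.environ["PROMPT_LLM"],
    base_url=os.environ["OLLAMA_API_BASE"],
)

# Expand a short topic into a detailed image generation prompt.
template = PromptTemplate.from_template(
    "Write a detailed image generation prompt about: {topic}"
)
chain = template | llm

def generate_prompt(topic: str) -> str:
    return chain.invoke({"topic": topic})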

Dependencies

  • mcp: For setting up the FastMCP server.
  • json: For handling JSON data.
  • urllib: For making HTTP requests.
  • time: For adding delays in polling.
  • os: For accessing environment variables.
  • langchain: For creating a simple LLM prompt chain that generates an image generation prompt from a topic.
  • langchain-ollama: For Ollama-specific modules for LangChain.

License

This project is licensed under the MIT License - see the LICENSE file for details.
