Claude-LMStudio-Bridge: Seamless Cross-Platform AI Collaboration

Claude-LMStudio-Bridge: Seamlessly connect Claude with your local LLMs via MCP, supercharging cross-platform AI collaboration for smarter workflows and endless creativity.

About Claude-LMStudio-Bridge

What is Claude-LMStudio-Bridge: Seamless Cross-Platform AI Collaboration?

Claude-LMStudio-Bridge is a Model Context Protocol (MCP) server designed to enable seamless communication between Anthropic’s Claude and locally hosted LLM models via LM Studio. It acts as an intermediary, allowing users to leverage their on-premise models alongside Claude’s capabilities through a unified interface.

How to Use Claude-LMStudio-Bridge: Seamless Cross-Platform AI Collaboration?

  1. Install dependencies and start LM Studio with your preferred model loaded.
  2. Run the bridge server using python lmstudio_bridge.py.
  3. In Claude’s MCP settings, configure the server to point to the bridge’s local address (default: http://localhost:8000).
  4. Use MCP commands like chat_completion or list_models directly in your Claude conversations to interact with local models (a sketch of the underlying API call follows this list).
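
Under the hood, these commands map onto LM Studio's OpenAI-compatible local server. As a rough illustration (assuming LM Studio's default port 1234 and the openai Python package listed in the dependencies; the model name is a placeholder), a direct call to that server looks roughly like this:

    # Illustrative sketch only: talking to LM Studio's OpenAI-compatible server directly.
    from openai import OpenAI

    # Port 1234 is LM Studio's default; the api_key is ignored by a local server.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    # Roughly what list_models surfaces
    for model in client.models.list().data:
        print(model.id)

    # Roughly what chat_completion performs
    response = client.chat.completions.create(
        model="local-model",  # placeholder; use a model id reported above
        messages=[{"role": "user", "content": "Hello from the bridge"}],
    )
    print(response.choices[0].message.content)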

Claude-LMStudio-Bridge Features

Key Features of Claude-LMStudio-Bridge: Seamless Cross-Platform AI Collaboration

  • Real-time model switching and response comparison with Claude.
  • Support for niche models not available via cloud APIs, enhancing task-specific performance.
  • End-to-end local processing for sensitive queries, bypassing cloud dependencies.
  • Dynamic configuration options for LM Studio’s API endpoint and port.

Use Cases of Claude-LMStudio-Bridge: Seamless Cross-Platform AI Collaboration

  • Comparing Claude's responses with those of locally hosted models.
  • Routing task-specific queries to specialized local models.
  • Running queries locally when Claude API quota is limited.
  • Keeping sensitive prompts entirely on your own machine.

Claude-LMStudio-Bridge FAQ

FAQ for Claude-LMStudio-Bridge: Seamless Cross-Platform AI Collaboration

  • Why use this bridge instead of native APIs? – Ideal for low-cost local inference, model testing, or scenarios requiring strict data isolation.
  • My LM Studio instance runs on a non-default port. How do I adjust the bridge? – Modify LMSTUDIO_API_BASE in lmstudio_bridge.py to reflect your custom endpoint.
  • Dependencies failing to install? – Try installing them directly with the pip command shown in the Troubleshooting section.
  • Can I chain multiple local models in one prompt? – Yes, by sequentially invoking get_current_model and chat_completion with model selection steps.

Claude-LMStudio-Bridge

A simple Model Context Protocol (MCP) server that allows Claude to communicate with locally running LLM models via LM Studio.

Overview

This bridge enables Claude to send prompts to locally running models in LM Studio and receive their responses. This can be useful for:

  • Comparing Claude's responses with other models
  • Accessing specialized local models for specific tasks
  • Running queries even when you have limited Claude API quota
  • Keeping sensitive queries entirely local

Prerequisites

  • Python 3 with pip (the bridge runs as a Python script inside a virtual environment)
  • LM Studio installed, with a model loaded and its local server enabled (port 1234 by default)
  • A Claude client with MCP support

Installation

  1. Clone this repository:

    git clone https://github.com/infinitimeless/Claude-LMStudio-Bridge_V2.git
    cd Claude-LMStudio-Bridge_V2

  2. Create a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate

  3. Install the required packages (choose one method):

Using requirements.txt:

    pip install -r requirements.txt

Or directly install dependencies:

    pip install requests "mcp[cli]" openai anthropic-mcp

Usage

  1. Start LM Studio and load your preferred model.

  2. Ensure LM Studio's local server is running (usually on port 1234 by default).

  3. Run the bridge server:

    python lmstudio_bridge.py

  4. In Claude's interface, enable the MCP server and point it to your locally running bridge.

  5. You can now use the following MCP tools in your conversation with Claude (a sketch of how such tools might be wired up follows the list):

* `health_check`: Check if LM Studio API is accessible
* `list_models`: Get a list of available models in LM Studio
* `get_current_model`: Check which model is currently loaded
* `chat_completion`: Send a prompt to the current model
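
For orientation, here is a minimal sketch of how tools like these might be registered with the Python MCP SDK (FastMCP). This is an assumption for illustration, not the project's actual implementation; the endpoint paths follow LM Studio's OpenAI-compatible API.

    # Illustrative sketch only -- not the project's actual implementation.
    import requests
    from mcp.server.fastmcp import FastMCP

    LMSTUDIO_API_BASE = "http://localhost:1234/v1"  # LM Studio's default local server

    mcp = FastMCP("lmstudio-bridge")

    @mcp.tool()
    def health_check() -> str:
        """Check whether the LM Studio API is reachable."""
        resp = requests.get(f"{LMSTUDIO_API_BASE}/models", timeout=5)
        return "ok" if resp.ok else f"error: HTTP {resp.status_code}"

    @mcp.tool()
    def chat_completion(prompt: str) -> str:
        """Send a prompt to the currently loaded model and return its reply."""
        resp = requests.post(
            f"{LMSTUDIO_API_BASE}/chat/completions",
            json={
                "model": "local-model",  # placeholder; LM Studio serves whichever model is loaded
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        mcp.run()  # exposes the tools to Claude over stdio by default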

Example

Once connected, you can ask Claude to use the local model:

Claude, please use the LM Studio bridge to ask the local model: "What's your opinion on quantum computing?"

Claude will use the chat_completion tool to send the query to your local model and display the response.

Configuration

By default, the bridge connects to LM Studio at http://localhost:1234/v1. If your LM Studio instance is running on a different port, modify the `LMSTUDIO_API_BASE` variable in `lmstudio_bridge.py`.
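
As a small illustration of that configuration point (the constant name comes from the README; the environment-variable fallback here is only an assumption):

    # Illustrative sketch of the configuration described above.
    import os

    # Default matches LM Studio's usual local server; point this at a custom
    # host/port if your instance differs, e.g. "http://localhost:4321/v1".
    LMSTUDIO_API_BASE = os.environ.get("LMSTUDIO_API_BASE", "http://localhost:1234/v1")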

Troubleshooting

If you encounter issues with dependencies, try installing them directly:

    pip install requests "mcp[cli]" openai anthropic-mcp

For detailed installation instructions and troubleshooting, see the Installation Guide.

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
