
Knowledge Base MCP Server: Centralize & Scale Enterprise Knowledge

Knowledge Base MCP Server: Mirror, manage, and scale enterprise knowledge with real-time access, seamless collaboration, and enterprise-grade security for modern workflows.


About Knowledge Base MCP Server

What is Knowledge Base MCP Server: Centralize & Scale Enterprise Knowledge?

A centralized infrastructure designed to unify fragmented knowledge repositories across enterprises. This server enables organizations to aggregate, index, and query diverse document formats through a unified API interface. By leveraging semantic search and modular configuration, it provides a scalable solution for maintaining up-to-date knowledge assets while minimizing redundancy and accessibility barriers.

How to Use Knowledge Base MCP Server: Centralize & Scale Enterprise Knowledge?

1. Setup Configuration: Define knowledge base directories and indexing parameters via environment variables.
2. Index Initialization: The server automatically processes supported file types (Markdown, JSON, TXT) during startup.
3. Query Execution: Use the standardized API endpoints to either list available knowledge bases or perform contextual searches with adjustable relevance thresholds.
4. Integration: Deploy within existing workflows using RESTful APIs or CLI tools for seamless integration with documentation systems (a minimal client sketch follows this list).
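
Steps 3 and 4 amount to connecting an MCP-capable client to the server and invoking its tools. The sketch below is illustrative only: it assumes the official `@modelcontextprotocol/sdk` TypeScript client, and the server path, environment values, and tool argument names are placeholders to be checked against the server's actual tool schema.

```typescript
// Minimal sketch of calling the server from an MCP-capable client over stdio.
// Assumes the official @modelcontextprotocol/sdk TypeScript client; paths,
// credentials, and tool argument names are placeholders to adapt.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/knowledge-base-mcp-server/build/index.js"],
  env: {
    KNOWLEDGE_BASES_ROOT_DIR: "/path/to/knowledge_bases",
    HUGGINGFACE_API_KEY: "YOUR_HUGGINGFACE_API_KEY",
  },
});

const client = new Client({ name: "kb-demo-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// Discover the available knowledge bases, then run a semantic search.
const kbs = await client.callTool({ name: "list_knowledge_bases", arguments: {} });
console.log(kbs.content);

const hits = await client.callTool({
  name: "retrieve_knowledge",
  arguments: { query: "How do I request a new laptop?", threshold: 2 },
});
console.log(hits.content);

await client.close();
```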

Knowledge Base MCP Server Features

Key Features of Knowledge Base MCP Server

  • Semantic Search Engine: Uses FAISS for vectorized document representation and similarity scoring.
  • Dynamic Indexing: Automatically detects file modifications and updates search indices without service disruption.
  • Granular Control: Specify search scope per knowledge base and adjust relevance thresholds (default ≤2.0).
  • Format Agnosticism: Supports text-based formats through customizable parsers.
  • Secure Configuration: Environment variable-driven secrets management for API keys and storage paths.

Use Cases of Knowledge Base MCP Server

1. Enterprise Documentation: Unify technical specs, SOPs, and policy documents across departments.
2. Customer Support: Power chatbot knowledge bases with real-time product updates.
3. Training Platforms: Create self-updating learning repositories for onboarding materials.
4. Research Collaboration: Maintain synchronized literature reviews across distributed teams.
5. Compliance Management

Knowledge Base MCP Server FAQ

FAQ from Knowledge Base MCP Server

Q: How does the server handle versioned documents?
A: It maintains file hashes to track changes and includes version metadata in search results.

Q: Can I prioritize specific knowledge bases in searches?
A: Yes - specify the kb_name parameter to restrict searches to designated repositories.

Q: What's the maximum document size supported?
A: 5 MB per file by default, adjustable via the MAX_FILE_SIZE environment variable.

Q: How often does the index auto-update?
A: In real time via inotify (Linux) or polling (cross-platform fallback), configurable through INDEXING_INTERVAL.

Q: Does it support distributed storage?
A: Yes - S3/GCS buckets can be mounted through the AWS_S3_BUCKET and GOOGLE_CLOUD_STORAGE settings.


Knowledge Base MCP Server

This MCP server provides tools for listing and retrieving content from different knowledge bases.


Setup Instructions

These instructions assume you have Node.js and npm installed on your system.

Prerequisites

  • Node.js (version 16 or higher)
  • npm (Node Package Manager)

1. Clone the repository:

    git clone
    cd knowledge-base-mcp-server
2. Install dependencies:

    npm install

3. Configure environment variables:

* The server requires the `HUGGINGFACE_API_KEY` environment variable to be set. This is the API key for the Hugging Face Inference API, which is used to generate embeddings for the knowledge base content. You can obtain a free API key from the Hugging Face website (<https://huggingface.co/>).
* The server requires the `KNOWLEDGE_BASES_ROOT_DIR` environment variable to be set. This variable specifies the directory where the knowledge base subdirectories are located. If you don't set this variable, it will default to `$HOME/knowledge_bases`, where `$HOME` is the current user's home directory.
* The server supports the `FAISS_INDEX_PATH` environment variable to specify the path to the FAISS index. If not set, it will default to `$HOME/knowledge_bases/.faiss`.
* The server supports the `HUGGINGFACE_MODEL_NAME` environment variable to specify the Hugging Face model to use for generating embeddings. If not set, it will default to `sentence-transformers/all-MiniLM-L6-v2`.
* You can set these environment variables in your `.bashrc` or `.zshrc` file, or directly in the MCP settings.
4. Build the server:

    npm run build

5. Add the server to the MCP settings:

* Edit the `cline_mcp_settings.json` file located at `/home/jean/.vscode-server/data/User/globalStorage/saoudrizwan.claude-dev/settings/`.
* Add the following configuration to the `mcpServers` object:

    "knowledge-base-mcp": {
  "command": "node",
  "args": [
    "/path/to/knowledge-base-mcp-server/build/index.js"
  ],
  "disabled": false,
  "autoApprove": [],
  "env": {
    "KNOWLEDGE_BASES_ROOT_DIR": "/path/to/knowledge_bases",
    "HUGGINGFACE_API_KEY": "YOUR_HUGGINGFACE_API_KEY",
  },
  "description": "Retrieves similar chunks from the knowledge base based on a query."
},


* Replace `/path/to/knowledge-base-mcp-server` with the actual path to the server directory.
* Replace `/path/to/knowledge_bases` with the actual path to the knowledge bases directory.
6. Create knowledge base directories:

* Create subdirectories within the `KNOWLEDGE_BASES_ROOT_DIR` for each knowledge base (e.g., `company`, `it_support`, `onboarding`).
* Place text files (e.g., `.txt`, `.md`) containing the knowledge base content within these subdirectories.
  • The server recursively reads all text files (e.g., `.txt`, `.md`) within the specified knowledge base subdirectories.
  • The server skips hidden files and directories (those starting with a `.`).
  • For each file, the server calculates the SHA256 hash and stores it in a file with the same name in a hidden `.index` subdirectory. This hash is used to determine whether the file has been modified since the last indexing.
  • The file content is split into chunks using the `MarkdownTextSplitter` from `langchain/text_splitter`.
  • The content of each chunk is then added to a FAISS index, which is used for similarity search.
  • The FAISS index is automatically initialized when the server starts. It checks for changes in the knowledge base files and updates the index accordingly (a sketch of this flow follows the list).
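
For reference, here is a compact sketch of that indexing flow. It is illustrative rather than the server's actual source, and assumes the LangChain JS packages (`langchain`, `@langchain/community`, with `faiss-node` installed) plus the Hugging Face Inference API for embeddings; names such as `indexFile` and the `kb` metadata field are assumptions for the example.

```typescript
// Illustrative sketch of the indexing flow described above; not the server's source.
import { createHash } from "node:crypto";
import { promises as fs } from "node:fs";
import * as path from "node:path";
import { MarkdownTextSplitter } from "langchain/text_splitter";
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";
import { FaissStore } from "@langchain/community/vectorstores/faiss";

const rootDir =
  process.env.KNOWLEDGE_BASES_ROOT_DIR ?? path.join(process.env.HOME ?? "", "knowledge_bases");
const indexPath = process.env.FAISS_INDEX_PATH ?? path.join(rootDir, ".faiss");

const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: process.env.HUGGINGFACE_API_KEY,
  model: process.env.HUGGINGFACE_MODEL_NAME ?? "sentence-transformers/all-MiniLM-L6-v2",
});

// On startup, load the saved index if one exists; a first run would build it instead
// (e.g., with FaissStore.fromDocuments) before saving it to indexPath.
const store = await FaissStore.load(indexPath, embeddings);

// Re-index a single file only if its SHA256 hash differs from the stored one.
async function indexFile(kbName: string, filePath: string): Promise<void> {
  const content = await fs.readFile(filePath, "utf-8");
  const hash = createHash("sha256").update(content).digest("hex");

  const hashDir = path.join(path.dirname(filePath), ".index");
  const hashFile = path.join(hashDir, path.basename(filePath));
  const previousHash = await fs.readFile(hashFile, "utf-8").catch(() => "");
  if (previousHash === hash) return; // unchanged since the last indexing

  // Split the document into chunks and add them to the FAISS index.
  const splitter = new MarkdownTextSplitter();
  const chunks = await splitter.splitText(content);
  await store.addDocuments(
    chunks.map((chunk) => ({
      pageContent: chunk,
      metadata: { source: filePath, kb: kbName },
    }))
  );

  // Record the new hash and persist the updated index.
  await fs.mkdir(hashDir, { recursive: true });
  await fs.writeFile(hashFile, hash, "utf-8");
  await store.save(indexPath);
}
```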

Usage

The server exposes two tools:

  • list_knowledge_bases: Lists the available knowledge bases.
  • retrieve_knowledge: Retrieves similar chunks from the knowledge bases based on a query. If a knowledge base is specified, only that one is searched; otherwise, all available knowledge bases are considered. By default, at most 10 document chunks with a score below a threshold of 2 are returned; a different threshold can optionally be provided via the threshold parameter.

You can use these tools through the MCP interface.

The retrieve_knowledge tool performs a semantic search using a FAISS index. The index is automatically updated when the server starts or when a file in a knowledge base is modified.
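
Conceptually, that search resembles the following sketch. It is illustrative rather than the server's actual code, and it assumes the FAISS store and the `kb`/`source` metadata fields from the indexing sketch above.

```typescript
// Illustrative sketch of the semantic search behind retrieve_knowledge; not the server's source.
import * as path from "node:path";
import { HuggingFaceInferenceEmbeddings } from "@langchain/community/embeddings/hf";
import { FaissStore } from "@langchain/community/vectorstores/faiss";

const indexPath =
  process.env.FAISS_INDEX_PATH ?? path.join(process.env.HOME ?? "", "knowledge_bases", ".faiss");
const embeddings = new HuggingFaceInferenceEmbeddings({
  apiKey: process.env.HUGGINGFACE_API_KEY,
  model: process.env.HUGGINGFACE_MODEL_NAME ?? "sentence-transformers/all-MiniLM-L6-v2",
});
const store = await FaissStore.load(indexPath, embeddings);

// Return at most 10 chunks whose distance score is below the threshold (default 2),
// optionally restricted to a single knowledge base via the chunk metadata.
async function retrieveKnowledge(query: string, kbName?: string, threshold = 2) {
  const results = await store.similaritySearchWithScore(query, 10);
  return results
    .filter(([doc, score]) => score < threshold && (!kbName || doc.metadata.kb === kbName))
    .map(([doc, score]) => ({ content: doc.pageContent, source: doc.metadata.source, score }));
}

console.log(await retrieveKnowledge("How do I request a new laptop?", "it_support"));
```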

The output of the retrieve_knowledge tool is a markdown formatted string with the following structure:

## Semantic Search Results

**Result 1:**

[Content of the most similar chunk]

**Source:**
```json
{
  "source": "[Path to the file containing the chunk]"
}
```

---

**Result 2:**

[Content of the second most similar chunk]

**Source:**
```json
{
  "source": "[Path to the file containing the chunk]"
}
```

> **Disclaimer:** The provided results might not all be relevant. Please cross-check the relevance of the information.

Each result includes the content of the most similar chunk, the source file, and a similarity score.
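
Because the tool returns markdown rather than structured data, a downstream consumer that only needs the source paths can recover them from the Source blocks. A small sketch, assuming the output format shown above:

```typescript
// Pull the "source" paths out of the markdown returned by retrieve_knowledge.
// Assumes each Source block contains a JSON object of the form {"source": "<path>"}.
function extractSources(markdown: string): string[] {
  const sourcePattern = /"source":\s*"([^"]+)"/g;
  return [...markdown.matchAll(sourcePattern)].map((match) => match[1]);
}
```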
