## Requirements

- Python 3.13
- Dependencies listed in `pyproject.toml`
## Installation

Clone the repository:

```bash
git clone
cd documentation
```

Create a virtual environment and activate it:

```bash
python -m venv .venv
source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`
```

Install the dependencies:

```bash
pip install -r requirements.txt
```
Set up environment variables by creating a `.env` file in the root directory with the following content:

```
SCRAPING_DOG_API_KEY=your_scraping_dog_api_key
OPENAI_API_KEY=your_openai_api_key
```
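Libraries such as python-dotenv can load this file automatically; as a dependency-free sketch of what that loading amounts to (the simplified `KEY=VALUE` parsing rules here are an assumption, not the project's actual loader):

```python
import os

def load_env(path: str = ".env") -> None:
    """Read simple KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Do not overwrite variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env()`, the keys are available via `os.environ["OPENAI_API_KEY"]` and so on.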
## Usage

### Running the Client

Navigate to the root directory:

```bash
cd ..
```

Run the client:

```bash
python client.py
```

Enter your prompts in the interactive prompt loop. Type `quit` or `exit` to stop the client.
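The interactive loop described above amounts to reading prompts until a sentinel word is typed. A minimal sketch, with illustrative function names rather than the actual `client.py` code:

```python
def run_prompt_loop(read_input, handle_query) -> None:
    """Read user prompts until 'quit' or 'exit' is entered."""
    while True:
        query = read_input("Query: ").strip()
        # Accept either sentinel, case-insensitively
        if query.lower() in ("quit", "exit"):
            break
        handle_query(query)
```

In the real client, `read_input` would be `input` and `handle_query` would forward the prompt to the agent.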
## Project Files

### `client.py`

This file contains the main client code that interacts with the MCP server and OpenAI's GPT-4 model. It includes the following key components:

- `MCPClient`: a class that manages the connection to the MCP server and provides methods to retrieve the available tools and call them.
- `agent_loop`: an asynchronous function that processes user queries using the LLM and the available tools.
- `main`: the main function that sets up the MCP server, initializes the tools, and runs the interactive loop.
### `ksrk-mcp/ksrk-mcp-server.py`

This file contains the MCP server implementation. It includes the following key components:

- `search_web`: an asynchronous function that searches the web using the ScrapingDog API.
- `fetch_url`: an asynchronous function that fetches the content of a URL.
- `about_ksrk`: an MCP tool that searches for details about "ksrk" on a given website.
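An asynchronous URL fetcher like `fetch_url` must avoid blocking the event loop while the request is in flight. The project uses `httpx` for this; the sketch below is a stdlib-only stand-in that pushes a blocking `urllib` request onto a worker thread, not the server's actual implementation:

```python
import asyncio
import urllib.request

async def fetch_url(url: str, timeout: float = 10.0) -> str:
    """Fetch a URL's body without blocking the event loop.

    Stdlib stand-in for the project's httpx call: the blocking
    request runs on a worker thread via asyncio.to_thread.
    """
    def _get() -> str:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")

    return await asyncio.to_thread(_get)
```

With `httpx`, the same shape would use `async with httpx.AsyncClient()` and `await client.get(url)` instead of the thread hop.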
### `ksrk-mcp/test-website.py`

This file contains a script to test website scraping using `httpx` and `BeautifulSoup`.
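The same kind of scraping check can be sketched without third-party packages, using the stdlib `html.parser` in place of `BeautifulSoup`. The extraction logic below is an illustrative assumption, not the actual `test-website.py` script:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text of an HTML page, skipping scripts and styles."""

    def __init__(self) -> None:
        super().__init__()
        self.chunks: list[str] = []
        self._skip = 0  # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    """Return the whitespace-normalized visible text of an HTML string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

With `BeautifulSoup`, the equivalent one-liner is `BeautifulSoup(html, "html.parser").get_text()`, which the real test script can assert against fetched pages.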
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Acknowledgements