# Quick Start with MCP-Use 🚀
Welcome to the MCP-Use universe, the easiest way to plug any LLM into powerful, real-world tools! In this short guide you'll go from zero to a running agent in under five minutes.
This tutorial is the perfect place to start if you've just installed MCP-Use or if you want a refresher on the basics. When you're done, jump to the other tutorials in this folder for deeper dives.
## 0. Why MCP before MCP-Use?
Before diving into code, let's understand the playing field.

MCP (Model Context Protocol) is an open standard that lets an AI model call external tools in a safe, typed way, similar to how your smartphone apps request permissions. A tool can be anything:
- open a web page and click a button
- run a shell command
- query a SQL database
- generate a Blender mesh
MCP defines a common language so the model (host) says "I want to call `browser.search` with `query=best ramen berlin`" and an MCP server executes it, returning JSON results.
👉 In short: MCP turns any black-box LLM into a programmable agent that can actually get stuff done.
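To make "a common language" concrete, here is a hedged sketch of what a tool-call request looks like on the wire. MCP is built on JSON-RPC 2.0 and uses a `tools/call` method; the tool name and arguments below are the illustrative ones from the text, not a fixed part of the protocol.

```python
import json

# Sketch of an MCP tool-call request (JSON-RPC 2.0). The host sends this to
# the server, which executes the tool and replies with a typed result.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "browser.search",
        "arguments": {"query": "best ramen berlin"},
    },
}

print(json.dumps(request, indent=2))
```

The server's response carries the tool output back in the same JSON-RPC envelope, which is what lets any compliant host talk to any compliant server.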
### Real-world Stories
| Persona | Goal | How MCP helps |
|---|---|---|
| ✈️ Travel hacker | Compare Airbnb prices, then check Skyscanner flights, stash results in Notion. | Servers: `airbnb`, `playwright`, `filesystem` |
| 📰 Research analyst | Daily crawl of news sites, summarize top stories, dump into Slack. | Servers: `browser`, `filesystem`, custom `slack` |
| 🛠️ 3D artist | Procedurally create Blender scenes with an LLM, render thumbnails. | Server: `blender` |
## 1. Enter MCP-Use 🛠️
MCP-Use is the Python glue that makes all of the above dead simple:

- Spin up or connect to many MCP servers ✔️
- Wrap them in an ergonomic async client ✔️
- Feed tool schemas to your favourite LangChain LLM ✔️
- Provide guard-rails like `max_steps`, allowed tools, and automatic server routing ✔️

If MCP is the protocol, MCP-Use is the power socket adapter.
## 2. Prerequisites
| ✅ Requirement | Why you need it |
|---|---|
| Python 3.11+ | MCP-Use targets modern Python for async/await bliss. |
| LLM provider keys | Any model that supports tool/function calling via LangChain chat models works. Examples: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`. |
| Node 18+ (optional but recommended) | Many official MCP servers are distributed as `npx <package>`, so having Node installed unlocks instant servers like Playwright or Airbnb. |
## 3. Installation

```bash
# 1. Core library
pip install mcp-use

# 2. Pick an LLM provider - here we use OpenAI
pip install langchain-openai

# 3. (Optional) Playwright browser MCP server
npm install -g @playwright/mcp   # or just use npx in the config later
```
Add your API keys to a `.env` file in your project root:

```bash
OPENAI_API_KEY="sk-…"
```

MCP-Use will automatically pick them up via `python-dotenv`.
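If you are curious what "automatically pick them up" means, here is a minimal, stdlib-only sketch of roughly what `python-dotenv`'s `load_dotenv()` does; the real library handles many more edge cases (export prefixes, multiline values, variable interpolation), and the function name `load_env_file` below is just for illustration.

```python
import os

def load_env_file(path=".env"):
    # Read KEY=value lines from the file and place them into os.environ,
    # without overwriting variables that are already set.
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue  # skip blanks, comments, and malformed lines
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip().strip('"'))
    except FileNotFoundError:
        pass  # like load_dotenv(), silently do nothing if no .env exists
```

This is why the keys never appear in your code or shell history: they live in a local file that the process reads at startup.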
## 4. Your First Agent: "Find me coffee ☕"
Create a new file called `hello_mcp.py` and paste the following code:
```python
import asyncio

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

from mcp_use import MCPAgent, MCPClient

load_dotenv()  # Load API keys from .env

# 1. Describe which MCP servers you want. Here we spin up Playwright in a headless browser.
CONFIG = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
            "env": {"DISPLAY": ":1"},  # required if you run inside Xvfb / CI
        }
    }
}

async def main():
    client = MCPClient.from_dict(CONFIG)
    llm = ChatOpenAI(model="gpt-4o")

    # 2. Wire the LLM to the client
    agent = MCPAgent(llm=llm, client=client, max_steps=20)

    # 3. Ask something that requires real web browsing
    result = await agent.run("Find the best specialty coffee in Berlin using Google Search")
    print("\nResult:", result)

    # 4. Always clean up running MCP sessions
    await client.close_all_sessions()

if __name__ == "__main__":
    asyncio.run(main())
```
Run it:

```bash
python hello_mcp.py
```
If everything is set up correctly you'll watch MCP-Use boot the Playwright server, let the LLM pick browser actions, and finally print a human-readable answer.
## 5. How it Works (TL;DR)
1. `MCPClient` starts the external server in a separate process.
2. The server exposes tools (e.g. `browser.search`, `browser.click`).
3. `MCPAgent` sends the available tools to your LLM.
4. The LLM decides which tool to call, returns a JSON tool invocation, and MCP-Use executes it.
5. Steps 3-4 repeat until the agent decides it's "done" or hits `max_steps`.
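The loop above can be sketched in a few lines. This is a hedged illustration of the control flow, not MCP-Use's actual internals: the model either returns a tool invocation (which gets executed against an MCP server and fed back) or a final answer, and `max_steps` caps the number of rounds. All function names here are hypothetical.

```python
def run_agent(llm_step, execute_tool, question, max_steps=20):
    # llm_step: sees the conversation history (plus tool schemas) and returns
    #           either a tool invocation or a final answer.
    # execute_tool: runs one tool call against an MCP server.
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        action = llm_step(history)
        if action["type"] == "final":      # the agent decides it's "done"
            return action["answer"]
        result = execute_tool(action["tool"], action["args"])
        history.append({"role": "tool", "content": result})
    return "stopped: hit max_steps"        # guard-rail against infinite loops

# Tiny stub run: a fake LLM that calls one tool, then answers.
def fake_llm(history):
    if len(history) == 1:
        return {"type": "tool", "tool": "browser.search", "args": {"q": "coffee"}}
    return {"type": "final", "answer": "found a cafe"}

print(run_agent(fake_llm, lambda tool, args: f"results for {args['q']}", "best coffee?"))
# → found a cafe
```

The important design point is that the LLM never executes anything itself; it only emits structured requests, and the host stays in control of what actually runs.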
## 6. Next Steps ▶️
- **Build your own agent**: check out `first-agent.md` for a fully-commented walkthrough.
- **Multiple servers**: want to combine Airbnb + Browser + Filesystem? Hop over to `multi-server.md`.
- **Debugging**: learn pro tips for logging and tracing in `debugging.md`.
Happy hacking, and don't forget to ⭐ the project on GitHub if this saved you time!