A simple MCP implementation
I have been working in the software and AI industry for a few years now, and even though demand for AI-based applications keeps rising, I have remained a bit skeptical of how much impact AI can have on the way we work and interact with digital systems.
Throughout these years, I've seen people attach foundational AI models to applications mostly as chatbots. Hundreds of self-proclaimed founders have slapped a ChatGPT wrapper on top of OpenAI's API and called it a startup. Everyone and their mother is building a SaaS side-hustle; if not, they want to sell you a course on how to create a SaaS business. For this very reason, the study claiming that 95% of AI pilots fail is not that surprising. The goal should not be to have AI for AI's sake, but to use it as a tool that helps you extract value from data.
Most of my skepticism around large language models was not about their capability (they are remarkably good at finding patterns in text, and with enough data their results are impressive) but about how to integrate these models into our digital systems. For quite some time, whenever anyone talked about building agentic systems, more often than not they meant agents that were, simply put, enhanced prompts or glorified LLM wrappers. Integrating language models with, say, tools, functions and documents was possible, but the plumbing was ad-hoc and messy, and the tooling was often based on heuristics, which felt more like traditional automation.
For most of the second half of the year, my work has revolved around building enterprise-ready agents and multi-agent systems, and one piece has, in my opinion, been quite a game changer: Model Context Protocol, or MCP for friends. MCP is a standard (recently open-sourced and donated to the Linux Foundation) for connecting AI applications to external systems - a clean, standardized way to give models access to the world outside their context window. In other words, it is an API protocol for AI systems.
For the regular AI enthusiast, I am not sure MCP brings any particular advantage over exposing tools directly in code, given that these are connected to third-party APIs anyway. Do not get me wrong, using verified and trusted MCP servers can give your AI application some tools out of the box, but there are risks you should take into account before you decide to onboard an MCP server into your codebase.
In enterprise applications, however, the potential of MCP is significantly broader and deeper. Most enterprises have their own platforms, data storage solutions, APIs and integrations, running in secure environments with proper network isolation and authentication/authorisation flows. I do not want to bother you with all the technical details of the MCP standard - I am sure Anthropic's original references [1, 2] will do that better than I possibly could. Instead, let's talk about what MCP actually exposes and why it matters.
Primitives exposed by MCP: tools, resources and prompts
- Tools: Think of these as functions the AI can call. Instead of hoping the model hallucinates the correct API call, an MCP server exposes a `send_email(to, subject, body)` or `query_database(sql)` tool with a defined schema. The AI sees these as available actions and can invoke them directly.
- Resources: These are data references: files, database records, API endpoints, real-time streams. MCP lets servers expose them as readable (and sometimes writable) resources with unique IDs and metadata. This turns "I think I remember something about a file…" into "Read `file://reports/Q1_summary.md`."
- Prompts: Pre-built, parameterizable prompt templates can be served via MCP, enabling consistent, governed interactions. Need a "summarize this document" or "generate a support response" pattern? Expose it as a prompt template and ensure every agent uses the same vetted starting point.
Together, these primitives turn an isolated LLM into a connected operator, aware of and able to act on live, contextual data. Of course, in theory.
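Under the hood, these primitives are exchanged over JSON-RPC 2.0. When a client asks a server what tools it offers, the exchange looks roughly like this (the message shapes follow the MCP specification; the `send_email` tool and its fields are illustrative, and the `//` lines are annotations rather than valid JSON):

```json
// client → server: discover available tools
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// server → client: tool names plus a JSON Schema for their arguments
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "send_email",
        "description": "Send an email on behalf of the user",
        "inputSchema": {
          "type": "object",
          "properties": {
            "to": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"}
          },
          "required": ["to", "subject", "body"]
        }
      }
    ]
  }
}
```

The model never sees this wire format directly; the client translates it into whatever tool-calling convention its particular LLM uses.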
The AI-enabling layer
When coupled with enterprise systems, MCP acts as a secure bridge layer. Instead of every AI project building custom, brittle integrations to your CRM, data warehouse, or internal tools, you can build or adopt MCP servers that act as sanctioned gateways.
This means:
- Security & Governance: Access controls, audit logs, and data policies are enforced at the MCP server level, not in every prompt.
- Consistency: Different agents or applications can use the same tools and resources, ensuring uniformity.
- Developer Efficiency: AI engineers focus on agent logic and security, not on rewriting the same API clients or parsing obscure CSV formats.
MCP becomes the clean, standardized “nervous system” connecting AI brains to enterprise muscle.
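To make the governance point concrete, here is a minimal sketch in plain Python (no MCP dependency; the `User` class, the `TOOL_POLICY` table and the tool names are invented for illustration) of what "enforce at the server, not in every prompt" means: every call funnels through one wrapper that authorizes and audits before dispatching.

```python
from dataclasses import dataclass

# Hypothetical per-tool access policy, enforced once at the gateway
# rather than restated in every agent's prompt.
TOOL_POLICY = {
    "query_database": {"analyst", "admin"},
    "send_email": {"admin"},
}

@dataclass
class User:
    name: str
    role: str

# Every attempt, allowed or denied, leaves an audit trail.
audit_log = []

def call_tool(user: User, tool: str, **kwargs):
    """Gateway wrapper: authorize, audit, then dispatch."""
    permitted = user.role in TOOL_POLICY.get(tool, set())
    audit_log.append((user.name, tool, permitted))
    if not permitted:
        raise PermissionError(f"role '{user.role}' may not call {tool}")
    # In a real gateway this would invoke the actual backend.
    return f"executed {tool} with {kwargs}"

# The same check applies no matter which agent is calling.
print(call_tool(User("ana", "analyst"), "query_database", sql="SELECT 1"))
```

An MCP server is a natural place to hang exactly this kind of wrapper, because every agent's tool call already passes through it.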
A Simple Implementation: Agno + DeepSeek + FastMCP
Let’s make this concrete. Here’s a minimal setup to see MCP in action:
- The Agent Engine: We’ll use Agno - a lightweight, Python-based agent framework that I have enjoyed using a lot lately. It will be our “client,” the AI application that uses MCP.
- The LLM: I’ll power it with DeepSeek via its API - a capable and cost-effective model. Agno is provider-agnostic and plugging in models is incredibly easy, so feel free to use an alternative of your choice.
- The MCP Server: We’ll build a simple server using FastMCP, a Python SDK from the official MCP toolkit that will get us up and running in the blink of an eye. Let’s create a server that exposes two things:
- A Tool called `get_weather` that takes a `city` parameter and returns mock (or real, via an API) weather data.
- A Resource pointing to a static company FAQ document (`file://docs/company_faq.txt`).
The flow looks like this:
- The Agno agent, configured as an MCP client, starts up and connects to our FastMCP server.
- The agent receives the list of available tools and resources from the server.
- When a user asks, “What’s the weather in London and what’s our refund policy?”, the DeepSeek model (via Agno) decides to:
- Call the `get_weather` tool with `city="London"`.
- Read the resource `file://docs/company_faq.txt` to find the refund policy.
- The MCP server executes the tool call and provides the resource content.
- Agno synthesizes the data into a coherent, grounded response for the user.
Now, the code:
Set up a virtual environment with the following libraries:
```
# requirements.txt
agno>=0.1.0
fastmcp>=0.1.0
aiohttp>=3.9.0
```

First, the code for the FastMCP server:
```python
# mcp_server.py
import sys

from fastmcp import FastMCP

# Initialize MCP server
mcp = FastMCP("Weather and FAQ Server")

# Mock weather data (in reality, you'd connect to a weather API)
WEATHER_DATA = {
    "london": {"temp": 15, "condition": "Cloudy", "humidity": "78%"},
    "new york": {"temp": 22, "condition": "Sunny", "humidity": "65%"},
    "tokyo": {"temp": 18, "condition": "Rainy", "humidity": "85%"},
}

# Define a tool that the AI can call
@mcp.tool()
def get_weather(city: str) -> str:
    """Get current weather for a given city.

    Args:
        city: The city name (e.g., 'London', 'New York')
    """
    city_lower = city.lower().strip()
    if city_lower in WEATHER_DATA:
        data = WEATHER_DATA[city_lower]
        return (
            f"Weather in {city}: {data['temp']}°C, "
            f"{data['condition']}, Humidity: {data['humidity']}"
        )
    return f"Weather data not available for {city}. Try London, New York, or Tokyo."

# Define a static resource (company FAQ)
@mcp.resource("file://docs/company_faq.txt")
def get_company_faq() -> str:
    """Company FAQ document with policies and information."""
    return """
COMPANY FAQ
===========

Refund Policy:
- Full refunds within 30 days of purchase
- Partial refunds (50%) between 30-60 days
- No refunds after 60 days

Support Hours:
- Monday-Friday: 9AM-6PM EST
- Saturday: 10AM-4PM EST
- Closed Sundays

Contact Information:
- Email: support@company.com
- Phone: +1-800-COMPANY

Shipping Policy:
- Standard: 5-7 business days
- Express: 2-3 business days
- Overnight: 1 business day
"""

# Run the server
if __name__ == "__main__":
    # With the stdio transport, stdout carries the JSON-RPC stream,
    # so status messages must go to stderr instead.
    print("Starting MCP Server on stdio...", file=sys.stderr)
    print("Exposing tools: get_weather", file=sys.stderr)
    print("Exposing resources: file://docs/company_faq.txt", file=sys.stderr)
    mcp.run(transport="stdio")
```

Now, in a separate .py file, we will run the agent + MCP client code:
```python
# agno_agent.py
import asyncio

from agno.agent import Agent
from agno.models.deepseek import DeepSeek
from agno.tools.mcp import MCPTools

async def main():
    # Spawn the MCP server as a subprocess and connect over stdio.
    # For production, you might point this at a long-running server
    # via streamable HTTP instead.
    async with MCPTools(command="python mcp_server.py") as mcp_tools:
        # Create the agent with DeepSeek (expects DEEPSEEK_API_KEY
        # to be set in the environment)
        agent = Agent(
            model=DeepSeek(id="deepseek-chat"),
            # Connect to our MCP server
            tools=[mcp_tools],
            # A minimal system prompt
            instructions="""You are a helpful assistant with access to weather data and company information.
Use the available tools to get current weather and access company FAQs when needed.
Be concise and helpful in your responses.""",
        )

        print("🤖 Agent started with MCP integration")
        print("Type 'quit' to exit\n")

        while True:
            try:
                # Get user input
                user_input = input("\nYou: ").strip()
                if user_input.lower() in ["quit", "exit", "q"]:
                    print("Goodbye!")
                    break
                if not user_input:
                    continue

                # Stream the response to the terminal
                await agent.aprint_response(user_input, stream=True)
            except KeyboardInterrupt:
                print("\n\nInterrupted. Closing...")
                break
            except Exception as e:
                print(f"\nError: {e}")

if __name__ == "__main__":
    asyncio.run(main())
```

This simple pipeline demonstrates the core value: the AI didn’t need prior knowledge of our weather API or FAQ document. It discovered and used them dynamically at runtime via a standardized protocol. Scaling this pattern across dozens of tools and data sources is where MCP transforms from a neat demo into an enterprise architecture.
In summary, MCP moves us from a world of one-off, fragile AI integrations toward a composable, maintainable ecosystem. It’s not about replacing code with magic—it’s about replacing spaghetti wiring with a well-designed socket. And in the enterprise, where data is vast, systems are complex, and security is non-negotiable, that’s not just an improvement. It’s a necessity.