Product Engineering • MCP Expertise

Build an MCP server: connect your SaaS to AI and agents

Your product becomes usable by Claude, ChatGPT, Cursor, and every MCP-compatible AI agent. Your data, features, and workflows can be consulted directly inside an AI conversation, with no rebuild required.

The new entry point of a SaaS

In 2012 you needed a mobile app, in 2016 a public API, and in 2026 it's the MCP server.

An MCP server is a distribution channel for AI. An AI assistant can answer your users' questions using your data and your SaaS tools. Without an MCP server, the AI answers from its generic training data. With one, it queries your product and uses your real data.

What's at stake is being present in the conversation your users have with their AI, instead of waiting for them to open your app.

What is an MCP server?

The Model Context Protocol (MCP) is an open standard released by Anthropic in November 2024. An MCP server exposes tools, each with a name, a description, and parameters, that an AI agent reads, picks, and calls directly inside a conversation.

It's like a REST API, but the consumer is not a developer: it's an LLM. A classic API expects specific calls with specific parameters. An MCP server lets the AI read the list of tools and decide which one to invoke based on what the user asks.
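To make that concrete, here is a minimal sketch of a tool descriptor as the LLM sees it. The field names follow the shape the MCP spec uses for `tools/list` results; the tool itself and its schema details are hypothetical:

```python
import json

# Hypothetical tool descriptor, shaped like one entry in a tools/list result.
# The LLM reads the name, description, and JSON Schema to decide when to call it.
search_documents = {
    "name": "search_documents",
    "description": (
        "Full-text search across the user's Markdown documents. "
        "Use it when the user asks to find or cite existing content. "
        "Returns a list of matching document IDs and excerpts."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms, e.g. 'MCP transports'"},
            "limit": {"type": "integer", "description": "Max results, default 10"},
        },
        "required": ["query"],
    },
}

print(json.dumps(search_documents, indent=2))
```

The description and the schema are all the AI gets: they are the contract, which is why writing them well matters as much as implementing the tool.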

Your data gets cited in AI answers, and your users can reach your product from Claude, Cursor, or ChatGPT without ever opening your app.

Essential vocabulary

Host
The AI application that talks to the user (Claude, ChatGPT, Cursor).
Client
The component that speaks directly to the MCP server, often integrated into the host app.
Server
What we build. It exposes the tools of your product.
Tools
The capabilities exposed to the LLM. Each has a name, description, and parameters.

Official spec on modelcontextprotocol.io

Use cases

Three scenarios where an MCP server changes the game for a product:

Expose your SaaS to AI agents

Your business data becomes queryable in natural language from Claude, ChatGPT, or Cursor.

Make a document base queryable

Notes, specs, articles, documents: an AI agent connected via MCP summarizes, searches, and links them in natural language.

Plug your product into AI workflows

n8n, Claude, ChatGPT, or Cursor: every MCP client can automate tasks from your product without custom integration work.

My 4-step approach

From idea to deployed MCP server.

1/4

Scoping

Which tools to expose, to which AI audience, for which critical data. Clarity on the capabilities that matter.

2/4

Tool design

JSON schema, naming, descriptions written as prompts, LLM-actionable errors.

3/4

Development

Stack tailored to the need: FastMCP in Python, the official TypeScript SDK, OAuth 2.1 for remote multi-user servers, with client registration adapted to the context.

4/4

Deploy & iterate

Production rollout, real-call monitoring, description tuning based on observed agent behavior.
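To give step 3 some shape, here is a stdlib-only sketch of the pattern the SDKs implement: a decorator registers a function as a tool and records what the agent needs to pick it. This mini registry and its names are illustrative, not the real FastMCP or TypeScript SDK API:

```python
import inspect

TOOLS = {}  # tool name -> registration record

def tool(description: str):
    """Register a function as an MCP-style tool (illustrative, not the real SDK)."""
    def decorator(fn):
        params = list(inspect.signature(fn).parameters)
        TOOLS[fn.__name__] = {"fn": fn, "description": description, "params": params}
        return fn
    return decorator

@tool("Return the list of the user's projects. Use it before any document lookup.")
def list_projects() -> list[str]:
    return ["blog", "docs"]

@tool("Fetch one document by ID. Returns its Markdown content.")
def get_document(doc_id: str) -> str:
    return f"# Document {doc_id}"

def call_tool(name: str, **arguments):
    """Dispatch a call the way a server would after the agent picks a tool."""
    if name not in TOOLS:
        # LLM-actionable error: tell the model how to recover
        raise ValueError(f"Unknown tool '{name}', available: {sorted(TOOLS)}")
    return TOOLS[name]["fn"](**arguments)

print(call_tool("list_projects"))
```

The real SDKs add schema generation, transport handling, and protocol plumbing on top, but the core shape, named tools with descriptions that an agent selects at runtime, is this.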

Stack & standards

The technical building blocks I use, picked by project context:

Protocols

  • MCP spec
  • JSON-RPC 2.0
  • OAuth 2.1

Languages

  • Python (FastMCP)
  • TypeScript (MCP SDK)

Infrastructure

  • Cloudflare Workers
  • Vercel
  • Supabase
  • Firebase

Transports

  • stdio (local)
  • Streamable HTTP (remote)
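Whatever the transport, messages are JSON-RPC 2.0. A tool invocation looks roughly like this; the method and parameter keys follow the MCP spec, while the tool name and arguments are hypothetical:

```python
import json

# A tools/call request as a client would send it over stdio or Streamable HTTP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",
        "arguments": {"query": "MCP transports", "limit": 5},
    },
}

# One JSON object per message, regardless of transport.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])  # tools/call
```

The transport only changes how these bytes travel: stdio pipes them between local processes, Streamable HTTP carries them to a remote server.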

The real question: what do you expose?

An MCP server is not a copy of your internal API. The more tools you expose, the more the AI gets lost, burns tokens for nothing, and hallucinates off-context calls. The right rule is to start small and expand only when real usage calls for it.

On my two MCP servers in production, I deliberately created only 3 tools:

Begonia.pro

  • audit_local_business
  • get_business_scan
  • search_local_seo_knowledgebase

Fude.md

  • list_projects
  • search_documents
  • get_document

The rule: one tool = one clear responsibility. No duplicates, no "just in case" tools.

Writing a tool's description is writing a prompt

The tool description is read by an LLM, not a human. It's a prompt. If it's vague, the AI will never call it, or worse, will call it incorrectly.

Four reflexes to keep:

Name
A verb + an object (search_documents rather than documents).
Description
What it does, when to use it, what it returns. Three sentences max.
Parameters
Typed, with an example when the format isn't obvious.
Errors
LLM-actionable ("Project not found, use list_projects") rather than HTTP codes.
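Applied together, the four reflexes could look like this. The function, project names, and return shape are hypothetical, a sketch of the style rather than one of my production tools:

```python
# Name: verb + object. Docstring: what it does, when to use it, what it returns.
def search_documents(query: str, project_id: str, limit: int = 10) -> list[dict]:
    """Full-text search in a project's Markdown documents.

    Use it when the user asks to find, cite, or reuse existing content.
    Returns up to `limit` matches, each with a document ID and an excerpt.
    """
    known_projects = {"blog", "docs"}  # stand-in for a real lookup
    if project_id not in known_projects:
        # LLM-actionable error: name the fix, not an HTTP code
        raise ValueError(
            f"Project '{project_id}' not found. Call list_projects to get valid IDs."
        )
    # Stand-in result; a real implementation would query an index.
    return [{"id": "blog/mcp-intro", "excerpt": f"...{query}..."}][:limit]

print(search_documents("transports", "blog", limit=1))
```

Note the error message: it tells the agent which tool to call next, so the conversation recovers instead of stalling on a bare 404.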

Lessons from two MCP servers in production

I run two MCP servers in production on my own SaaS:

Begonia.pro MCP

A user asks Claude "audit the Google profile of Bakery X in Lyon" and gets a real report with Begonia's data, not a generic answer.

Begonia.pro

Fude.md MCP

The MCP server exposes Markdown documents. A user can ask: "write an article for my blog on the topic XXXX by analyzing the writing style of my other articles".

Fude.md

FAQ

Questions about MCP servers:

Ready to expose your product to AI agents?

Whether you run an existing SaaS or have just an idea worth exploring, let's talk. I'll help you pick the right tools, design a useful MCP server, and ship it cleanly.