If you don’t know what an MCP server is, start by reading up at Model Context Protocol. MCP servers give your LLM access to real tools for interacting with systems like Jira, databases, and anything else you can expose as a service.
This article assumes you have been using MCP servers in one way or another. I am focusing on VS Code, but these examples can be used anywhere you want to limit host-system installs to run tooling. That’s one of the aspects of Docker I really like: it’s not just for CI/CD and deployments. You can use it to containerize development tooling, environments, and CLI commands that have dependencies. The catch is that it’s not always obvious how to make something run in a container.
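For instance (a hypothetical one-liner; swap in whatever image and tool you actually need), you can run a Node-based CLI against your project without installing Node on the host:

```sh
# Run a containerized CLI against the current directory; nothing installed on the host
docker run --rm -v "$PWD":/work -w /work node:22 npx --yes prettier --check .
```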
Where can I find more MCP servers? #
If you’re looking for more MCP servers to try, the MCP Servers Repository and the official VS Code MCP servers list (both linked in the References below) are good starting points.
Why Docker for MCP Servers? #
I don’t like installing software on my machine. Sometimes I do because I either have to, it’s just easier, or because I had a lapse in judgment.
“What? It’s your dev machine”, you might say. “Why not install whatever you need?”
“Well,” I would respond, “for the same reason we don’t like to spin up dedicated application servers.”
Differing dependencies between projects, updates that break things, and general bloat. So, I don’t like it. I like development containers, compartmentalization, and NixOS-style package management.
Using Docker for MCP servers has several advantages:
- Isolation: Each server runs in its own container, preventing dependency conflicts
- Portability: Works the same across all operating systems
- Zero local installation: No need to install Node.js, Python, or any other runtimes
- Version consistency: Everyone on your team gets the exact same server behavior
A word about the examples #
All of these examples are `.vscode/mcp.json` entries. There are various other ways to declare MCP servers in VS Code, such as running MCP servers within dev containers (how cool is that?). Check out the VS Code documentation for alternative configuration locations. That said, these examples should still mostly apply, though they may need a tweak here or there.
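For orientation, here is the minimal shape of a `.vscode/mcp.json` file that the examples below build on (the server name and image are placeholders):

```jsonc
{
  "servers": {
    "my-server": {
      "type": "stdio",
      "command": "docker",
      "args": ["run", "-i", "--rm", "some/mcp-image"]
    }
  }
}
```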
Container image already available #
Sometimes we get lucky and the engineers have an image available. I like to run with `--rm` to ensure no cruft is left behind. I also like to name the containers, so I can see what’s what when I look at container names.
{
"servers": {
"playwright": { // https://github.com/microsoft/playwright-mcp
"type": "stdio",
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--name",
"mcp-playwright",
"--init",
"mcp/playwright"
]
},
"fetch": {
"type": "stdio",
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--name",
"mcp-fetch",
"mcp/fetch"
]
}
}
}
These configurations are straightforward. Both use the `-i` flag to keep stdin open and `--rm` to automatically remove the container when it exits. The Playwright configuration also includes the `--init` flag, which helps with proper signal handling, useful for browser automation tasks.
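If a server misbehaves, you can sanity-check it outside VS Code by piping a JSON-RPC `initialize` request into the container and watching for a response on stdout (the protocol version string below is an assumption; check the MCP spec for the current one):

```sh
# Manually handshake with a stdio MCP server; a JSON-RPC result should come back
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"manual-test","version":"0.0.0"}}}' \
  | docker run -i --rm mcp/fetch
```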
uvx Wrapper AND volume mapping example #
Here is a DuckDB example. The docs say to run the server with `uvx`. In addition, we need access to the database file from within the container, so we need a `-v` volume mapping to map a folder from the host to inside the container. One caveat: VS Code invokes `docker` directly rather than through a shell, so `~` in the volume path may not expand; if the mount comes up empty, use an absolute path.
"duckdb": { // https://github.com/ktanaka101/mcp-server-duckdb
"type": "stdio",
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--name",
"mcp-duckdb",
"-v",
"~/my-duckdb-folder:/data",
"python:3.11-slim",
"sh",
"-c",
"pip install --quiet mcp-server-duckdb uv && python -m pip install -U pip && uvx mcp-server-duckdb --db-path /data/my-database.db"
]
}
This setup is more complex but offers great flexibility:
- Maps a local directory (`~/my-duckdb-folder`) to `/data` in the container for persistent storage (see the sanity check below)
- Uses a specific Python version (`python:3.11-slim`) as the base image
- Installs `uv` on container startup, then launches the server via `uvx`, which fetches `mcp-server-duckdb` into an isolated environment
- Points to a specific database file at `/data/my-database.db`
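Before wiring this up, it’s worth confirming the mount works, since a mistyped path tends to fail silently as an empty directory. A quick check (note the explicit `$HOME` instead of `~`):

```sh
# List the mounted folder from inside the container to verify the -v mapping
docker run --rm -v "$HOME/my-duckdb-folder":/data python:3.11-slim ls -la /data
```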
npx Wrapper AND Passing in Secrets #
This one is magical, in my opinion. In this example, we’re connecting to Confluent Cloud. That means we need credentials. I don’t want to put those in my `mcp.json` file. Gross. Let’s create an env file at `.vscode/.confluent-secrets.env`. Ensure that `.env` files are excluded from git checkin!
# .confluent-secrets.env
# Confluent Cloud credentials - replace with actual values
BOOTSTRAP_SERVERS=pkc-v12gj.us-east4.gcp.confluent.cloud:9092
KAFKA_API_KEY=...
KAFKA_API_SECRET=...
KAFKA_REST_ENDPOINT=https://pkc-v12gj.us-east4.gcp.confluent.cloud:443
KAFKA_CLUSTER_ID=
KAFKA_ENV_ID=env-...
# Uncomment below if needed
# FLINK_ENV_ID=env-...
# FLINK_ORG_ID=
# FLINK_REST_ENDPOINT=https://flink.us-east4.gcp.confluent.cloud
# FLINK_ENV_NAME=
# FLINK_DATABASE_NAME=
# FLINK_API_KEY=
# FLINK_API_SECRET=
# FLINK_COMPUTE_POOL_ID=lfcp-...
CONFLUENT_CLOUD_API_KEY=
CONFLUENT_CLOUD_API_SECRET=
CONFLUENT_CLOUD_REST_ENDPOINT=https://api.confluent.cloud
SCHEMA_REGISTRY_API_KEY=...
SCHEMA_REGISTRY_API_SECRET=...
SCHEMA_REGISTRY_ENDPOINT=https://psrc-zv01y.northamerica-northeast2.gcp.confluent.cloud
Now we create the MCP server entry and have it use those secrets via the `envFile` field.
"confluent": { // https://github.com/confluentinc/mcp-confluent
"type": "stdio",
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"--name",
"mcp-confluent",
"-e",
"NODE_ENV=production",
"-e",
"NPM_CONFIG_LOGLEVEL=error",
"-e",
"NPM_CONFIG_UPDATE_NOTIFIER=false",
"-e",
"BOOTSTRAP_SERVERS",
"-e",
"KAFKA_API_KEY",
"-e",
"KAFKA_API_SECRET",
"-e",
"KAFKA_REST_ENDPOINT",
"-e",
"KAFKA_CLUSTER_ID",
"-e",
"KAFKA_ENV_ID",
"-e",
"FLINK_ENV_ID",
"-e",
"FLINK_ORG_ID",
"-e",
"FLINK_REST_ENDPOINT",
"-e",
"FLINK_ENV_NAME",
"-e",
"FLINK_DATABASE_NAME",
"-e",
"FLINK_API_KEY",
"-e",
"FLINK_API_SECRET",
"-e",
"FLINK_COMPUTE_POOL_ID",
"-e",
"CONFLUENT_CLOUD_API_KEY",
"-e",
"CONFLUENT_CLOUD_API_SECRET",
"-e",
"CONFLUENT_CLOUD_REST_ENDPOINT",
"-e",
"SCHEMA_REGISTRY_API_KEY",
"-e",
"SCHEMA_REGISTRY_API_SECRET",
"-e",
"SCHEMA_REGISTRY_ENDPOINT",
"--entrypoint",
"sh",
"node:22",
"-c",
"exec npx --yes --quiet @confluentinc/mcp-confluent 2>/dev/null"
],
"envFile": "${workspaceFolder}/.vscode/.confluent-secrets.env"
},
This configuration:
- Passes through numerous environment variables from the host to the container
- Executes the Confluent MCP package via `npx` without installing it globally
- Reads sensitive configuration from a `.env` file to avoid hardcoding secrets
The `envFile` property is particularly useful here, as it lets you keep your Confluent Cloud credentials separate from your MCP server configuration.
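The mechanic that makes this work: `docker run -e NAME` with no `=value` copies `NAME` from the docker CLI’s own environment, which is exactly what VS Code populates from `envFile`. You can see the pass-through behavior in isolation:

```sh
# -e with no value inherits the variable from the calling environment
KAFKA_API_KEY=demo docker run --rm -e KAFKA_API_KEY alpine printenv KAFKA_API_KEY
# prints: demo
```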
Build an image on the fly with Docker Compose #
There are a couple of ways to do this one. You could create your own image and push it up to a registry. Sure, but we want examples that work without extra steps. For this one, we’re going to use a `.vscode/docker-compose.markitdown.yml` file to build the MarkItDown-MCP server. To throw some extra mustard on it, we aren’t even going to pull down the Dockerfile; we’ll reference it remotely from the GitHub repo. Building Docker images from a remote URL is a little risky, so do this at your own peril.
# docker-compose.markitdown.yml
services:
markitdown-mcp:
build:
context: https://github.com/microsoft/markitdown.git#main:packages/markitdown-mcp
dockerfile: Dockerfile
args:
BUILDKIT_PROGRESS: plain
image: markitdown-mcp:latest
volumes:
- ${WORKSPACE_FOLDER:-${PWD}}:/workdir
stdin_open: true
tty: false
restart: "no"
networks:
- default
environment:
- MARKITDOWN_ENABLE_PLUGINS=True
working_dir: /workdir
logging:
driver: "none"
networks:
default:
driver: bridge
And here is how we use that compose file. Notice the use of `${workspaceFolder}` to keep paths as general as possible; this way the file can be checked in and used by other team members. Also notice `--quiet-pull`: stdout is interpreted as output of the MCP server, so we want to keep the build and pull steps quiet (unless you need to troubleshoot) to limit MCP client errors and warnings from this process.
"markitdown": { // https://github.com/microsoft/markitdown/tree/main/packages/markitdown-mcp
"type": "stdio",
"command": "docker",
"args": [
"compose",
"-f",
"${workspaceFolder}/.vscode/docker-compose.markitdown.yml",
"run",
"--rm",
"-i",
"--quiet-pull",
"--name",
"mcp-markitdown",
"markitdown-mcp"
],
"env": {
"WORKSPACE_FOLDER": "${workspaceFolder}"
}
}
This approach does a few unique things:
- Builds the image directly from the GitHub repository
- Maps your workspace folder into the container so it can access your files
- Sets up appropriate networking and environment variables
- Disables logging to reduce noise
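One optional nicety: build the image ahead of time so the first MCP start doesn’t stall on a cold build (same compose file as above):

```sh
# Pre-build the remote-context image once; later `compose run` calls reuse it
docker compose -f .vscode/docker-compose.markitdown.yml build
```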
Things I Generally Do #
When setting up Docker-based MCP servers:
- Use `-i` and `--rm` flags to keep stdin open and clean up containers when they exit
- Use environment variables for configuration rather than hardcoding values
- Consider volume mounts for persistent data or workspace access
- Use specific image versions rather than `latest` for consistency
- Redirect unnecessary output to `/dev/null` to keep logs clean
Conclusion #
Using Docker for MCP servers in VS Code works for most use cases. Sometimes an MCP server will need access to something that is annoying or maybe impossible to give to a container. Luckily, those are currently edge cases. As MCP servers grow in capability, it may become more difficult to containerize them, or offering a container may become the de facto standard.
This approach has been working great for my projects, and I hope it helps you extend your Copilot capabilities without the headaches of managing dependencies directly on your system.
References #
- Model Context Protocol: Introduction to the Model Context Protocol
- VS Code MCP Servers: Official VS Code MCP servers documentation
- MCP Servers Repository: GitHub repository with various MCP server implementations
- Development Containers: Documentation for the development containers standard
- NixOS: The NixOS Linux distribution website
- VS Code MCP Documentation: VS Code docs on MCP server configurations
- MarkItDown-MCP: GitHub repository for the MarkItDown MCP server