
Docker MCP shows why agent tools need boring infrastructure. The more real work AI agents do, the less comfortable I am with ad hoc local tool setup. MCP is useful because it gives agents a common way to connect to tools and data, but the moment those tools touch repositories, databases, tickets, browsers, or internal APIs, the boring questions (who can run what, with which credentials, in which environment) become the important questions.
Docker’s MCP Catalog and Toolkit are interesting because they treat MCP servers as something that needs packaging, profiles, gateways, authentication, and management. That is the right direction. Running tool servers directly on a developer machine might be fine for experiments, but it is less fine when a team starts relying on them every day.
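The contrast is easiest to see in how a server gets launched. Here is a minimal sketch, assuming an MCP client that spawns stdio servers as subprocesses; the fetch server is just a stand-in for whatever tool you actually run:

```sh
# Host-run MCP server: depends on whatever Python and uv happen to be
# installed on this particular laptop.
uvx mcp-server-fetch

# Containerized MCP server: runtime, dependencies, and version ship
# inside the image, and the process is discarded after each session.
docker run -i --rm mcp/fetch
```

Both speak the same protocol over stdin and stdout. The difference is that the second command behaves the same on every machine on the team.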
At that point, you want versioning, isolation, shared configuration, and a way to control what each project can use. People talk about model safety a lot, and they should, but tool safety matters too. An agent with no tools can mostly say wrong things. An agent with tools can do wrong things.
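Most of that is ordinary container hygiene applied to tool servers. A sketch, with an assumed `mcp/time` image and a hypothetical version tag:

```sh
# Pin an exact version instead of "whatever is installed", and strip
# capabilities the tool does not need: read-only filesystem, no network,
# a memory cap. The tag 1.2.3 is hypothetical, not a real release.
docker run -i --rm \
  --read-only \
  --network none \
  --memory 256m \
  mcp/time:1.2.3
```

A time server needs none of those capabilities, so it gets none of them; a database server would get exactly the network access and mounts it needs and nothing else.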
That is why I like seeing the conversation move from “look what this agent can do” to “how do we run this in a way a team can trust?” The future of AI development is not just better prompts. It is better infrastructure around the prompts, and that is much closer to what production teams actually need.