[RFC] Add support for a Model Context Protocol (MCP) server integration #31788
-
Interesting! Thanks for writing this all down. By the way, what is Backstage and how is it relevant to Storybook?
-
Thanks for submitting, @jfrazier08. I'd be stoked to help enable agentic workflows with an MCP server. The requirements are shaping up, but there are still lots of unknowns. For instance, you mention exposing source code from the MCP server. Do you envisage that the MCP client would not already have access to the component and story source code? It would be helpful if you could share some prompts, along with the agent behavior you'd like a Storybook MCP server to enable!
-
I think it would be cool if the Storybook dev server exposed some remote MCP endpoints, with tools like
I started doing some simpler things in my own MCP server, but tighter integration is needed for the more useful ones. That said, the Playwright MCP together with a simple MCP like this can already navigate Storybook pretty well, take screenshots, check console output, etc.
-
For anyone landing here, we're starting a research project about using agents with Storybook and/or design systems: #32276 Feel free to leave thoughts, feedback or input in there if you want to. |
-
What do you mean by this? Do you mean it should be available as a remote MCP, or that the tools should be designed to mimic REST APIs? If the latter: I've often read that MCP servers perform better when they target specific tasks rather than letting LLMs figure out which API endpoints to call in what order, both because task-shaped tools communicate their usage better and because endpoints tailored to specific needs can reduce the number of requests.
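To make the contrast concrete, here is a hedged sketch of the "task-specific tool" idea: instead of mirroring generic REST endpoints (fetch component, fetch props, fetch examples) that the LLM must chain itself, one tool answers the whole task in a single call. All names and data shapes below are invented for illustration, not a real Storybook API.

```typescript
// Hypothetical task-shaped MCP tool: one call returns everything an agent
// needs to use a component, instead of several chained endpoint requests.

interface ComponentUsage {
  name: string;
  description: string;
  props: Record<string, string>; // prop name -> type summary
  exampleSnippet: string;        // ready-to-paste usage code
}

// In-memory stand-in for real Storybook data.
const catalogue: Record<string, ComponentUsage> = {
  Button: {
    name: "Button",
    description: "Primary action trigger; prefer one per view.",
    props: { label: "string", variant: '"primary" | "secondary"' },
    exampleSnippet: '<Button label="Save" variant="primary" />',
  },
};

// One call, one complete answer: the agent never has to plan a
// sequence of lower-level endpoint calls.
function getComponentUsage(name: string): ComponentUsage | undefined {
  return catalogue[name];
}
```

The payoff is exactly the one described above: fewer round trips, and a tool description that tells the model what task it solves rather than which resource it maps to.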
-
Here's my own MCP wishlist as a design system practitioner :)

**Fetching documentation**

Use cases:
Endpoints:
**Listing components**

Use cases:
Endpoints:
One of the potential areas for improvement I see in Storybook is encouraging developers to provide structured data on the semantics of their components. How many users use the docs description parameter (which is kinda cumbersome to reach)? And even if they did, how would a poor MCP server fetch parameters for all stories or all CSF metas without running into the performance issues of old? Shouldn't that semantic description be available more globally, as structured data, to support AI workflows?

**Understanding how to use a component**

Use cases:
Endpoints:
The only way this is better than source code is if we provide all the metadata relevant for integration (decorators, args mappings, render functions), specifically because this is what LLMs can't get out of source with a single query. An LLM would need to read the file source, do a filesystem search to find where the file is imported, and read several such files to understand how to integrate a component that has implicit dependencies. A well-written story showing how to use a component in situ would provide all of that in a single query.

The MCP server also has an opportunity to provide governance-related context through story descriptions and tags: something's unstable, or outdated, or known to be buggy. A specific example shows controlled vs uncontrolled usage. Another story shows best practices and conventions when integrating with a specific type of state store, etc.

Even when LLMs do have access to code, MCP endpoints could hint that LLMs can get an overview of how all components work with fewer queries. In workflows like https://southleft.com/insights/design-systems/introducing-story-ui-accelerating-layout-generation-with-ai-mcp/, I suspect it's more efficient to have a single endpoint providing integration snippets for multiple components than to read each component's source code, saving credits and CO2.

**QA workflows - design sync**

Use cases:
Endpoints:
**QA workflows - Storybook**

Use cases:
Endpoints:
This would need to be tested in e.g. Cline's plan mode, as I expect it would also be very useful for end users to clarify what they want documented, or to provide descriptions themselves once stories are written. This could also be an excellent way to encourage the writing of descriptions on all stories, which is relevant for many other MCP workflows.

**QA workflows - general quality**

Use cases:
Endpoints:
**Storybook augmentation with LLM outputs**

Use cases:
This might not be MCP territory any more; it's more of a built-in AI feature I'd like to have. Likewise, if test providers can intersect with AI outputs, TJ's DS MCP could be useful to all Storybook users. Or an MCP server could provide a tool that takes a generic LLM output with a list of issues and/or successes and transforms it into a test provider output. I think this would only be useful if outputs can be made to persist and can be reviewed/edited by human users or agents, though. Some LLMs could analyse code and act as test providers, while others could query a Storybook instance for issues and pick up work to do.
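The "generic LLM output to test provider output" bridge mused about above could be sketched roughly as follows. The shapes are assumptions for illustration, not a real Storybook test-provider API: free-form findings are grouped by story and reduced to a reviewable pass/fail report.

```typescript
// Hypothetical bridge: normalise a generic LLM review output (a flat
// list of issues/successes) into per-story pass/fail results that a
// test-provider-style UI could display and humans could review.

interface LlmFinding {
  storyId: string;
  kind: "issue" | "success";
  message: string;
}

interface ProviderResult {
  storyId: string;
  status: "pass" | "fail";
  messages: string[];
}

function toProviderResults(findings: LlmFinding[]): ProviderResult[] {
  // Group findings by story.
  const byStory = new Map<string, LlmFinding[]>();
  for (const f of findings) {
    const list = byStory.get(f.storyId) ?? [];
    list.push(f);
    byStory.set(f.storyId, list);
  }
  // Any issue fails the story; success-only findings pass it.
  return Array.from(byStory.entries()).map(([storyId, fs]) => ({
    storyId,
    status: fs.some((f) => f.kind === "issue") ? "fail" : "pass",
    messages: fs.map((f) => f.message),
  }));
}
```

Persisting these results, as the comment notes, is what would make them reviewable by humans or pickable as work items by other agents.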
-
Summary
Proposal: Add support for a Model Context Protocol (MCP) server integration in Storybook. This would enable Storybook to expose structured, machine-readable context from stories and documentation, allowing AI systems and developer tools to generate UI code and designs more effectively.
Problem Statement
Non-goals
Implementation
We propose the development of a native MCP server integration for Storybook that exposes structured context from stories and documentation. This would include:
This MCP server would expose an API conforming to the Model Context Protocol specification (JSON-RPC 2.0 messages, typically carried over stdio or streamable HTTP rather than REST or GraphQL), enabling AI systems to query and consume Storybook content as structured context for generation tasks.
High-Level Architecture
Introduce a new @storybook/mcp-server package.
This package runs alongside Storybook and exposes an MCP-compliant API endpoint (e.g., /mcp) that returns structured story data.
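As a rough sketch of what "MCP-compliant" implies at the wire level: MCP messages are JSON-RPC 2.0, and `tools/list` / `tools/call` are method names from the MCP specification. The single `listStories` tool and the story data below are invented for illustration only; a real `@storybook/mcp-server` would derive them from the story index.

```typescript
// Minimal JSON-RPC 2.0 dispatch for a hypothetical /mcp endpoint.
// "tools/list" and "tools/call" are MCP spec methods; the tool set
// and data are placeholders.

type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: { name?: string; arguments?: Record<string, unknown> };
};

type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number;
  result?: unknown;
  error?: { code: number; message: string };
};

// Stand-in for data extracted from Storybook's story index.
const storyIds = ["button--primary", "button--secondary"];

function handleMcpRequest(req: JsonRpcRequest): JsonRpcResponse {
  switch (req.method) {
    case "tools/list":
      // Advertise the available tools to the MCP client.
      return {
        jsonrpc: "2.0",
        id: req.id,
        result: {
          tools: [{ name: "listStories", description: "List all story IDs in the index" }],
        },
      };
    case "tools/call":
      if (req.params?.name === "listStories") {
        return {
          jsonrpc: "2.0",
          id: req.id,
          result: { content: [{ type: "text", text: JSON.stringify(storyIds) }] },
        };
      }
      return { jsonrpc: "2.0", id: req.id, error: { code: -32602, message: "Unknown tool" } };
    default:
      // Standard JSON-RPC "method not found" error.
      return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Method not found" } };
  }
}
```

In practice this dispatch would likely be delegated to the official MCP SDK rather than hand-rolled; the sketch only shows the shape of the contract the endpoint must honor.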
The server would hook into Storybook’s internal APIs to extract:
Prior Art
Deliverables
Risks
Mitigations:
Unresolved Questions
Using a to-do list makes it easy to resolve the questions as we move the RFC along.
Should the MCP server be a standalone service or a plugin within the Storybook runtime?
What authentication or access control mechanisms should be supported?
Alternatives considered / Abandoned Ideas