myai is the context. Your agent is the hands.

April 26, 2026 · William VanBuskirk

We made a deliberate choice not to be Claude's MCP marketplace.

Claude already has the integrations. ChatGPT does too. Cursor, Grok, every serious agent ships with a growing ecosystem of tools they can call. We don't want to recreate that ecosystem — and we'd lose if we tried.

What none of those agents have out of the box is your context: the graph of your prior work, your customers' specific situations, the design decisions that got you here, the relationships between projects and people. That's what myai is. Plug it in via MCP and your agent of choice gets memory, judgment, and a working knowledge of how you work.

The last mile (Excel, PowerPoint, code, deck) belongs to whatever tool you already love.

myai is the context. Your agent is the hands.

A real example: a Kinaxis rollout

A consulting firm in our customer base was rolling out Kinaxis Maestro for a manufacturing client. Anyone who has done a Kinaxis implementation knows the shape of the work: a planning and orchestration layer pulling from ERP, demand signals, supply data, and a half-dozen other systems; deep customer-by-customer nuance; dozens of stakeholders across demand planning, supply, S&OP, plant operations, procurement, and finance; and requirements that have to capture tribal knowledge nobody has written down (we always expedite for customer X, lot-sizing exceptions, allocation logic). Full enterprise rollouts run twelve to twenty-four months. The user stories alone are a multi-week effort.

myai built the user stories. Not because it’s a writing tool, but because it knew the relationships. It pulled from the supply chain platform context, prior projects in the customer's portfolio, the design decisions that had already been made, and (this part still surprises me) it knew which stakeholder to ping when a story needed clarification, because the project-to-people relationships were already in the graph. The user stories came out of the system already grounded.

Each one arrived with its provenance attached: which prior project it drew from, which design decision it implemented, which stakeholder owned the underlying answer. When two stories conflicted because they pulled from different prior decisions, myai surfaced the conflict in the same draft, not as an afterthought a reviewer would have to catch later. That kind of provenance is the difference between a draft a reviewer can sign off on and a draft that comes back as a comment thread.

But the last mile was Claude Code. Pixel-perfect requirements documents. Manicured Excel templates the consultants had standardized on. The executive summary PowerPoint. The data flowed straight out of myai-via-MCP into Claude Code, which did what Claude Code is great at: shipped a polished artifact a consultant could put in front of a client.

The deliverable for the discovery phase was a deep user-story bundle and a stakeholder review deck. Both came out of the same myai context. Both landed in the consultant's standard templates. Neither was the kind of work the consultants would have hand-drafted from scratch a year ago.

myai didn't try to be Excel. myai didn't try to be PowerPoint. myai was the brain. Claude Code was the hands.

That's the pattern. It's not a special integration. It's the default.

We dogfood this every day

We use this internally. Constantly. Our knowledge base lives in myai (what we call the "hive mind"). When Mark or I am debugging a piece of myai itself, writing content for the platform, or trying to remember what we decided three weeks ago and why — we connect to our own myai-via-MCP from Claude. Our agent of choice gets our actual context. Eat your own cooking.

Concretely: when we're scoping the next dimension and need to know which artifacts already exist in that workstream. When a customer asks why we made a particular design decision and the answer is sitting in a work order from three months ago. When we're writing a blog post and want to know what we've already published on the topic so we don't repeat ourselves. The graph holds it. The agent retrieves it. We don't have to context-switch into a knowledge-base UI to go look.

This blog post is an example. While drafting, the agent writing alongside me connected to myai's MCP, pulled real artifacts from the graph, surfaced four artifact IDs I needed to read, and pushed back on framings the upstream draft had wrong. When that draft came back overstuffed with capitalized buzzwords, it was Claude (talking to myai over MCP) that flagged the voice mismatch by reading the most recent post and comparing it against the new draft. Then it proposed three structural reframes; two made it into the path forward.

That's the part we keep coming back to. We didn't build myai to be impressive in a demo. We built it because we needed it. The proof that it works is that we'd be slower without it.

Why we built it this way

Four points worth being explicit about:

  1. Every serious agent already has (or will soon have) a long list of MCP integrations. We don't have to build that list, and you don't have to wait for us. If your team lives in Notion, Linear, Slack, GitHub, Snowflake, or any of the other things Claude or ChatGPT can already reach, the integration is already there.

  2. What no agent has out of the box is you. Your prior work, your customers' specific situations, your team, the decisions that got the platform to its current state. Memory and judgment. That's not a connector. That’s a structured model of how you operate.

  3. A context spine that any agent can plug into is more valuable than a closed agent that owns its own integrations. Specialization beats duplication. We get better at being the context layer; you get to keep the agent you already love.

  4. The same myai instance is available to Claude on Monday, ChatGPT on Tuesday, and whatever wins the next round on Wednesday. Your context survives the agent churn. The model layer is going to keep moving fast. Your knowledge layer shouldn't have to move with it.

One thing we'll dig into in the next post: page-level RAG isn't context. Most "RAG-everything" pipelines dump documents into a vector store and pray. We summarize, synthesize, and link, so when your agent asks "what's our position on X," it gets the synthesis we built, not 30 pages of noisy chunks. More on that next.
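The difference is easier to see in code. Here's a deliberately toy sketch of the two shapes — a chunk-dump retriever versus a lookup against a maintained synthesis layer. Every name and data structure here is hypothetical (myai's actual graph is more involved); the point is only the contrast between "k noisy fragments" and "one grounded answer with provenance."

```python
# Toy contrast: chunk-dump RAG vs. a precomputed synthesis layer.
# All names and structures here are hypothetical illustrations.

def naive_rag(query_terms, chunks, k=3):
    """Rank raw chunks by term overlap and return the top-k verbatim."""
    scored = sorted(chunks, key=lambda c: -len(query_terms & set(c.split())))
    return scored[:k]  # the agent must now reconcile k noisy fragments

def synthesized_answer(topic, syntheses):
    """Return the single maintained synthesis for a topic, plus its links."""
    node = syntheses[topic]
    return node["summary"], node["sources"]  # one answer, with provenance

chunks = [
    "pricing discussion notes page 12 maybe revisit Q3",
    "position on pricing we charge per seat decided 2025-06",
    "random meeting chatter pricing lunch",
]
syntheses = {
    "pricing": {
        "summary": "We charge per seat, decided June 2025.",
        "sources": ["work-order-142", "design-decision-9"],
    }
}

print(naive_rag({"position", "pricing"}, chunks))    # three raw fragments
print(synthesized_answer("pricing", syntheses))      # one grounded answer
```

The synthesis node is built (and maintained) ahead of time, so retrieval becomes a lookup against curated knowledge rather than a similarity search over raw pages.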

Try it

If you want myai-as-context plugged into the agent you already use, the endpoint pattern is https://api-[instance].makeyourself.ai/api/mcp (replace [instance] with your tenant subdomain). The Connect via MCP docs page walks through plugging it into Claude, ChatGPT, and Cursor in a few lines of config. The platform page covers what's actually in the graph that you'd be exposing.
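As a rough sketch, a remote-MCP client config typically boils down to a named server pointing at that URL. The tenant name `acme` below is made up, and the exact file location and schema vary by client (the Connect via MCP docs are authoritative), but the shape is something like:

```json
{
  "mcpServers": {
    "myai": {
      "url": "https://api-acme.makeyourself.ai/api/mcp"
    }
  }
}
```

Once the client sees the server, the agent can call myai's tools the same way it calls any other MCP integration.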

Two posts coming next in this series: how we treat RAG (synthesize-then-search, not search-then-pray), and why state matters more than context window length. Both are downstream of this one. Start here.