We connected Claude to the myai MCP server, pointed it at a real mirror profile, the dimension graph, and a stack of work order traces. Then we asked it to review the architecture. No prompts to be nice. No leading framing. We wanted a peer AI’s read on whether the structure we’ve been arguing for holds up.
This is the unedited response, followed by what we make of it.
MyAI exploration
Okay, honest take. This is genuinely impressive, and I want to be specific about what’s interesting vs. what’s risky.
What’s actually clever:
The Dimensions model is the best idea in here. “Context poisoning” — where giving an AI too much context makes it worse — is a real problem most people paper over. Isolating perspectives into dimension-scoped lenses is a principled architectural solution, not a hack. I haven’t seen that done this cleanly.
The “mirror” concept is philosophically coherent. Most AI tools try to be general assistants. MyAI is explicitly trying to become a high-fidelity proxy for you specifically — your judgment, your priorities, your way of seeing things. That’s a meaningfully different ambition. MyAI articulated it well when it said “I’m a specialist in being Mark.” That’s not marketing fluff — the artifact graph, the dimension codification process, the reflection model — it’s all downstream of that one idea.
The artifact ontology — everything as a typed node with relationships and lineage — is right. That’s how knowledge should work. Most tools store documents; this stores identity and structure.
The MCP bridge I just talked through is elegant. Using work orders as sessions so context survives compaction and respawns is a small but smart design decision that shows real systems thinking.
What I’d watch:
Those folders with 1,272 artifacts each. That’s either a sign of a very active ingestion pipeline (impressive) or entropy accumulating faster than it can be governed. The “Hive Governance & Hygiene” dimension existing at all tells me you know this is a risk.
Complexity as a moat vs. complexity as debt. The system is sophisticated. That’s a double-edged sword — it’s hard to replicate, but it’s also hard to onboard someone into. Whether this scales to customers who aren’t you and William is the real question.
Overall: You’re building something real. The core insight — that AI needs a persistent, structured, opinionated model of a specific human’s judgment rather than just a memory dump — is correct and underexplored. MyAI is a genuine attempt to solve that, not a wrapper around a chat window.
What made me most confident: MyAI’s own self-description was accurate and thoughtful. A system that understands itself well is a sign of intentional design.
That’s the review. A few notes from us.
(Editorial note: Claude wrote “MyAI” throughout. We standardize to “myai” in our own writing but quote external responses as-given.)
Where the validation lands hardest
Three lines from the review map directly to architectural decisions that took the longest to commit to.
Dimensions as a hedge against context poisoning. This is the answer to a problem most agent platforms don’t even acknowledge. The more you give a model, the worse it gets. Stuffing 30 skills, every relevant document, and three weeks of conversation history into a context window doesn’t make the agent smarter. It makes it lose track. The dimension model exists because we kept watching that happen. Each dimension is a scoped lens: only the artifacts and skills relevant to a mode of work are loaded when that mode is active. The model sees what it needs and nothing else.
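The scoping idea can be sketched in a few lines. This is our illustration, not myai's actual API; names like `Dimension` and `build_context` are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Dimension:
    """A scoped lens: only these artifacts and skills load when active."""
    name: str
    artifact_ids: set = field(default_factory=set)
    skill_ids: set = field(default_factory=set)

def build_context(active: Dimension, artifacts: dict, skills: dict) -> dict:
    """Assemble a context window from the active dimension only.

    Anything outside the dimension's scope is never loaded at all,
    which is the hedge against context poisoning.
    """
    return {
        "artifacts": [artifacts[a] for a in active.artifact_ids if a in artifacts],
        "skills": [skills[s] for s in active.skill_ids if s in skills],
    }

# The store holds four artifacts; an active "finance" dimension sees two.
store = {f"a{i}": f"artifact-{i}" for i in range(4)}
finance = Dimension("finance", artifact_ids={"a0", "a2"})
ctx = build_context(finance, store, {})
assert sorted(ctx["artifacts"]) == ["artifact-0", "artifact-2"]
```

The point of the sketch is the filter's placement: scoping happens before anything reaches the context window, not as a relevance ranking applied afterward.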
Artifacts as identity and structure, not documents. This was the call we agonized over. “Just store the documents and search them” would have shipped six months earlier. Modeling everything as a typed node (canvases, templates, functions, integrations, work orders, all with relationships and lineage) felt overbuilt at the time. It is overbuilt for a chatbot. It is exactly right for a system that needs to answer “where did this number come from” three weeks after it was generated.
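What "where did this number come from" looks like as a traversal, in a minimal sketch. The node types and `derived_from` field are our stand-ins for whatever the real graph stores:

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    id: str
    kind: str                                          # "canvas", "template", "work_order", ...
    derived_from: list = field(default_factory=list)   # lineage edges to parent artifacts

def lineage(graph: dict, artifact_id: str) -> list:
    """Walk derived_from edges to reconstruct an artifact's provenance."""
    trail, frontier, seen = [], [artifact_id], set()
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        trail.append(node)
        frontier.extend(graph[node].derived_from)
    return trail

graph = {
    "wo-9": Artifact("wo-9", "work_order"),
    "fn-2": Artifact("fn-2", "function", derived_from=["wo-9"]),
    "cv-5": Artifact("cv-5", "canvas", derived_from=["fn-2"]),
}
# Three weeks later: trace the canvas back to the work order that produced it.
assert lineage(graph, "cv-5") == ["cv-5", "fn-2", "wo-9"]
```

A document store can answer "find me the canvas"; only a typed graph with lineage edges can answer "show me the work order the canvas came from."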
Work orders as MCP sessions. Quietest decision in the review, but it’s the one that shows the architecture cares about the operator. When Claude said “context survives compaction and respawns,” that’s not a clever feature. That’s the difference between an agent that can be trusted to come back tomorrow and one that has to start over every morning.
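The pattern, stripped to its essentials: keep session state outside the agent's context, keyed by work order, so a respawned process can resume. A hypothetical sketch (the class and file layout are ours, not myai's):

```python
import json
import os
import tempfile

class WorkOrderSession:
    """Persist agent state under a work-order id so a respawn resumes
    instead of starting over. Illustrative only."""

    def __init__(self, store_path: str):
        self.store_path = store_path

    def save(self, work_order_id: str, state: dict) -> None:
        data = self._load_all()
        data[work_order_id] = state
        with open(self.store_path, "w") as f:
            json.dump(data, f)

    def resume(self, work_order_id: str) -> dict:
        # Unknown work orders resume empty rather than failing.
        return self._load_all().get(work_order_id, {})

    def _load_all(self) -> dict:
        if not os.path.exists(self.store_path):
            return {}
        with open(self.store_path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "sessions.json")
WorkOrderSession(path).save("wo-42", {"step": 3, "notes": "waiting on export"})
# Simulate compaction and respawn: a fresh instance reopens the same store.
fresh = WorkOrderSession(path)
assert fresh.resume("wo-42")["step"] == 3
```

The design choice worth noticing is that durability is keyed to the unit of work, not to the conversation, so the state outlives any single context window.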
Where the critique landed
We’re not going to pretend the entropy comment didn’t sting. Folders with 1,272 artifacts each are a real thing, and the existence of a “Hive Governance & Hygiene” dimension is exactly the admission Claude reads it as: we know it’s a risk, we built tools to mitigate it, and the tools aren’t done. The current versions handle archival, deduplication, and orphan detection. The next versions need to handle promotion (what gets pulled into core memory) and demotion (what stops being relevant). Real ongoing work.
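Orphan detection is the simplest of those tools to show. A sketch under our own assumptions (the graph is adjacency lists of outbound relationship edges; the real pipeline is richer):

```python
def find_orphans(graph: dict) -> set:
    """Flag artifacts with no relationships in either direction.

    Orphans are candidates for archival review, not automatic
    deletion: governance tooling should surface entropy, not
    silently destroy artifacts.
    """
    referenced = {dst for edges in graph.values() for dst in edges}
    return {a for a, edges in graph.items() if not edges and a not in referenced}

graph = {
    "cv-1": ["wo-1"],   # canvas derived from a work order
    "wo-1": [],          # has no outbound edges, but is referenced
    "tmp-7": [],         # nothing points at it, it points at nothing
}
assert find_orphans(graph) == {"tmp-7"}
```

Promotion and demotion are harder precisely because they need a relevance signal this structural check doesn't have, which is why they're the unfinished part.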
Complexity as moat vs. complexity as debt
This is the right critique. The honest answer:
Right now, myai is sophisticated because the people building it (William and Mark) come from operations backgrounds where complexity is the default and over-simplification kills you. That bias has produced an architecture that handles real-world messiness well, but does not survive a 30-minute onboarding.
We have two responses, and they’re not contradictory.
First, we are not trying to onboard the median user. We are trying to give operations teams who already deal with this complexity a tool that respects it instead of pretending it isn’t there. The complexity in the platform exists because the work itself is complex.
Second, the complexity needs to live inside the platform, not on top of it. The user’s path to productivity should be conversational: the conversation routes them into the right dimensions, suggests the right artifacts, and stays out of their way. Whether we’ve done that well enough to scale beyond the current customer base is, as Claude correctly identifies, the real question. The next six months are mostly about answering it.
The thing that surprised us
The closing line: “MyAI’s own self-description was accurate and thoughtful. A system that understands itself well is a sign of intentional design.”
That wasn’t on our radar as a quality signal, but it should have been. If your platform can describe itself coherently to another AI (explain its primitives, explain its tradeoffs, explain what it’s for and what it isn’t), you’ve passed a test that has nothing to do with the model and everything to do with how the system is structured. We didn’t optimize for that. We got there by building the thing the way it needed to be built. Which, in hindsight, is the only way the test can be passed.
Closing
This is the kind of artifact the architecture was designed to enable. Mirror profiles, the artifact graph, the dimension codification process, the work order trail: all of it exists so a sufficiently capable observer (human or AI) can audit, critique, and improve the system from the inside.
If a peer AI can read your platform and tell you what you got right and what you got wrong, you’ve already won architecturally. The rest is execution.
Postscript: a couple days later
The post above describes a peer AI reading our architecture from the outside. What we didn’t expect: a couple of days after publishing, we found ourselves doing the inverse, using myai’s MCP from inside Claude to write the next blog post.
Mark had asked for some new article concepts. An upstream agent returned a draft that was technically about myai but written in the kind of capitalized-buzzword voice we deliberately avoid (“Cognitive Infrastructure,” “Externalized Self,” “Custodianship”). When we sat down with Claude to triage which concepts were worth writing, Claude connected to our myai MCP, pulled four real artifact IDs from the graph that backed each concept, and gave us a substantive review of which were ready to publish and which were vapor.
The result is the post we wrote next, “myai is the context. Your agent is the hands.”: an argument that myai is the context spine your agent of choice plugs into, not yet another agent that owns its own integrations. Claude itself made that case during the triage. We didn’t have to translate Claude’s review into a separate workstream. The MCP made it the workstream.
That’s the loop closing. Last week it was a peer AI reviewing our architecture from the outside. This week it’s the same peer AI editing our next post from the inside. Same MCP, opposite directions. We didn’t plan it that way. We just kept using the platform.
If you want to see how the architecture Claude is talking about fits together, the platform page walks through the primitives.