The AI Daily Brief: Artificial Intelligence News and Analysis

How to Build a Personal Context Portfolio and MCP Server

1d ago · 24:30 · 4,933 words

In today's Build episode, we tackle one of the most underappreciated friction points in the agentic era — the fact that every new agent, project, or tool requires you to re-explain yourself from scratch.

Transcript


In a world of agents, everything is about context, and today we are going to help you build a personal context portfolio and MCP server. The AI Daily Brief is a daily podcast and video

about the most important news and discussions in AI.

All right friends, quick announcements before we dive in. First of all, thank you to today's

sponsors: KPMG, Blitzy, Robots and Pencils, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe at Apple Podcasts. If you are interested in sponsoring the show, send us a note at [email protected]. Aidailybrief.ai is also where you will find out everything that is going on in the AIDB ecosystem, and, relevant for today, it's where you can access the companion experiences, which are not for every single show,

but are for many of them. You can find those at play.aidailybrief.ai, and everything that I'm sharing in this episode will be linked there. Today we have another episode in our build week series, and boy does this one cut to the heart of building right now. We officially live in the agentic era, and agents, as we know, need context to do their jobs well. And yet context is one of those things that is very simple to articulate and much harder to actually organize in a way that is

useful. Now this is obviously a big problem in the context of organizations. Michael Chen from

Applied Compute recently dropped an article on X called "What to Expect When You're Deploying AI in the Enterprise." He writes that at Applied Compute, we spent the past six months embedded inside companies to deploy AI into production workflows, i.e. actually sitting in their offices, filing tickets, reading Confluence pages, fighting for access to data, and shipping agents into production that improve over time. There is surprisingly little written about working with

large organizations in the age of AI, so this is our attempt to fill that gap. And big and blaring right at the front is point one: data-ready is just a state of mind. The gap between "we have data" and "we have data in a format that an AI system can learn from" is enormous. It surprises everyone, even teams that have already wrangled internal data at incredible companies. Most enterprise

data was never structured with AI consumption in mind. It's difficult to imagine a more challenging

starting point for a data project, and at its core, every agent deployment is a hard data problem.

Now he's using the word data but obviously in this case this is at least partially synonymous with

context. One of the big differentiators between organizations that are leading and organizations that are lagging is that the lagging organizations tend to operate without their AI systems having access to context. In other words, they're dropping a copilot on people's heads and hoping it all works out, which is very different from becoming an AI-native organization. Now, there are a lot of organizations who are working on the context problem for the enterprise. Just to take an example

from the last 24 hours at the time that I was recording this episode: Notion, whose entire play for enterprise AI is basically the pitch that they already have your enterprise's context, announced database agents, which they describe as a team of little librarians in your database, keeping it up to date automatically using context from your pages, workspace, and the web. So, okay, we have an acknowledgement that context in the enterprise is tough, and we're even seeing

a lot of work on the context that agents can provide each other around their tool use. Andrew Ng recently wrote: Should there be a Stack Overflow for AI coding agents to share learnings with each other? Last week, I announced Context Hub, an open CLI that gives coding agents up-to-date API documentation. In our new release, agents can share feedback on documentation: what worked, what didn't, what's missing. This feedback helps refine the

docs for everyone, with safeguards for privacy and security. So Context Hub for agents is all about the context they need to use tools better. And yet, you might have spotted that what all of those efforts don't have is an emphasis on the individual. Now, recently we had a moment where the challenge of the portability, or lack thereof, of personal context reared its ugly head. In the wake of the Pentagon threatening and then following through on their designation of

Anthropic as a supply chain risk, and OpenAI's quickly regretted decision to announce their deal with the Department of Defense on the same night, there was a big push over the course of the next couple of days to drop ChatGPT and switch to Claude. That was of course when Claude hit

number one in the App Store for the very first time. Now into that maelstrom, the team at Anthropic

released what they called a feature to make it easier to import saved memories into Claude. "Switch to Claude without starting over," they promised. And of course, this is a big deal. If you've been investing in either Claude or ChatGPT or Grok or Gemini or whatever system you use, over time it's learned so much about you that the idea of having to explain all of those things to a new LLM once again becomes a reason just not to switch. Now, Claude's approach to importing memory

was pretty simplistic. In fact, all it was was a copyable prompt that Claude wrote that says, basically: "I'm moving to another service and need to export my data. List every memory you have stored about me as well as any context you've learned about me from past conversations," etc., etc. Basically, it was a prompt that asked ChatGPT to write up everything it knew about you so you could hand that document off to a new chatbot. Not bad, but there's got to be

something more, right? That's what we're going to talk about and build today: a personal context portfolio. In other words, a portable, machine-readable representation of who you are, so that in the future, every AI agent, tool, or system you use knows about you coming in, and you are no longer dealing with memory- and context-based product lock-in. So the problem, as we've discussed, is that every time you set up some new agent or some new Claude project or onboard some new tool, and presumably if you're listening to the show that

happens more than infrequently, you have to re-explain yourself from the ground up. Your role,

your projects, your preferences, your constraints, even how you like to talk to the machine. And when that was a very occasional switch, maybe that was in the realm of annoyance. By the time you're dealing with three agents or five or ten agents, though, it's completely untenable. And as you get into the world that we're going into, where every week there are going to be new types of agents and agentic surfaces that you're interacting with, it is going to become

absolutely critical to have a way to get out of paying this context repetition tax.

Now, importantly, the context repetition tax doesn't just waste time, it also degrades quality, and I guarantee you that even if you have been willing to provide your context to a new agent you're working with, the sheer time and effort it takes to explain everything fully means that there was probably a lot that was left out. The solution that I'm proposing is a personal context portfolio. A structured set of markdown files that together represent you as a context package.

Effectively, it's an operating manual for any AI that works with you that knows about your roles,

your projects, your team, your tools, your communication style, your goals, your constraints, your expertise. Effectively, it's API documentation but for you, a single source of machine-readable truth about who you are that any agentic system can read. Now, a couple of design principles for this.

One is obviously this is going to be markdown first. You might have yesterday just listened to

the agent skills masterclass, and even if you haven't, you're probably familiar with this new primitive that is skills. Skills are effectively a folder of information that updates the knowledge base and context for any given agent, all rendered in markdown files. Every AI system on Earth can read markdown; it is the universal interchange format for context, and so the personal context portfolio is going to be markdown-first. Second, we're going for modular, not monolithic.

This is not going to be one giant about-me file. We have separate files and separate templates for separate parts of the whole that is you. This means that you can give different agents different pieces of what they need. It allows agents to grab what's relevant and ignore what's not. It also means, which gets to principle three, that this is living and not static. This is not a thing you write once; it's a thing you maintain, or better, that your agents help you maintain.

As projects change and priorities shift, the personal context portfolio should evolve with you. And again, because it is modular, it's not just that you'll change what's in this initial file set; you'll probably find reasons to expand the files that are actually in the portfolio. Now, obviously the last piece, which is sort of implicit in the markdown-first principle as well, is that this is meant to be portable across everything, working with Claude,

ChatGPT, OpenClaw, Gemini, and whatever else comes next. By being markdown-first, it is just files and you can bring them anywhere. So what are the files? I want to stress that this is not necessarily, for everyone, going to be comprehensive or even the right breakdown. But I wanted to have a clean starting point that would be significantly better, like 10x better, than nothing. And so the portfolio template that we've put together is divided into 10 different dimensions. The identity.md

file is first. It's your name, your role, your organization, what you do in a single paragraph. This is you distilled down into a page. If the agent can only read one file, you want it to be this one. Next up is rolesandresponsibilities.md. This isn't your job description; this is your actual lived experience. It explains what your job or your activities actually involve day to day. It can be anything from what decisions you make to what you produce to who you serve to what your

week looks like. Currentprojects.md goes a level down. These are the active work streams; contained in this file are status, priority, key collaborators, goals, KPIs, and what done looks like for each.

My guess is that this will be the file that changes most often, because presumably from week to week, what is a current project versus a past project versus an icebox project is going to change. Teamandrelationships.md is the key people you work with: their roles, how you interact with them, what they need from you, what you need from them. When you've got agents prepping meeting notes or agendas or one-on-ones, this is going to be one of the key files that they need.

Toolsandsystems.md is what you use, how it's configured, what's connected to what. Rather than agents running off and using whatever tools they think would be useful, this gives them a picture of your stack so they can make sure that what they're doing actually integrates with the systems you already have. Communicationstyle.md: maybe this one seems less important to you.

It's a big one for me, at least. Every time I interact with agents, I'm always surprised at how much this one matters. I am sick of sycophancy or fluff or coddling or wavering. Effectively, there are a lot of things about

the way that models on average communicate that I very much dislike. And so communicationstyle.md could include everything from how you write, how you want things written for you, your tone preferences, your formatting preferences, and what you dislike. This is a file that is both internal-facing and external-facing: it impacts how the agent communicates with you, but it also helps make every output of the agent feel like yours.

Goalsandpriorities.md is a level up from current projects. This is about what you're optimizing for right now, whether the right frame of reference is this week, this month, this quarter, this year, or your career overall. It gives your agents the ability to weigh decisions and recommendations appropriately, viewing the work as a continuous whole rather than siloed in the

context of any individual project. Preferencesandconstraints.md is the always-do-this, never-do-that file. This could be a very diverse set of different things for different people. If you're using agents to help plan your travel, maybe this is dietary restrictions or timezone constraints; maybe it's about tools you refuse to use, or strong opinions you have about formatting. Basically, this is all the stuff that, out of the box, an agent is going to get wrong most of the time unless you tell it how to get it right. Domainknowledge.md is your expertise areas, your industry

context, key terminology. These are the things that you know that a general-purpose AI doesn't. If you work in biotech, this is where the agent learns that you know what a phase two trial is and doesn't need to explain it. Now, this is another one that I think could be very expansionary over time. At the beginning it might be just a log of what you know, but over time it might actually impart some of that knowledge, so your agents in the future know it too. Finally is decisionlog.md, the history of

past decisions and the reasoning behind them. I actually think that this could end up being the most underrated file, because when an agent is helping you think through a new decision, knowing how you've decided things before is enormously valuable. Alright folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their

own client zero. They embedded AI and agents across the enterprise: how work gets done, how teams collaborate, how decisions move. Not as a tech initiative but as a total operating-model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stayed firmly at the center while AI reduced friction, surfaced insight, and accelerated momentum.

The outcome was a more capable, more empowered workforce. If you want to understand what that actually

looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI. Blitzy is driving over 5x engineering velocity for large-scale enterprises. A publicly traded insurance provider leveraged Blitzy to build a bespoke payments-processing application, an estimated 13-month project, and with Blitzy, the application was completed and live in production in six weeks. A publicly traded vertical SaaS provider used Blitzy to extract

services from a 500,000-line monolith without disrupting production, 21 times faster than their pre-Blitzy estimates. These aren't experiments. This is how the world's most innovative enterprises are shipping software in 2026. You can hear directly about Blitzy from other Fortune 500 CTOs on the Modern CTO or CIO Classified podcasts. To learn more about how Blitzy can impact your SDLC, book a meeting with an AI solutions consultant at Blitzy.com. That's B-L-I-T-Z-Y dot com.

Most companies don't struggle with ideas. They struggle with turning them into real AI systems that deliver value. Robots and Pencils is a company built to close that gap. They design and deliver intelligent, cloud-native systems powered by generative and agentic AI, with focus, speed, and clear outcomes. Robots and Pencils works in small, high-impact pods: engineers, strategists, designers, and applied AI specialists working together to move from

idea to production without unnecessary friction. Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results, including initial launches in as little as 45 days depending on scope. If your organization is ready to move faster, reduce complexity, and turn AI ambition into real results, Robots and Pencils is built for that moment. Start the conversation at robotsandpencils.com/aidailybrief. That's robotsandpencils.com/aidailybrief.

Robots and Pencils: impact at velocity. It is a truth universally acknowledged that if your enterprise AI strategy is trying to buy the right AI tools, you don't have an enterprise AI strategy. Turns out that AI adoption is complex. It involves not only use cases, but systems integration, data foundations, outcome tracking, people and skills, and governance. My company, Superintelligent, provides voice-agent-driven assessments that map your organization's maturity against industry benchmarks across all of these dimensions. If you want to find out more about how that works, go to besuper.ai, and when you fill out the get-started form, mention maturity maps.

So that's the 10 files that make up the template of the personal context portfolio. But how are you going to fill this out? You, my friend, live in AI world, so you are certainly not going to write this by hand, my goodness. Instead, you are going to have the AI interview you to get it done. For each file, you're likely going to follow a pretty similar loop.
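Since the portfolio is just a folder of markdown files, scaffolding the empty structure takes only a few lines. This sketch is illustrative rather than taken from the episode's repo; the file names follow the ten dimensions described above, but the actual templates may name things differently.

```python
from pathlib import Path

# The ten portfolio dimensions described above.
PORTFOLIO_FILES = [
    "identity.md",
    "rolesandresponsibilities.md",
    "currentprojects.md",
    "teamandrelationships.md",
    "toolsandsystems.md",
    "communicationstyle.md",
    "goalsandpriorities.md",
    "preferencesandconstraints.md",
    "domainknowledge.md",
    "decisionlog.md",
]

def scaffold(root: str = "personal-context-portfolio") -> Path:
    """Create an empty portfolio folder with one stub file per dimension."""
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    for name in PORTFOLIO_FILES:
        f = base / name
        if not f.exists():  # never clobber a file you've already filled in
            title = name.removesuffix(".md")
            f.write_text(f"# {title}\n\n<!-- filled in via AI interview -->\n")
    return base
```

Run it once, then point your AI interviewer at the folder and let the interview loop fill in the stubs.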

First of all, if you're using something like Claude or ChatGPT, you'll probably want to create a project to house this all, so that the context of the process itself gets shared across the different instances of these types of interviews. And effectively, you're going to go through a process of interview, to draft, to reaction, to revision, and so on and so forth in a recurring loop until you feel like you've got enough information to be going on with. Now, because we live in build world, I didn't just want to describe this all to you guys; I wanted to actually provide

some resources. So here are a couple. First of all, I've put up the personal context portfolio as a public repo on GitHub. This is going to have templates for all of those files that I just mentioned, and the templates include not only the ultimate output structure that you're going to want, but an interview protocol that you can hand your AI build partner. Each of the 10 files has that interview protocol as well as the output structure. There's also an overall interview protocol that you

can use as you're setting up your project. If you want to get a sense of how this might look

in practice, there are three synthetic demonstration examples: one for an entrepreneur, one for an executive, and one for a knowledge worker. There's also a folder called wiring, which gives some resources for turning this into a Claude project, or an MCP, or an API layer, and we'll come back to that in just a minute. So hopefully this makes it fairly easy to get up and going, and like I said, all of this is available on play.aidailybrief.ai, and I might even put this one actually on the

main section of the website. But come on, man, we live in agent build world. We can go a step farther than this, right? Of course we can. So for those of you who don't want to bother with all these messy templates and interview protocols and all of that, you can just use the personal context portfolio app that we built. This is exactly what it sounds like. It's got two sections: an interview, which is powered by Opus 4.6, and the portfolio that it's building persistently in the background.

The interview is designed to never be fully done. It works through questions based on the overall goals, trying to fill out all 10 of those portfolio files, but it will engage with you for as long as you want. If you want to come back, it will continue to talk with you, adding more information in. Now, the cool thing about this is that rather than having to break this up into 10 different interviews, like you might have to if you were using a Claude project, when you answer one question, if it's relevant for different portfolio files, that all gets added at once. You can see, for example, here, when I explained what Superintelligent did, helping enterprises with AI strategy, it added notes to the identity file, the current projects file, and the domain

knowledge file. This speeds things up. Anytime you want, you can download your portfolio, and obviously this is, of course, totally private to you, completely free, and hopefully a faster leg up to get started. Now, honestly, given this is just one episode, I probably should not have spent as much time as I did trying to get the actual interaction right, but I gotta say, I think this one is pretty useful, so you should go check it out. The only reason, by the way, that I'm not

giving you a dedicated URL right now outside of the podcast website is that I'm not sure what dedicated URL I'm actually going to use for this. Now, once you've got your portfolio downloaded, the last piece of the puzzle is how you make it highly transportable. Now to be clear, you don't necessarily need to do this step. If you host, for example, your own personal context portfolio on GitHub, many agents are going to be able to interact with that and use it. Plus, if you have the

folder of Markdown files, you're going to be able to drop that in any chatbot. But for the sake of exploring more advanced modes, let's now put your personal context portfolio into an MCP server.

Now, for this, we are going to lean heavily on what I think is the single most important advice

that I give anyone about how to learn how to use AI, which is to lean on the AI as your tutor and build partner. I've been managing this whole endeavor as part of my AIDB training project on Claude and I got through the entire process and I could tell as I was transitioning from the part of the project where I was getting these templates up on GitHub to the part of the project where I wanted to put my personal context into an MCP server that I was exhausting Claude's

context window. For me, that usually manifests as it getting short and kind of lazy, and so I had to write a handoff that was specifically about this MCP goal, and we dove in. And pretty much all the time that I spent on this was going back and forth with Claude to help me figure things out.

Now, the first job of this was Claude wrapping its head around exactly what I wanted out of the experience: whether it was read-only or read-write, what the auth model was, whether it was a combined resource or the individual files. From there, it produced this massively long document with all the steps, which, looking back now, were the steps that I would ultimately go through, as well as this particular bunch of code that I would use, a long README, and a couple of other documents. Now, for the purposes of both the podcast and my own purposes, I said this is 1,000% too complex.

Walk me step by step through creating an MCP server, and I'll figure out how to...

And to put a fine point on this: the AI has zero judgment. There is no risk of you looking or seeming dumb, because there's no one on the other end of the line to think that. When you're trying to get something explained step by step, even if it tries to race ahead, demand that it go back and do things more simply. So in our case, from there, Claude got way basic. It reminded me first what an MCP

server is mechanically: a program that responds to a specific protocol. An AI tool sends it a request saying, "What do you have?" and it responds with a list of resources. The tool says, "Give me this resource," and it responds with the content. So of course, in our case, the AI tool wants to know more about you or your project or your team, and the MCP server has all of those resources at the ready. Now, step two: it divided the way that an MCP server can run into the two categories of

local or remote. Is this all just for things going on on your machine or do you want yourself or others to be able to access it from anywhere? Ultimately, I wanted to do both so we dove in.

Now, once again, it immediately tried to not go step by step, but to give me a whole bunch of

information at once, and I had to remind it to slow down. This is the process that I would recommend you follow: pull up Claude or ChatGPT or Gemini or whatever your LLM of choice is once you have this personal context folder, and have it walk you step by step through how to set it up, first locally and then on the web. And one thing to keep in mind as you're doing that is that the vast majority of the time I spent on this was sharing screenshots of things that went wrong and asking it to help me figure them out. For example, this little message: "one MCP server failed." A lot of the work is troubleshooting. Now, in that case, we figured out that port 3000 on my computer was already taken, so it was a relatively easy switch, but that's the type of thing you're going to

experience as you go back and forth on this. Another small tip: one thing that I've noticed is that when Claude or ChatGPT are giving you some code that you need to run somewhere or copy-paste into Cursor or VS Code or something like that, once they've given you the initial block of code, they'll often say, "now just change this one thing." I have found personally that a lot of the errors that I run into are accidents in the copy-pasting of the changing of that one thing. And so I will frequently say, even if it's repetitive: when you're asking me to change one line from this whole 77-line document, just give me the whole new 77-line document so I can copy-paste the entire thing

at once. A couple other errors we ran into: one of them was a file-naming mismatch, and after that, pretty much everything was running. Finally, after about 10 or 15 minutes, we got to the point where I could say, "What do you know about my identity?" and it was able to pull up the identity file. Ultimately, this was actually a very small amount of work. Almost all of the time was in the troubleshooting. Now, to deploy it remotely, there were just a couple more steps. First we had to create a

GitHub repo. Next we had to make sure all the portfolio files were copied into the project. We had to change a line or two in the server code, and then, step by step, it told me exactly what to do to get everything pushed up to GitHub. We were able to deploy it using Railway, which took basically no time at all. The jump from local MCP server to something that was available on the web actually took less time than the local setup, just because we ran into fewer issues.
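On the port 3000 collision mentioned a minute ago: before launching a local server, you can check whether a port is already taken with a few lines of standard-library Python. The port 3000 here is just the example from my setup; substitute whatever your server uses.

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when something accepts the connection,
        # i.e. the port is already taken by another process.
        return s.connect_ex((host, port)) != 0
```

If `port_free(3000)` comes back False, pick another port or find the offending process (`lsof -i :3000` on a Mac) before you burn a troubleshooting round trip with your AI.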

My recommendation is that it's worth taking the time to work with an AI build partner like Claude or ChatGPT to go through this process, even if in this case you're not sure how useful this particular MCP server will be. I do think that a lot of the value you're going to get out of the follow-along for this is going to be just in the creation of the files, which is of course why I spent most of my time building out the context portfolio

interview agent. But it is a really great way, and a pretty simple and clear context, to learn how to use MCP, and so if you haven't yet, give it a try. Overall, though, that is how we go from endlessly repeating ourselves, telling AI about ourselves and our projects and our teams, to doing it once, letting it stay updated, and giving every agent and AI that you interact with access to the same pool of information. Hopefully this was a useful one. Have fun this weekend

trying it out for yourself. For now, that is going to do it for today's AI Daily Brief.

Appreciate you listening and watching. As always, until next time, peace!
