Today on the AI Daily Brief, everything that Google Gemini has launched recently, and why the Google Workspace CLI is such a big deal. Before that, in the headlines, Meta has acquired Moltbook.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right, friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, KPMG, AIUC, Blitzy, and Mercury. To get an ad-free version of the show, which is just $3 a month, head on over to patreon.com/AIDailyBrief, or you can subscribe on Apple Podcasts. To learn more about sponsoring the show, send us a note at sponsors@aidailybrief.ai. Quick reminder, again, that the newsletter is back. It's coming out every day that there's a show, and it has all the links that I focus on in the show. You can find that at aidailybrief.ai. And lastly, a new fun project which I will be talking about much more in the days to come. It is March Madness season, a 64-contender bracket which leads to one grand champion, in college basketball or, in our case, to a determination of the coolest agent built this year. The inflection point we are living through is the agent inflection point, and I want to see the coolest stuff you guys have built. So we are going to run a full bracket. If you go to agentmadness.ai, you can sign up and share your agent for consideration.
And if you are selected as one of the 64, your agent will become a contender to be known as the coolest agent of 2026 so far. Again, you can find out more about that at agentmadness.ai, and I will be sharing much more about it in the days to come. Now, with all that out of the way, let's talk about Moltbook.
We kick off the day with an interesting one. You might remember Moltbook, the social network for agents that went viral a little more than a month ago. It was when OpenClaw was first becoming a thing. In fact, it unfortunately caught that very short middle period between when it was called Clawdbot and before it settled on its final name of OpenClaw, when it was called Moltbot. Moltbook, obviously taking its naming cue from Facebook, was an agent-only social network where agents were creating threads and having conversations, all observed by humans. Now, we did a big conversation about what it actually meant and what was actually going on. Specifically, was this emergent sentience and consciousness, or was this just agents cosplaying sentience and consciousness using their Reddit training data, because their humans had unleashed them on this thing? Whatever you felt, it was interesting enough to get lots and lots of agents pointed in that direction. For a while it looked like there were millions, although it turned out that people were spamming the network to show its problems. And as of today, there are apparently 195,000 human-verified AI agents. It was, in other words, fascinating if nothing else. But now, apparently, Meta has hired the folks behind Moltbook. Matt Schlicht and Ben Parr will be moving into Meta Superintelligence Labs, which is the unit that's run by former Scale AI CEO Alexandr Wang. One of the other interesting things about the acquisition is that Moltbook itself was built largely by Schlicht's OpenClaw agent, Clawd Clotterberg,
making it, I think, probably one of the first acquisitions of an OpenClaw-created site. In any case, much of the conversation around this is, to put it mildly, skeptical. Milo Smith writes, "Moltbook has zero real users. Is Meta just throwing around cash for fun and name recognition?" Tutorial writes, "Moltbook was vibe-coded in a weekend, hyped for a week. Most of the interactions turned out to be fake, and Meta just acquired it. What are they even doing over there?" Now, part of the reason that this is hitting a wave of skepticism is that for the last I don't even know how long, pretty much all the reporting around Meta's AI strategy has been about personalities, talent, and personality conflicts.
The most recent wave of that are reports that have suggested a divide between chief AI officer Alexandr Wang and other veteran Meta executives. The tension, if these reports are correct, is between, on the one side, a research-first approach with the goal of developing a leading frontier model, said to be represented by Wang, and on the other side, call it a product-and-integration-first approach, said to be represented by CTO Andrew Bosworth and Chief Product Officer Chris Cox, focused on using Meta's data to build AI that improves existing social media and advertising platforms. This came to a head with the Times of India reporting that Meta was done with Wang, although that article was quickly denied by Meta and received a full retraction, and Zuckerberg
posted a photo of him and Alexandr at Meta HQ. There were some who took this as not just a gimmick. Prakash on X writes: if you don't understand why Zuck had to get Moltbook: One, Zuck believes there are a finite number of different social mechanics to invent. Once someone wins at a specific mechanic, it's difficult for others to supplant them without doing something different. (That comes directly from a Zuckerberg email from 2012, by the way.) Continuing, Prakash writes: Two, Moltbook, he believes, has invented one of those social mechanics. Three, he does not care if 50% of Moltbook was prompted by users; in fact, that is better for him, because he's more uncertain about AI agent attention value than human attention value. Four, that a large number of accounts were faked is also irrelevant.
What matters is that every OpenClaw instance awakes knowing, or finding out, that Moltbook is the social site for claws. Five, in effect, the memetic gravity of Moltbook has been established, even though many of the accounts have been faked.
Lots of people don't agree, but I think that this longstanding belief in a finite number of different social mechanics to invent is probably what this is about.
Now, of course, we'll have to see if anything comes of it, but the duo apparently start at Meta next week. Next up, Mira Murati's Thinking Machines Lab has signed a strategic partnership with Nvidia. The multi-year partnership will see TML deploy at least one gigawatt of compute powered by Nvidia's next-generation Vera Rubin chips. TML said this will support their frontier model training and platforms delivering customizable AI at scale. Alongside the compute buildout, TML said that Nvidia has made a significant investment in the company, though no dollar amount was disclosed. Nvidia has, of course, made several similar investments in upstart AI labs, backing Reflection AI and Humain, as well as Periodic Labs. This deal is somewhat unique, though, involving the buildout of dedicated compute for TML, and at significant scale; one gigawatt is around half of OpenAI's total compute as of the end of last year. At this point, though, it's still far from clear what TML is actually planning. Announcing the partnership, Mira Murati said, "Nvidia's technology is the foundation on which the entire field is built. This partnership accelerates our capacity to build AI that people can shape and make their own as it shapes human potential in turn." Whatever they're building, though, TML just got much better access to the resources they'll need to make it a reality. Next up, moving over to markets: Oracle has shaken off negative sentiment with a strong
earnings report. Coming into this week, the latest reporting around Oracle was about thousands of imminent layoffs to help fund their massive capex; a big part of the concern was that revenues would lag spending as data centers come online. Tuesday's earnings call went a long way toward settling those fears. Co-CEO Clay Magouyrk reported that 400 megawatts of capacity had been delivered in the previous quarter, with 90% of that capacity delivered on time. Revenue related to server rental is up 84% year over year, reaching $4.9 billion for the quarter. That growth rate was 16 percentage points higher than the previous quarter and beat analyst expectations by 5 points, demonstrating that demand is still accelerating. Overall, Oracle revenue grew 22% compared to last year, coming in at $17.2 billion. Oracle also noted that they wouldn't need to raise more money to fulfill their obligations, noting that most of the equipment needed is either funded up front via customer prepayments, so Oracle can purchase the GPUs, or the customer buys the GPUs and supplies them to Oracle. The stock gained 8% in after-hours trading, beginning to reverse the trend that saw the stock price cut in half since last September, when the OpenAI deal was signed. Contrarian Curse on X writes:
"I thought Oracle did a good job on the call. They did paint a clean picture of why it's not so easy to just slap AI everywhere. The only wrappers that are safe are ones that are embedded into sticky platforms and workflows, and Oracle fits the bill." Magouyrk spoke extensively on the call about why AI isn't killing enterprise SaaS. One of the quotes: "I've not yet met a customer who tells me they're ready to give away their retail merchandising system, their core banking system, demand deposit accounting systems, electronic health record systems, and that some small cobbling together of niche AI features is going to replace all of that overnight. Thus, we think AI is disruptive, but we think we're the disruptor, because we're actually embedding the AI right into our applications at no additional charge." Overall, it seems like the market responded to the new co-CEO voice on the call. Jake Eyes writes, "They needed to lock Ellison in a cage. This felt like a far different Oracle."
Lastly today, an interesting legal battle: Amazon has won a court order blocking Perplexity's shopping agents from their platform. Last November, Amazon filed a lawsuit against Perplexity, claiming their bots had fraudulently accessed the Amazon marketplace in breach of its terms of service. The allegation was that Perplexity was misrepresenting the nature of the traffic to circumvent web scraping controls. Amazon noted that Perplexity's agents take control of a user's account, arguing that this poses a serious security risk. Perplexity, meanwhile, argued that their bots were acting on behalf of users and should be treated identically to human traffic. On Tuesday, a judge granted a temporary injunction to prohibit the activity ahead of trial. They wrote in their decision, "Amazon has provided strong evidence that Perplexity, through its Comet browser, accesses, with the Amazon user's permission but without authorization by Amazon, the user's password-protected account." Articulating the legal standard to issue an injunction, the judge added that Amazon has shown a likelihood of success on the merits of its claim. Now, as this case continues, it could have pretty significant ramifications for agent shopping. Primarily, Amazon is arguing that they should have control over how users access their platform, including the right to block third-party agents. However, they also discussed the advertising implications of agent traffic. Amazon said that Perplexity's agents were served ads, which led to contractual issues with advertisers who only pay for human impressions. If Amazon is successful, they could set a precedent where marketplace websites have the ability to force customers to use first-party shopping agents, which some think would stifle competition in this still-nascent vertical. Perplexity, for their part, says that they will "continue to fight for the right of internet users to choose whatever AI they want." Very interesting stuff, and more on this to come, but for now, that is going to do it for today's headlines.
Next up, the main episode. Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision point: do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster? KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the build, buy, or borrow decision, does a great job cutting through the noise with a practical framework to help you choose based on value, risk, and readiness, and how to scale agents with the right trust, governance, and orchestration foundation. Don't lock in the wrong model. You can download the paper right now at www.kpmg.us/navigate. Again, that's www.kpmg.us/navigate. Quick update on something I've been following. AIUC-1 is the first real standard for AI agents, developed with Fortune 500 security leaders to basically define what safe, enterprise-ready AI agents should look like. A little while back, I mentioned that ElevenLabs became certified against AIUC-1. This week, two more big players joined: Fin from Intercom, and UiPath. What that certification means in practice is real-time guardrails that block unsafe responses, protection against manipulation, and a full safety stack designed for enterprise environments.
And that's why this matters. You've now got leaders across three major AI agent categories, enterprise automation, customer support, and voice, all certifying against the same standard. That starts to look less like a one-off and more like the beginning of a real industry trend.
To learn more about the world's first AI agent standard, go to aiUC-1.com. That's aiUC-1.com.
If you're looking to adopt an agentic SDLC, Blitzy is the key to unlocking unmatched engineering velocity. Blitzy's differentiation starts with infinite code context. Thousands of specialized agents ingest millions of lines of your code in a single pass, mapping every dependency. With a complete contextual understanding of your code base, enterprises leverage Blitzy at the beginning of every sprint to deliver over 80% of the work autonomously: enterprise-grade, end-to-end tested code that leverages your existing services, components, and standards. This isn't AI autocomplete. This is spec- and test-driven development at the speed of compute. Schedule a technical deep dive with their AI experts at Blitzy.com. That's B-L-I-T-Z-Y dot com. This podcast is brought to you by Mercury, banking designed to work the way modern software does.
One thing I've always found weird as a founder is that almost every tool you use to run a company is modern. Your analytics tools, your email tools, your AI tools, they all feel like software built in, you know, the last decade. Then you go to banking, and suddenly it feels like you've time-traveled back to the '70s. That's why I use Mercury. It's business banking that actually works like the rest of the tools founders rely on. Clean interface, everything where you expect it, and basic things like wires, cards, or permissions take a couple clicks instead of a phone call and three forms. For the whole AIDB ecosystem, it is just dramatically simpler. You can see everything from the dashboard, control spend, and give the right people access without handing over the whole account. If you run a company and you're tired of banking feeling like the one tool that never modernized, check out Mercury. Visit Mercury.com to learn more and apply online in minutes. Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC.
Welcome back to the AI Daily Brief. In all of the conversation around Anthropic and their fight with the Pentagon, as well as their surging revenue growth and what it means for their competition with OpenAI, as well as just the broader AI coding conversation between Codex and Claude Code, Google and Gemini, which had such powerful tailwinds coming into the beginning of this year, has had relatively less narrative space than I think many of us might have imagined would be the case. And yet, the company has been absolutely furiously shipping. For example, we have, of course, gotten new models. We got Gemini 3.1 Pro, as well as Gemini 3.1 DeepThink and Gemini 3.1 Flash. We also got Nano Banana 2. Nano Banana 2, you might remember, came not only with better infographic reasoning and text-rendering capabilities, but also just a big upgrade in speed. And then there was maybe my favorite thing, just from a sheer "the future is so cool" perspective, which was a testable version of Genie 3. Genie is Google's world model, and while we had seen some very impressive demos of it before, we hadn't actually had a chance to try it out. But now, in just about a minute of waiting, I can be walking through a pirate colony during the golden age of piracy. It's only for 60 seconds, but it's still a really fun and cool way to get a sense of what might be coming.
You might remember that when this was released, we saw the very beginning signs of the SaaSpocalypse on Wall Street, as investors started to tank gaming company stocks. Across all of these different announcements, I think Google's strategy for AI competition starts to become visible. One aspect of it is absolutely multimodality. Google is competing on not only text but images, videos, and even world models. Additionally, they're pushing for some very advanced and scientific use cases, which are more outside the consumer or even business-work-context mainstream. Another pillar of the strategy, I think, is also deep integration with the context they already have about you, and that's where a bunch of the recent announcements that we're going to cover today come in.
Despite how powerful some of these new models are and how cool the Genie 3 demo is, the release that I have seen get by far the most chatter is the Google Workspace CLI. This, of course, speaks to just how important the coding use case is right now in driving the AI industry forward. For those of you unfamiliar, CLI stands for command line interface.
It's basically a text-based way to talk to a program through your terminal.
CLIs have been around forever and are the backbone of how developers interact with tools. If you want to use Stripe, AWS, or almost any other developer tool, there's a CLI for it. You type something like "stripe create payment" in the terminal, and it just works. CLIs have recently become even more important, as the better portion of agentic coding has been happening inside the terminal through harnesses like Claude Code and Codex. You're not clicking around in some GUI; you're sitting in the command line talking to an AI that can execute commands.
So if you are an agent builder and you want to integrate a new vendor, the path of least resistance is that the vendor has a CLI, and your coding agent, already being in the terminal, can just run the commands. No new protocol to learn, no new integration layer to build. Now, Google, of course, has a lot of tools and spaces that agents might want access to:
Drive, Gmail, Calendar, Sheets, etc. And up until recently, a lot of folks were defaulting to using something called the gog CLI, built by Peter Steinberger, the same guy who built OpenClaw. It was a very big deal, then, when last week Google dropped the official Google Workspace CLI.
Mickey on Twitter points out the enthusiasm: your OpenClaw, Claude Code, and Perplexity Comet agents just got a bit more useful. Khanica explained the value in simple terms: agents can instantly read and summarize emails, draft and send replies, schedule meetings automatically, search Drive for files, create Sheets from raw data, generate docs and reports, and organize Drive files, all from one agent workflow.
Nat Silverlock noted the surprise of the old-is-new-again feel of this. He writes, 2026 is the year of the, checks notes, CLI. And Leon on X reframes it this way. They write: Google isn't shipping a CLI for developers; they're shipping an API for agents that happens to also work for humans. Google's Justin Poehnelt, who built the CLI, wrote a long blog post about it called "You need to rewrite your CLI for AI agents." He writes, "I built the CLI for Google Workspace agents-first, not a CLI that noticed agents were using it. From day one, the design assumptions were shaped by the fact that AI agents would be the primary consumers of every command, every flag, and every byte of output. CLIs are increasingly the lowest-friction interface for AI agents to reach external systems. Agents don't need GUIs. They need deterministic, machine-readable output, self-describing schemas they can introspect at runtime, and safety rails against their own hallucinations." He then goes on to write a whole bunch about the technicals behind this.
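To make that quote concrete, here is a minimal sketch of what an "agents-first" CLI can look like. To be clear, this is not Poehnelt's actual code and not the real Google Workspace CLI; the command, flags, and schema here are all hypothetical, purely to illustrate deterministic JSON output plus a schema an agent can introspect at runtime.

```python
# Hypothetical "agents-first" CLI sketch: deterministic JSON on stdout,
# plus a --describe flag so an agent can introspect the output schema
# at runtime instead of guessing from prose docs.
import argparse
import json
import sys

SCHEMA = {
    "command": "files list",
    "flags": {"--max": "int, maximum number of results"},
    "output": {"files": [{"id": "str", "name": "str"}]},
}

def main() -> None:
    parser = argparse.ArgumentParser(prog="demo-cli")
    parser.add_argument("--describe", action="store_true",
                        help="print the output schema as JSON and exit")
    parser.add_argument("--max", type=int, default=10)
    args = parser.parse_args()

    if args.describe:
        # Self-describing: the agent discovers the contract on its own.
        json.dump(SCHEMA, sys.stdout, indent=2)
        return

    # Deterministic, machine-readable output: no colors, spinners, or
    # interactive prompts that an agent would trip over.
    result = {"files": [{"id": str(i), "name": f"file-{i}.txt"}
                        for i in range(args.max)]}
    json.dump(result, sys.stdout)

if __name__ == "__main__":
    main()
```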
Interestingly, a couple of days later, he also wrote a piece about why, for some, there had been a shift away from MCP and back towards CLIs. And before we actually read what he had to say, there's some evidence that this is a broader phenomenon. Latent Space's swyx recently ran a poll: let's say you are an agent builder and want to integrate a promising new vendor you found. What would you be happiest to see in the docs? Not based on Twitter hype; you, personally, for your situation right now. The options were API, MCP, CLI, or skills.md. Out of 769 people voting, MCP was actually in last place with just 9.1%. A traditional API was number one with 39%, followed by CLI with 31.2%, and a skills.md Markdown file at 20.5%.
swyx points out there was a time in 2025 when MCP would have been the clear number one on this list. In his blog post "The MCP Abstraction Tax," Justin sums up the issue this way: every layer, from data to API to MCP, introduces an abstraction tax. Humans need simplified abstractions to manage cognitive load. LLMs can navigate a complex CLI via its help output and call precise APIs in seconds. MCP and CLIs optimize for different things; understanding what each one costs you is more useful than picking a winner. For complex enterprise APIs, the fidelity loss at each layer compounds in ways that matter. Basically, he says every protocol layer between an agent and an API is a tax on fidelity;
that tax is sometimes worth paying, but you should understand what you're giving up at each layer, because the cost compounds. Khanica again sums it up this way: most AI integrations use MCP servers, but MCP loads tons of tools into the context window. One developer measured 142 tools loaded, 37,000 tokens consumed, and 20% of context gone before work even starts. The CLI solves this differently. Instead of loading tools into context, the agent simply runs commands like gws drive files list. The CLI returns JSON, and the agent continues. No context window tax.
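For a rough sense of what that pattern looks like from the agent harness's side, here is a small sketch, assuming the gws drive files list command mentioned above prints JSON to stdout (the exact output shape is my assumption, not documented behavior):

```python
# Sketch of the CLI pattern: instead of preloading dozens of tool
# definitions into the context window, the agent shells out to a
# command on demand and parses the JSON it prints.
import json
import subprocess

def run_cli(args: list[str]) -> dict:
    """Run a CLI command and parse its JSON stdout."""
    proc = subprocess.run(args, capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

# Only the bytes of this one result enter the model's context,
# not the definitions of 142 tools the agent never calls.
files = run_cli(["gws", "drive", "files", "list"])
print(f"agent sees {len(json.dumps(files))} bytes of output")
```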
The takeaway is not that CLIs are always better than MCP, but more that we're still in the midst of the AI tooling transition. Everyone right now continues to experiment, as things evolve, with how to use old tools and systems repurposed for agents versus building new layers of infrastructure. That is an ongoing process, but the big deal about Google officially having a Workspace CLI is that they are now playing at the very heart of that space and making it much easier for agent builders to interact with what is a very important suite of tools. Going back to the Google and Gemini strategy that I was talking about at the beginning, this is an example of them leveraging their existing distribution network in ways that are distinct for the agent era.
The next update is one that came just this week.
Google AI Studio's Logan Kilpatrick writes: introducing the new Gemini-powered Docs, Sheets, Slides, and Drive experience, featuring AI overviews, fully editable AI-made slides, and new grounding sources to make writing docs context-aware. And Sundar Pichai announced it this way: new Gemini updates to make Google Workspace more personal, helpful, and collaborative. Choose your sources and create a doc draft in seconds, build complex sheets nine times faster, or generate on-brand slide layouts with a simple prompt. Plus, Drive now generates summarized answers right at the top of your search results, so no more digging through folders. The blog post about this pitches it as a speed thing, but I actually think that there's something else going on here.
The post reads, "We've all been there: the blinking cursor, the empty spreadsheet, or the first blank slide. Whether you're planning a trip, organizing an event, or launching a side project, getting started is often the hardest part. Today we're making Gemini in Docs, Sheets, Slides, and Drive more personal, capable, and collaborative to help you get things done faster. When you select your sources, Gemini can now pull relevant information from your files, emails, and the web to securely connect dots and uncover useful insights, while keeping your information safeguarded." When you look at the specific examples, though, a lot of the focus is on better access
to the context that makes Google so powerful.
So when you click on "create a document with Gemini," you're going to be able to select the sources in your Google ecosystem that it can pull from, and it's that sort of integration that makes the experience so much smoother, and hopefully makes the content on the other side that much better. The spreadsheet example they have asks for help tracking income for a particular month, and again can pull from relevant sources like previous spreadsheets that live in Google Drive. Point being that while they're pitching it as a speed play, the underlying idea here is better integrating the context that makes doing things from within your Google Workspace so much more valuable.
The sum totality of the documents that you have in your Google Workspace is something that Anthropic and OpenAI can't compete with. It is a major advantage for Google and for Gemini, but only if they make that context accessible, and that, I think, is what this update is about.
I also don't think it's an accident that this comes right after Microsoft announced some big updates to their M365 suite with Copilot Co-work. Mustafa Akinci says, "The office suite just became the AI agent wars. Both companies know whoever wins productivity wins everything." Another announcement from this week that further demonstrates Google's focus on multimodality at the core of their strategy is their updated embedding model, Embedding 2.
Embeddings are basically the system that allows AI to find the right information.
Traditional computing search is done by keywords. If you search for "buy a car," it's going to look for those exact words. Embeddings, on the other hand, let the system understand that "buy a car," "purchase a vehicle," and "get a new ride" are all basically the same request. Instead of matching words, they help AI match meaning.
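Here is a toy sketch of that idea, with made-up three-dimensional vectors standing in for the high-dimensional embeddings a real model would produce:

```python
# Toy illustration of "matching meaning, not words": texts become
# vectors, and semantically similar texts land close together.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: ~1.0 means same direction, ~0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings (a real model returns these from raw text).
buy_car = np.array([0.90, 0.10, 0.05])
purchase_vehicle = np.array([0.85, 0.15, 0.05])
bake_cake = np.array([0.05, 0.10, 0.95])

print(cosine(buy_car, purchase_vehicle))  # high: same meaning
print(cosine(buy_car, bake_cake))         # low: different meaning
```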
That means that when you're building an AI system with things like search, or copilots looking through company documents, or chatbots answering questions from knowledge bases, the system uses embeddings to quickly figure out which documents, files, or pieces of information are actually relevant. What makes Embedding 2 a big update is that it is natively multimodal.
So previously, if you had an image, a chart, or a slide, the system would have to convert it into text first, usually by generating a caption, and then search using that. Multimodal embeddings remove that conversion step. Gemini Embedding 2 can understand and retrieve images, diagrams, screenshots, and text all together.
So if you ask a question of a company knowledge base, like "where do we talk about redesigning the checkout page," theoretically Embedding 2 could pull up a Slack conversation, a product spec document, a screenshot of the old UI, or a slide from a meeting, all as relevant sources. This is the type of announcement that's not going to get nearly as much attention as, for example, a big Genie 3 demo, but which brings very significant functionality upgrades to this new agentic era. The TLDR on all of this is that even as tons and tons of ink are spilled talking about the OpenAI versus Anthropic fight, and all of these important things going on,
Google Gemini is quietly just releasing feature after feature and product after product, all pointed in similar directions that play to the company's main strengths. And to leave you with one recommendation, just purely for your own enjoyment: if you haven't yet, go check out the recently released video generation feature in NotebookLM. People are having tons of fun with it, as witnessed by this recent video from Ethan Mollick, who prompted it to take a deep research report and make a video telling him exactly how to take over Rome if he time traveled to 66 BC with a single backpack. As Ethan puts it, it's actually pretty fun to watch, and it gets a lot of historical details in as well.
And now guys, that is going to do it for today's AI Daily Brief.
Appreciate you listening or watching, as always, and until next time, peace.