Today on the AI Daily Brief, why Work AGI is the only AGI that the big labs care about,
and before that in headlines, IPO fever starts to take hold.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright, friends, a couple of quick announcements before we dive in.
Over at brief.ai, you can find out all about the ecosystem of projects surrounding the podcast, with a big fun one this week being Agent Madness.
Our round of 64 is live, and people have contributed some really awesome agents to the bracket, but you're going to want to get your voting in by Thursday before we move on to the round of 32. Again, you can find that at AgentMadness.ai. We kick off today with some fairly significant IPO fever. CNBC recently got a hold of documents that they describe as resembling an OpenAI IPO prospectus,
with the documents warning of numerous risks to OpenAI, like their close ties to Microsoft.
Potential investors were told that Microsoft is "responsible for a substantial portion of our financing and compute,"
and OpenAI also disclosed concentration risks, saying, "If Microsoft modifies or terminates its commercial partnership with us, or if we are unable to successfully diversify our business partners, our business prospects, operating results, and financial condition could be adversely affected." Now, this is particularly relevant given reports that Microsoft is considering a lawsuit to block certain parts of OpenAI's partnership with Amazon. Additional risk disclosures include OpenAI's significant capital expenditure, reliance on compute resources, ongoing litigation with Elon Musk, and their unusual structure as a public benefit corporation.
They even mention geopolitical risk related to Taiwan. Now, while CNBC kind of sold this at first as a pre-IPO prospectus, it appears that this document was shared with potential investors in OpenAI's recent fundraising round, meaning that it doesn't actually seem to have been prepared for the IPO, and yet the list of risks will likely closely mirror disclosures once they actually go public. Sources additionally said that OpenAI is seeking a further $10 billion from investors to add to the $110 billion already raised from SoftBank, Nvidia, and Amazon.
And as we'll hear in the main episode, it sounds like Sam Altman is changing his focus to be able to concentrate more closely on things like fundraising.
Now, an OpenAI spokesperson basically said that this is just a legal nothingburger, commenting,
"This is a standard legal risk factor disclosure unrelated to any potential IPO prospectus.
Similar language has been in place for years. Microsoft is and will remain a critical long-term partner."
Now, much more tangibly in IPO news, SpaceX is aiming to file their IPO paperwork as soon as this week. Sources speaking with The Information said that SpaceX, and by extension xAI, are finalizing the details of their prospectus and could file documents with the SEC this week. The stock is expected to begin trading in June if all goes to plan. That would make xAI the first out of the gate as the three large AI startups head towards IPO. SpaceX is said to be aiming to raise $75 billion in the public offering,
which would make it the largest IPO in history by a wide margin. They were originally aiming for $50 billion, so this would be a substantial upsize. In fact, if it works, that single IPO would surpass all the money raised in IPOs last year combined. SpaceX last raised money at a $1.25 trillion valuation, suggesting that it would debut as around the 12th largest company in the world.
When the prospectus does come out, we'll get our first look at xAI's books.
Analysts expect that SpaceX as a whole is losing money and that xAI is deep in the red. Now, this IPO is also expected to have a few unconventional features. Elon Musk has said that he wants to make IPO shares available to retail investors in larger quantities than usual. Typically, companies make around 10% of IPO shares available to retail prior to the listing, but SpaceX is expected to bump that number to 20%. In addition, the SpaceX IPO won't feature the standard six-month lock-up for existing shareholders.
That safeguard is usually put in place to stop insiders dumping their stock and crashing the price right out of the gate. Sources said that a custom arrangement is still being sorted out, although it's unclear if this means a shorter lock-up or actually a longer lock-up. The Information finally reports that Goldman Sachs, Morgan Stanley, Bank of America, JPMorgan, and Citi have all been preparing IPO plans, even though none of them has officially been hired.
Continuing the theme of bucking convention, SpaceX is said to be considering an approach where each investment bank is assigned a different task as part of this largest IPO in history. Now, when it comes to xAI's role in all of this, there is plenty of skepticism to go around. Contrarian Curse on Twitter writes, "The obvious reason to merge xAI into SpaceX is because xAI is a fourth-rate lab that Elon knows is screwed unless they get oodles of compute for free,
so they'll raise the $75 to $100 billion and jam it into GPUs. SpaceX barely needs the money." And yet, I don't think that there's going to be any shortage of retail excitement. A new ETF is sending pre-IPO AI stocks to the moon, although that's not necessarily a good thing. Last week, Fundrise listed their Innovation Fund, which holds shares in SpaceX,
Anthropic, and OpenAI.
While retail investors have long wanted access to startup equity, this isn't necessarily what most had in mind.
Shares in the ETF are up 1,500% since launch, most recently seeing a 64% jump on Tuesday while
being halted twice for volatility. By the end of the day, the fund was valued at more than 16 times the value of the shares it holds. There's obviously some wiggle room on how the Anthropic stock is valued, but the current ETF price implies almost a $5 trillion valuation, and since they last raised in February at a $380 billion valuation, it is unlikely that in the time since, no matter how good we think Claude Code is, they have jumped to become worth more than Microsoft.
Now, of course, this is actually just a market structure issue. It's not possible to create more shares to satisfy the demand, so the ETF can completely detach from its underlying value. To some, it's an early indication that AI startups will have screaming hot IPOs with a ton of
retail demand, while others think it's just a sign that meme stock trading never went away.
Jack Shannon of Morningstar said, "With the implied valuations, when you have this premium, your upside is gone." Clearly, it's going to attract some meme crowd and get some high-octane trading, but if someone is in this for the long term, frankly, it's a horrible investment at the current
price. Matt Malone of Opto Investments also pointed out how this demonstrates
why staying private for a really long time is really rough on retail investors. Malone said that these numbers are great for investors who want to get out, but if you're coming in, you're paying a huge premium. This shows the dynamic from private markets to public markets, where public markets are often held out as the preferred pricing mechanism, but in this case, the public market price doesn't really make sense. Staying on market themes, SoftBank
is apparently pushing the limits as they scrounge up funding for their OpenAI bet. The Financial Times reports that SoftBank is testing their self-imposed borrowing limits after committing another
$30 billion to OpenAI. SoftBank had previously held themselves to a 25% loan-to-value ratio,
meaning they won't borrow against more than 25% of their stock holdings. Last year's $22.5 billion in funding already stretched them pretty thin, with SoftBank selling all of their Nvidia holdings and taking out billions of dollars in margin loans against their Arm stock. Responding to the FT's reporting, SoftBank CFO Yoshimitsu Goto said,
"I don't deny the possibility in the future that we may temporarily go beyond 25%."
Still, apparently SoftBank won't permanently change their policy, just temporarily work around it as they hit a cash crunch. Basically, more than ever, Masayoshi Son is betting the company on OpenAI. Speaking of OpenAI, a big new deal between that company and Helion Energy has Sam Altman stepping down as chairman and board member of the fusion energy company. Sam Altman personally led Helion's $500 million Series E in 2021 at a $3 billion
valuation. At the time, it was the largest ever venture investment in a nuclear fusion startup. Axios reported that the new deal with OpenAI would guarantee the company 12.5% of the energy initially produced. The goal would be to scale that to 5 gigawatts by 2030 and 50 gigawatts by 2035. Lastly today, the Pentagon's battle with Anthropic has now officially landed in the courts, with a federal judge dragging the Pentagon for their conduct against Anthropic in the latest
court hearing. On Tuesday, Anthropic's application for an injunction was heard in Northern California, and Judge Rita Lynn was very unimpressed. She said the Pentagon's actions were "troubling," her word, as it appeared to be punishing Anthropic for speaking out. Now, the genesis of all this is that Anthropic sued the Pentagon two weeks ago, claiming that their designation as a supply chain risk was unlawful retaliation. Anthropic is seeking for that designation to be overturned.
The case is currently in its earliest stages, with Anthropic seeking an injunction to suspend the designation until there is a full trial. Now, the Pentagon's lawyer suggested the impact of the designation could be narrower than previously stated. He said that his understanding was that the designation would not prevent a military contractor from using Claude Code to write software for the military. Instead, he told the court that the designation only stopped Anthropic
technology from being used within the Pentagon's systems. For those following the story, that is obviously a complete 180 from Secretary of War Pete Hegseth's tweet, where he said, quote, "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The Pentagon is now arguing that this comment was so obviously beyond the scope of the law that Anthropic shouldn't
be allowed to raise it in court. The judge was unconvinced, stating, "It looks like the Pentagon is punishing Anthropic for trying to bring public scrutiny to this contract dispute,
which of course would be a violation of the First Amendment." What's more, in this case, the
chilling effect of Hegseth's words is just as much of an issue as the actual designation. Anthropic said this has already caused harm among their customers. The judge acknowledged that point, commenting, "Everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor. I don't see that as being what this case is about. I see the question in this case as being a very different one, which is whether
the government violated the law." Now, even little old Superintelligent recently got our first letters from customers asking us to send them plans on how we will stop using Anthropic because of their relationships with the US government. That, it should be clear, is not something that we are going to do. Ultimately, the case comes down to this. The Pentagon lawyer argued, "What happens if Anthropic installs a kill switch, or functionality that changes how it functions? That is an unacceptable
risk." The judge retorted, though, "What I'm hearing from you is that it's enough if an IT vendor
is stubborn and insists on certain terms and asks annoying questions, then they can be designated
as a supply chain risk because they might not be trustworthy. That seems a pretty low bar." Anyways, guys, there will be more on this, I'm sure. For now, however, that is going to do it for today's headlines. Next up, the main episode. Alright, folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their
own client zero. They embedded AI and agents across the enterprise: how work gets done, how teams collaborate, how decisions move. Not as a tech initiative, but as a total operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stayed firmly at the center while AI reduced friction, surfaced insight, and accelerated momentum. The outcome was a more capable, more empowered workforce. If you want to understand what that
actually looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI.
If you're looking to adopt an agentic SDLC, Blitzy is the key to unlocking unmatched engineering
velocity. Blitzy's differentiation starts with infinite code context. Thousands of specialized agents ingest millions of lines of your code in a single pass, mapping every dependency. With a complete contextual understanding of your codebase, enterprises leverage Blitzy at the beginning of every sprint to deliver over 80% of the work autonomously: enterprise-grade, end-to-end tested code that leverages your existing services,
components, and standards. This isn't AI autocomplete. This is spec- and test-driven development at the speed of compute. Schedule a technical deep dive with our AI experts at Blitzy.com. That's B-L-I-T-Z-Y dot com. If you're building anything with voice AI, you need to know about AssemblyAI. They've built the best speech-to-text and speech understanding models in the industry, the quiet infrastructure behind products like Granola,
Dovetail, Ashby, and Clueso. Now, as I've said before, voice is one of the most important
modalities of AI. It's the most natural human interface, and I think it's a key part of where the
next wave of innovation is going to happen. AssemblyAI's models lead the field in accuracy and quality, so you can actually trust the data your product is built on. And their speech understanding models help you go beyond transcription, uncovering insights, identifying speakers, and surfacing key moments automatically. It's developer-first, no contracts, pay only for what you use, and it scales effortlessly. Go to assemblyai.com/brief, grab $50 in free credits,
and start building your voice AI product today. This episode is brought to you by Mercury, banking for people who expect more from the tools they rely on. If you're building a modern business but still using a traditional bank, it just doesn't make sense. I use Mercury for all of my AIDB family of companies, and it honestly feels like financial software built for how people actually operate today. It's fast, clean, no in-person visits, no minimum balances, and the things
that used to take forever, like sending wires or spinning up new accounts, take seconds. Everything lives in one dashboard: cards, payments, invoices, team permissions. And you can automate a lot of the busywork so that you're not constantly manually managing your money.
Of all of the services I use to run AIDB, I never thought banking would be one of my most
painless and happiest experiences, but with Mercury, that's exactly what it is. Visit mercury.com to learn more and apply online in minutes. Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC. Welcome back to the AI Daily Brief. The hallmark of 2026 so far has been big, inflection-point-style change. Obviously, that's been the case for individuals, but it also
clearly is the case among companies who are competing in the AI space. Some of the dominant themes have been the agentification of everything and the convergence of features, and nowhere has the AI race gotten more acute than in OpenAI's strategic shifts, as it watches an insurgent Anthropic start to dominate the enterprise and coding conversation. Now, we are coming up on six months of a renewed focus on coding and knowledge work from OpenAI.
Going all the way back, frankly, to the release of GPT-5, it was increasingly clear that code AGI was going to be a big part of their strategy as well. And yet, for most of 2025, we were still in the OpenAI paradigm of letting a thousand flowers bloom. While Anthropic kept their head down and focused on knowledge work, OpenAI was a bit more voracious in its appetite, competing strategically in some ways much more
closely to Google's approach. But then we got OpenAI's Code Red in December, and with it came a renewed focus. And what's more, the focus seemed to pay off. Codex is increasingly a real choice alongside Claude Code for many AI builders, and over the last week there's been lots of reporting about the ways that OpenAI is going to consolidate its focus even more. CEO of Applications Fiji Simo, in fact, confirmed reports from last week that said exactly this.
She tweeted, "Companies go through phases of exploration and phases of refocus.
Both are critical, but when new bets start to work, like we're seeing now with Codex,
it's very important to double down on them and avoid distractions."
Today, we got the latest story on that front. And if anything, it shows that OpenAI is quite serious about the idea of putting away side quests. Now, some of the news was managerial.
Sam Altman told staff on Tuesday that he would be changing and, in fact, in some ways
reducing his role. Altman will no longer have direct oversight of OpenAI's safety and security
teams, and will narrow his focus to raising capital, supply chains, and the data center build-out.
The safety team will be folded into the research organization headed by Chief Research Officer Mark Chen, and the security team will move into the so-called scaling organization under President Greg Brockman. When it comes then to core commercial strategy, Altman's reduced role seems to put CEO of Applications Fiji Simo in the driver's seat. Her core team, the product division, will be renamed AGI Deployment, clearly in line with the company's ambitions.
Last week, the reporting said that Simo had told engineers that the next big project would be combining ChatGPT, Codex, and the Atlas browser into a desktop super app. Now, interestingly, and somewhat unexpectedly, the latest reporting also gave us some information around OpenAI's next big model. In a memo, Altman told staff that the company had finished pre-training the model that is codenamed Spud. He said things are moving faster than
many of us expected, and told staff that they expect to have a "very strong model" in a "few weeks" that the team believes can "really accelerate the economy," his words. Now, people jumped all over that phrase "accelerate the economy." One commenter writes, "Accelerate the economy is doing a lot of heavy lifting. That's either AGI or a really confident marketing team."
Now, obviously, this is an internal communication, and while at this point, I think if you're
OpenAI, you kind of have to assume that anything big that you say is going to be leaked at some point, it is an interesting choice to use that type of phrasing, which, of course, runs the risk of over-promising and under-delivering. Ever since the challenges of the release of GPT-5, which had misaligned expectations, OpenAI has really shied away from that sort of big, bombastic over-promising. Of course, someone like Altman has multiple constituencies that he's
got to deal with. In addition to getting users excited, he's got to keep his team excited as well, and so that communication and idea could be more squarely aimed at rallying the troops in a moment of intense transition. Maybe the most discussed news, though, as it relates to OpenAI's new
focus, is the fact that the mandate to end side quests has claimed its first victim.
As part of his memo, Altman announced that Sora would be sunset, and OpenAI would discontinue all products that use their video models. Within hours of the report breaking, the official Sora app account on Twitter tweeted, "We're saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We'll share more soon,
including timelines for the app and API and details on preserving your work." The decision was apparently largely due to constraints on compute resources. The Wall Street Journal reported that some OpenAI staff had been surprised at how compute-hungry the Sora app was, given the comparative lack of demand relative to all their other products. With Sora winding down, Altman said the substantial compute resources could be redeployed
to run that Spud model once it's released. Haider on Twitter wrote, "OpenAI has been training a new model, codenamed Spud, which they expect will greatly accelerate the economy.
They're also renaming their product division to AGI Deployment. Basically, they want
more compute for Codex, which is why they discontinued Sora." Now, on the one hand, it makes obvious sense for Sora to be the primary casualty of the renewed focus, because not only is it distracting from a consumer perspective, it also is extremely resource-intensive, and as we know, there's just not enough compute to go around. But I do think it marks a pretty significant moment, in that this is maybe the first time that we've seen OpenAI really have to choose,
at least in such a public way, to not do something that they had clear interest and ambition in, because of compute constraints and their need to compete in the market. Yes, we have had Altman and other OpenAI executives at various points in the past say that one model or another was delayed because of compute constraints, but shutting down an entire application that had been unveiled not that long ago with much fanfare
is a pretty compelling demonstration of just how big the stakes of these decisions are. Speaking of which, one bit of fallout from the end of Sora is the end of the deal with Disney.
You might remember that after the Sora launch last October, instead of suing OpenAI,
Disney chose to partner with them and planned to do more with the technology. In the wake of Sora ending, Disney announced that they had canceled the partnership
and will not be following through with their billion-dollar investment into OpenAI.
Still, the split seems amicable enough, with Disney commenting in a statement, "As the nascent AI field advances rapidly, we respect OpenAI's decision to exit the video generation business and to shift its priorities elsewhere. We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are, while responsibly embracing new technologies
that respect IP and the rights of creators." Now, one part of the response to Sora ending from some parts of the community was dancing on the grave. The Prime Agent writes, "Good. Sora accelerated one of the worst aspects of the new AI economy. Absolutely horrible thing for OpenAI to create." This, of course, relates to the feeling that some had around the announcement of Sora that by creating AI TikTok, whatever OpenAI's intentions
were, they were effectively behaving like just the latest tech company to try to steal all of our attention for the sake of ads. A Modest Man agreed, saying, "OpenAI just killed Sora and nothing of value was lost. Put those GPUs to good work rather than making stupid videos. Maybe even try to cure cancer like your original mission said?" Yet while some said that this was an
indictment of the AI video space as a whole, Mendo, who's about as deep in the AI video space as anyone
out there as the co-founder of Machine Cinema, writes, "It's funny seeing people retweeting the
demise of Sora as evidence that AI video is doomed, not realizing that there's a whole ecosystem
now. When Sora was first announced, it was basically just Runway in the game. Now it's over 100 companies clamoring into the space, and marketing departments, agencies, and studios are all locked in." OpenCode's Dax also made the point that even if you didn't like the Sora experiment, this type of experimentation is just part and parcel of figuring out what actually is valuable. He writes, "It's lame to see all the people saying, ha, called it, I knew Sora wouldn't work.
Yeah, duh, everyone thought that, including the people who were working on it. They probably learned a lot trying to make it work anyway. For every successful thing that exists, 100 efforts like this had to fail, and those learnings are fed into making something that ultimately does work and provides you with your steady paycheck." Now, on this idea that these resources could be better spent elsewhere, not just in terms of compute, but in terms of talent,
it is worth noting that the end of Sora is not coming with job cuts. OpenAI's Head of Sora Bill Peebles basically said that the Sora research team would be moving into the world model space, focusing on, quote, "systems that deeply understand the world by learning to simulate arbitrary environments at high fidelity," with the prize, as he put it, being automating the physical economy. Altman reaffirmed this in the memo, saying that the Sora
research team will, quote, "prioritize longer-term world simulation research, especially as it pertains to robotics." Now, for some, the natural next question was: with the end of Sora, would we also see the end of OpenAI's ads push? The short answer is that nothing there has been canceled yet. In fact, OpenAI has hired former Meta executive Dave Duggan as their new VP of Global Ad Solutions. The pilot phase of
ads is over, and ads will be rolling out to all free and Go subscribers in the coming weeks. And yet, apparently there's still a lot of work to be done. Ad buyers have complained that OpenAI doesn't have a modern ad sales platform and is providing very minimal metrics, with multiple ad agency executives saying that they were unable to prove to their clients that ChatGPT ads were working. On shopping, OpenAI is dramatically paring back the feature.
The instant checkout feature, which allows customers to buy directly from the ChatGPT window, hasn't been a success. OpenAI announced on Tuesday that they would be revamping the feature, writing, "We found that the initial version of instant checkout did not offer the level of flexibility that we aspire to provide, so we're allowing merchants to use their own checkout experiences while we focus our efforts on product discovery." Basically, OpenAI will now support
a variety of checkout paths, encouraging merchants to deploy their own ChatGPT apps, as well as clicking away to external shopping platforms. Still, one does have to wonder if there are bigger changes in the offing. Klick Health's Simon Smith writes, "Now, when does OpenAI
kill its ads side quest? Since it's like a $680 billion market dominated by incumbents,
versus the largely untapped roughly $40 trillion plus market of automatable knowledge work?"
Simon's implicit argument here is, of course, that even if the path to get there is more vague, the opportunity to reinvent how work happens in the world just feels quite a bit bigger than the opportunity to reinvent how people buy stuff on the internet. Now, with the renaming of the product team to the AGI Deployment team, we've had a renewed wave of conversations about what AGI actually means. In an appearance on the Lex Fridman podcast,
Jensen was asked a question where AGI came up. Fridman basically asked when Jensen thought an AI would be able to start, grow, and run a successful technology company worth more than $1 billion. Jensen responded, "I think it's now. I think we've achieved AGI. It is not out of the question that a Claude was able to create a web service, some interesting little app, that all of a sudden,
you know, a few billion people used for 50 cents, and then it went out of business again shortly
after. Now, we saw a whole bunch of those types of companies during the internet era, and most of those websites were not anything more sophisticated than what OpenClaw could generate today." When Fridman drilled down, Jensen noted that his prediction only really applied to novelty software for the moment, rather than anything more complicated. He said that he wouldn't be surprised if some social thing happened, or somebody created a digital influencer or some social
application that feeds your little Tamagotchi or something like that, and it became, out of the blue, an instant success. A lot of people use it for a couple of months and then it kind of dies away. However, he continued, the odds of 100,000 of those agents building an Nvidia is 0%. 80,000 Hours' Benjamin Todd wrote an essay, "Do We Already Have AGI?", with his short answer being no, and his longer answer being that, on the most prominent definitions,
current AI is superhuman in some cognitive tasks, but still worse than almost all humans at others. That makes it impressively general, but not yet AGI. Now, regular listeners will know that I don't think the AGI question is particularly useful in practice. However, one thing that I have been thinking about recently, especially as we had that discussion around what the atomic unit of AI disruption should be and why it should be
tasks rather than jobs, is that effectively what we have, and something that might kind of explain
the jagged frontier of AI capability, is almost like task AGI. Almost anything that you can ask AI to do that is specific and discrete, it can do really well.
The problem is that a lot of work is stringing tasks together, which is where AI capability starts to break
down. And so to the extent that one's definition of AGI involves long strings of those tasks working together effectively without a lot of human oversight or intervention, then sure, it's more debatable if we're there or not. I kind of think Ethan Mollick has the right of
it when he tweeted, "Maybe we should retroactively all just agree with Tyler Cowen
so we can stop arguing about it. Also, doing so will drive home the lesson: AGI alone is not enough
for transformation." As all the stories recently of OpenAI and Anthropic trying to partner with
consulting and private equity firms suggest, they are well aware that even if the models are AGI
capable, it's going to take a lot of work to actually get them to diffuse and fully work and reinvent
the systems inside big companies. Still, if you can take away anything from all these moves from
OpenAI, and from the relentless pace of shipping at Anthropic, it's that right now, more than ever,
for AI companies, the only type of AGI that matters to them is Work AGI. For now, however, that
is going to do it for today's AI Daily Brief. I appreciate you listening or watching as always,
and until next time, peace!


