Today on the AI Daily Brief, AI is officially political and how.
Before that in the headlines, is OpenClaw the most important software release ever?
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI. All right friends, quick announcements before we dive in. First of all, thank you to today's sponsors: Recall.ai, Robots and Pencils, AIUC, and Blitzy. To get an ad-free version of the show, go to patreon.com/AIDailyBrief. If you're interested in sponsoring the show, or really anything else in the AIDB ecosystem, head on over to aidailybrief.ai. While you are there, two things that I want to call your attention to. First, it's the last day to do our February poll survey, and I appreciate everyone who has done that. It will just take a couple minutes, and it helps us track AI usage and give people data about what's actually going on, what's trending, and what's changing. If you contribute, you get that data before anyone else. Second, it's the last day to sign up for the first edition of Enterprise Claw. You can find that at enterpriseclaw.ai. With that, let's go over to the headlines and some big words from Jensen Huang. We kick off today with a fun little quote from Nvidia CEO Jensen Huang
at the Morgan Stanley TMT conference on Wednesday. Jensen absolutely waxed poetic about OpenClaw, saying, "OpenClaw is probably the single most important release of software probably ever. Linux took some 30 years to reach this level. OpenClaw, in what is it, three weeks, has now surpassed Linux. It is now the single most downloaded open-source software in history." Now, what he's specifically referring to is not the idea that overall OpenClaw has more downloads than Linux or Facebook's React library. What he's referring to is this chart that's flying around, which is true, of the GitHub star history of these projects, where OpenClaw is officially ahead of those vaunted projects in GitHub stars and has done so extremely quickly. Now, hold aside the specific details. The context really matters here. OpenClaw is a phenomenon that has fundamentally changed how people think about what AI can do. It has been ground zero in ushering in the true agent era, and one of the more consequential parts of Jensen's comments is that they came at a Wall Street conference, clearly signaling that personal agents are a big deal and that investors need to get up to speed.
This shift in AI is also aligned with Huang's predictions about where the industry is going. For more than a year, Huang has been conceptualizing AI tokens as the new fundamental unit of work in GDP. During his talk, Huang updated this thesis and claimed his so-called token economies are coming into focus. Jensen also discussed Nvidia's recent $30 billion investment in OpenAI,
specifically in the context of it not being the $100 billion deal that was rumored to be in the works last year. He said, "I think the opportunity to invest $100 billion in OpenAI is probably not in the cards." Not because Nvidia has gotten any less bullish on the company, but because Jensen's base case is that they IPO by the end of the year, meaning, in his words, this might be the last time we'll have the opportunity to invest in a consequential company like this. Huang added that Nvidia's $10 billion investment in Anthropic late last year was also probably their last, which isn't to say that Nvidia won't continue to benefit from the success of those companies. For example, Jensen commented that Amazon's gigantic compute partnership with OpenAI means that Nvidia is, quote, "ramping AWS like mad." Now, OpenClaw is not just a US phenomenon. In fact, The Information recently reported on the many ways OpenClaw is changing what Chinese founders are building. They highlighted a recent OpenClaw hackathon in China, where one contestant made Tinder for AI agents, basically, where OpenClaws can find love interests for their humans. Still another created an automated
recruiting site where OpenClaws owned by job seekers and companies interview each other. There was also a gamified social media and travel platform that hosts content created by OpenClaws. Felix Tao, the co-founder of MindVerse AI, said, "Every founder I know is now working on new projects to test the boundaries of what personal AI agents can do." One of the interesting differences in the Chinese tech scene is the large companies diving straight into the new agent trend. ByteDance, Alibaba, and Tencent are now all offering hosted OpenClaw instances to customers, something that none of the Western cloud giants have done so far. Kimi creator Moonshot and MiniMax are also offering cloud-based versions of OpenClaw within their proprietary apps as a way to draw in new users. The article also mentioned numerous startups and founders working on OpenClaw projects, either building features on top of OpenClaw or spinning up competitors in the personal agent space.
Q-Varis co-founder Dongshi Q said, "Tech entrepreneurs in China responded immediately to OpenClaw and launched new projects because they knew all of their competitors would be doing the same. Nobody wants to be left behind." Parker Lyman of Manus even tweeted, "This is how competitive it is in China. OpenClaw installers have started offering two hours of house cleaning as part of the package in order to win clients. They'll even list any items you want to declutter on a second-hand marketplace. All for 57 bucks." Writes Lenny Rachitsky of Lenny's Podcast, "I don't think enough people are appreciating how insane this is. Over 80 OpenClaw meet-ups scheduled around the world and more popping up every day. For a product less than a few months old, I've never seen anything like this.
Something very special is happening." Now, moving over to the numbers game: just one day after Anthropic's revenue numbers were leaked to the press, OpenAI struck back and leaked a larger number.
Anthropic had surpassed $19 billion in ARR, more than doubling their run rate since the end of last year. That put them within striking distance of OpenAI, who told investors they had closed 2025 with more than $20 billion in ARR. Now, as soon as I heard that Anthropic was officially at basic parity with the last number that we got from OpenAI, I just knew that we were, somehow, from some leak or other, going to get new OpenAI numbers. And sure enough, late last night, The Information reported that OpenAI has now exceeded $25 billion in ARR. They also firmed up their 2025 estimate, claiming they actually ended the year with $21.4 billion. That makes this a 17% jump over the first two months of 2026, which, if it were not for Anthropic's staggering gains in the last couple of weeks, we'd be talking about with just as much slack in our jaws. Sources added that OpenAI's ARR calculation was based on revenue averaged over the past four weeks, but if they extrapolated just the past week, ARR would be even higher at $30 billion. Derek Thompson tweeted about all this: AI might still be an industrial bubble, because almost every big technology is a bubble of some kind,
and the revenue has a long way to go to catch up to CapEx, but the idea that this industry has no business model is a take aging like a rotten banana. Lastly today, something which I am absolutely going to come back to and do more of an operator-focused episode on at some point: NotebookLM can now create fully animated videos to accompany reports. Google is calling these cinematic video overviews, and the results are pretty impressive. The demo showed a brief clip of a video
overview about mathematical limits using images and video, with some very cool space-themed visualizations. Now, we did previously have video overviews, but up until now they'd just been slideshows. They were already a useful extension of audio overviews, but there wasn't as much of a, let's say, wow factor. The new cinematic video overviews are immediately more striking, and pretty much guaranteed to make people wonder how they were made. Specifically, they feel more like a native video presentation with custom animations and images, rather than a simple slideshow leveraging stock images. Robert Scoble presented an even more impressive example, sharing a video based on summarizing AI chatter on X over the past few days. The video opens up on an animation of a da Vinci-style contraption, as the voiceover discusses how AI discourse has moved on from chatbots to discuss infrastructure, agents, and politics. The video flips through various generated images in a matching style, making the entire presentation feel like a coherent whole. It also draws on real photos where relevant. Scoble said that he analyzed tweets and generated the script externally, but the rest of it was straight from NotebookLM, which also generated an audio podcast and a mind map. Now, one of the things that we've talked about numerous
times this year is how much Google's product strategy, I think, is about flexing their lead in multimodal AI. And one could argue that this is one of the bigger flexes to date, especially if you factor in actual immediate-term relevance for real people and real workers. Cinematic video overviews orchestrate the Gemini 3 family of models, Nano Banana Pro, and Veo to weave together voiceover, images, and video in a way that just feels like the beginnings, at least, of a professional video production. What's more, this is not your grandpa's 10-second video clip. Scoble's video, for example, runs for almost five minutes. Describing the new tech, Google wrote: Gemini now acts as a creative director, making hundreds of structural and stylistic decisions to tell the best story with your sources. It determines the best narrative, visual style, and format, and even refines its own work to ensure consistency. Now, at this stage, the only downside is that the feature is exclusive to the top-tier Ultra subscription, making me once again grateful that my job justifies holding one of those types of subscriptions for all the major players. Very, very cool stuff from Google, something I'm very excited to play around with more. For now, however, that's going to do it for the
headlines. Next up, the main episode. Why is there always a meeting bot in your Zoom call? Blame Recall.ai. Recall.ai powers the meeting bots and desktop recording apps behind products like Cluely, HubSpot, and ClickUp. They handle the hard infrastructure work, capturing clean recordings, transcripts, and metadata across Zoom, Google Meet, Microsoft Teams, in-person meetings, and more, so developers don't have to build it themselves. If you're building a meeting notetaker or anything involving conversational data, Recall.ai is the API for meeting recording. Get started today with $100 in free credits at recall.ai/aidb. That's recall.ai/aidb. Most companies don't struggle with ideas. They struggle with turning them into real AI systems that deliver value. Robots and Pencils is a company built to close that gap. They design and deliver intelligent, cloud-native systems powered by generative and agentic AI, with focus, speed, and clear outcomes. Robots and Pencils works in small, high-impact pods:
engineers, strategists, designers, and applied AI specialists working together to move from idea to production without unnecessary friction. Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results, including initial launches in as little as 45 days, depending on scope. If your organization is ready to move faster, reduce complexity, and turn AI ambition into real results, Robots and Pencils is built for that moment. Learn more at robotsandpencils.com/aidb. That's robotsandpencils.com/aidb. Robots and Pencils:
impact at velocity. There's a new standard that I think is going to matter a lot for the enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1, and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation. Go to AIUC.com to learn about the world's first standard for AI agents. That's AIUC.com.
Want to accelerate enterprise software development velocity by 5x? You need Blitzy, the only autonomous software development platform built for enterprise codebases. Your engineers define the project, a new feature, refactor, or greenfield build. Blitzy agents first ingest and map your entire codebase, then the platform generates a bespoke agent action plan for your team to review and approve. Once approved, Blitzy gets to work, autonomously generating hundreds of thousands of lines of validated and tested code, with more than 80% of the work completed in a single run. Blitzy is not just generating code, it's developing software at the speed of compute. Your engineers review, refine, and ship. This is how Fortune 500 companies are compressing multi-month projects into a single sprint, accelerating engineering velocity by 5x. Experience Blitzy firsthand at blitzy.com. That's B-L-I-T-Z-Y dot com.
Welcome back to the AI Daily Brief. When it comes to what I cover on this show, I have a strong preference, as you guys well know at this point, for changes and updates that are directly and immediately relevant to you and your lives and your work. And yet, of course, all of those changes are happening in a larger societal context that we can't ignore. And right now we are in a particularly notable moment in the history of the politics of AI, which I would describe as something like: if AI has flirted with politics so far, it is now, through this phase, becoming much more directly and distinctly a political issue. The Verge goes even farther, writing in a recent piece that AI is now part of the culture wars.
And with a recent memo from Anthropic CEO Dario Amodei, the culture-war-ness of this conversation is likely to get worse, not better. I'm sure at this point you've been keeping up to speed with the Anthropic-Pentagon saga, but the quick TLDR is that Anthropic had a couple of red lines around domestic surveillance and autonomous weapons that they refused to change in their contract, which really ticked off Defense Secretary Pete Hegseth, which led to all sorts of threats of the U.S. government designating Anthropic as a supply chain risk, which is not something that the U.S. government has historically done for American companies, which led to memos and much public fighting last week, finally culminating in President Trump blasting out on Truth Social that Anthropic was now persona non grata with the U.S. government, and Hegseth following up that not only would they not be working with Anthropic, they would in fact be pursuing the supply chain risk designation and pushing other defense contractors to stop working with Anthropic as well. On the same day that this was all going down, OpenAI announced their own deal with the Department of War, and it has just been a mess. In the wake of OpenAI announcing their deal last Friday night, Anthropic CEO Dario Amodei published a 1,600-word memo that was not happy with basically anyone. The memo was later leaked to The Information, and Amodei got right
to the point. He opened the memo by writing, "I want to be very clear on the messaging that is coming from OpenAI and the mendacious nature of it. This is an example of who they really are, and I want to make sure everyone sees it for what it is." Dario explained that while we didn't know exactly what was in the OpenAI contract, he had a few impressions about how their safeguards would work. He suggested that OpenAI would deploy a model without legal restrictions, but with a safety layer that amounts to model refusals on certain tasks. Amodei continued, "Our general sense is that these kinds of approaches, while they don't have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater." He explained that applications like autonomous weapons or domestic surveillance rely on context that the model can't be privy to, such as the presence of a human in the loop or the provenance of surveillance data. Amodei also alleged that the idea that Anthropic was offered the same terms as OpenAI and rejected them was false. He added that he also believed it was false that OpenAI's terms meaningfully prevent AI use in domestic mass surveillance or autonomous weaponry. Circling back to earlier statements, Dario reiterated the core concern that the DoW has legal surveillance powers which are, quote, "not of great concern in the pre-AI world but take on a
different meaning in a post-AI world." Amodei wrote that Anthropic's negotiations on Friday had ultimately come down to a single clause in the contract. According to his retelling of events, the Pentagon had agreed to everything Anthropic had asked for, but required the inclusion of the specific phrase about analysis of bulk-acquired data. He said, "This exactly matched the scenario we were most worried about. We found that very suspicious." On autonomous weapons, Amodei said the Pentagon had argued that a human in the loop is required under the law, but Dario noted that this is only Pentagon policy, which was added during the Biden administration, and could be changed at will by Secretary Hegseth, adding, "So it is not, for all intents and purposes, a real constraint." Still, a lot of the details of the negotiations were kind of secondary to the main point he was trying to make. Specifically, he said that a lot of
the messaging from OpenAI and the DoW is, quote, "just straight-up lies about these issues or tries to confuse them." In pretty much no uncertain terms, he accused Sam Altman of acting in bad faith, suggesting that all of his appearances to support Anthropic in public were just about him acting in a way that, quote, "doesn't make it seem like he gave up on the red lines and sold out when we wouldn't." In the spiciest and perhaps most politically fraught part of the memo, Dario argued that the disagreement didn't actually have to do with the contract. He wrote, "The real reasons the DoW and the Trump admin do not like us is that we haven't donated to Trump while OpenAI and Greg Brockman have donated a lot. We haven't given dictator-style praise to Trump while Sam has. We have supported AI regulation, which is against their agenda. We've told the truth about a number of AI policy issues like job displacement. And we've actually held our red lines with integrity rather than colluding with them to produce safety theater for the benefit of employees. Which I absolutely swear to you is what literally everyone at the DoW, Palantir, our political consultants, etc. assumed was the problem we were trying to solve." Sam is now, with the help of the DoW, Dario continues, trying to spin this as if we were unreasonable, we didn't engage in a good way, we were less flexible, etc. "I want people to recognize this as the gaslighting it is." Coming to a conclusion, Dario writes, "Thus Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this. He's trying to make it more possible for the admin to punish us by undercutting our public support. Finally, I suspect he is even egging them on, though I have no direct evidence for this last thing." Dario argued that the narrative was mostly failing with the general public, but had been successful with some, in his words, "Twitter morons." "My main worry," he concludes, "is how to make sure it doesn't work on OpenAI employees. Due to selection effects they're sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees."
So, boy howdy, lots of unpacking to do with this. I think it's important to keep in mind that this was written Friday night right as this was all going down, and I think that there are a couple possible interpretations. One is that this was some type of strategic play, either a strategic recruitment play, in other words, to get disaffected OpenAI staffers to come over and join Anthropic, or an attempt to lean into anti-administration sentiment, basically an act of app store politics. Anthropic, at the time of Dario's writing, had not yet hit number one in the app store charts, but already it had rocketed up to number two. The other possible interpretation, though, of course, is effectively that this was just a crash out, that it wasn't super considered, and that any of these strategic outcomes were just secondary to the fact that it was a CEO venting in a sort of private forum that would become public. This seems to be what Zvi Mowshowitz thinks, writing: Dario was obviously on mega tilt here, same as everyone else on Friday, and the inflammatory stuff, especially about the White House, is deeply, deeply stupid to say. The White House was trying to de-escalate and Dario needs to eat some crow ASAP. Now, Zvi is generally sympathetic to Anthropic and AI safety in general, and so I think it's notable that that interpretation is coming from him.
Unsurprisingly, it does seem that the administration was not happy about this. Axios business editor Dan Primack wrote, "Amodei's blog post is said to have infuriated Defense Department officials, who believe he was trying to virtue signal to Anthropic employees upset about the Venezuela revelations, and to engineers at rival companies who might share similar concerns." Now, I'm pretty sure Dan was talking about a previous memo, not this most recent one, but implying that the same logic from the previous memo applies to the Friday night writing as well. It is worth noting, as we interpret things, that Dario has never been a big fan of Trump. A news article from last September reported that in a Facebook post urging friends to vote for Kamala Harris, Amodei had likened Trump to a feudal warlord. He also cut ties with a number of law firms who had made deals with the president. While pretty much everyone agreed that this was not going to work out all that well for Anthropic vis-a-vis the White House, even if they generally supported Dario's position,
there were more mixed feelings around his accusations with regard to Sam and OpenAI. Dean Ball wrote, "I do not share the cynicism of some with respect to OpenAI's actions in the DoW-Anthropic dispute. It basically seems to me as though OpenAI was attempting to deescalate last week. Whether they executed well is a separate question, but in their defense, good execution in such chaos was nearly impossible. It seems OpenAI tried to reduce tensions and find a productive path forward, while allowing its employees considerable latitude to speak their minds. The easy thing would have been for management to stay quiet and let this happen. They did not do that, and they also stood firm in opposition to the supply chain risk designation. In general, OpenAI is unjustly maligned. This is the thing that bothers me the most about Dario's leaked memo. It spends so much time on OpenAI conspiracies and cynicism that I fear industry solidarity in the future will be harder than it needs to be. We will see state interference in frontier AI, and until we build formalized structures for such interference, it will be important for the industry to hang tough together. I fear that will be less likely now." Interestingly, Sam Altman seems to agree with Dean that the particulars of how they handled the Pentagon contract weren't handled as well as they
might have been. During his first all-hands dealing with the issue, Altman said that he didn't regret signing the deal, but wished he hadn't rushed to announce it last Friday night. Echoing previous comments, he said the announcement made OpenAI look opportunistic and not united with the field. Sources said the tone of the all-hands meeting was respectful, with employees trying to drill down on the details of the contract. Altman apparently empathized with the mood in the room, saying, "To try so hard to do the right thing and get so absolutely personally crushed for it, and I know this is happening to all of you too, so I feel terrible for subjecting you to this, is really painful." A source speaking with the New York Post said that the reaction within the company was largely positive, save for a small group. They said, from the internal messages, people are pragmatic and agree that Friday night was perhaps a little rushed and not the best communication, but now that there is more information, it feels like everybody's generally positive, save for like these 30 people who are always the ones raising questions. And while no one has publicly quit over the contract, reinforcement learning lead Max Schwarzer announced on Monday that he had decided to leave OpenAI to join Anthropic, which basically everyone assumed was a direct response to this. That said, not only did Schwarzer not throw OpenAI under the bus, he tried to give at least a plausible reason for his move that wasn't this, saying that he wanted to return to doing individual work as a researcher rather than continuing in a management position.
On Wednesday evening, the Financial Times reported that Anthropic had restarted negotiations with the Pentagon around the contract. Amodei was reportedly back in discussions with the Department of War Undersecretary for Research and Engineering and former Uber executive Emil Michael. You might remember him as the person who referred to Amodei as a liar with a god complex just about a week ago. The reporting framed the talks as a last-ditch effort to strike a deal and avoid being labeled a supply chain risk, and while they said that the memo was likely to complicate negotiations, they did not include any sourcing about the administration's current outlook on it. Axios, however, did receive comment from the administration, which threw cold water on the prospect of a reconciliation. An administration official said, "Ultimately, this is about our warfighters having the best tools to win a fight, and you can't trust Claude isn't secretly carrying out Dario's agenda in a classified setting." What's more, even before a formal supply chain risk designation, military contractors are already ripping out Anthropic's tech. CNBC reports that a number of defense contractors are telling employees to stop using Claude and switch to other models. The reporting directly references the threat to label Anthropic a supply chain risk as the cause. Opinions have been pretty unified that the designation goes way too far, including even from central figures at OpenAI. For example, on Monday, former NSA and Cyber Command director, and now OpenAI board member, Paul Nakasone said, "This is not a good space for our nation. We need Anthropic. We need OpenAI. We need all of our large language model companies to be partnering with our government."
The moves of the defense contractors show why these types of threats are so pernicious.
No one who has mission-critical and business-essential contracts with the U.S. government
is going to take those risks. Alexander Hartstrick of G2 Ventures, which has a focus in the defense space, said that already, 10 of his firm's portfolio companies have, quote, "backed off of their use of Claude for defense use cases and are in active processes to replace the service with another one." Now, while this is undoubtedly the largest AI politics issue, and one that has been thrust into the mainstream, it has political coattails that are dragging
other things in as well. As elections get closer, the conversation around data centers, for example, is getting more heated as well. This week, the president finalized the big tech pledge on data center energy use. The pledge was signed on Wednesday at a White House roundtable with several tech executives in attendance. Attendees included Microsoft president Brad Smith and OpenAI COO Brad Lightcap. Anthropic, of course, was not represented, but they also haven't begun building their own data centers. Seven companies signed the pledge, namely Google, Meta, Microsoft, Oracle, OpenAI, Amazon, and xAI. So this covers all of the hyperscalers as well as each AI startup currently building significant AI infrastructure. Substantively, the tech companies have pledged to bring their own power supply, either through constructing new power plants or paying to cover the cost of expanded infrastructure. The pledge doesn't prescribe any particular solution, but the president said that each company should negotiate directly with utilities to ensure they're paying an appropriate rate. The agreement states that the tech companies will be on the hook for additional costs, even if they pull out of data center projects. That was presented as a key term that could assuage fears of overbuilding into an AI bust, with consumers left holding the bag. In addition, the companies signed up to contribute power back to local grids in times of need. These load management agreements have been in place in Texas for several years, and have proven fairly successful at keeping the grid operational during winter storms. The pledge is structured as an agreement with the president, so it's unclear if it carries any legal weight, but the president pointed out that this pledge is in the best interest of the hyperscalers. Articulating quite simply the obvious political truth, Trump said, they need some PR help because people think that if a data center goes in, their electricity prices are going to go up. Some data centers were rejected by communities for that, and now I think it's going to be the opposite. AI czar David Sacks took to Twitter
to laud the deal and critique opposing types of data center policies: "This is a much better approach to affordability than Bernie Sanders' total ban on new data centers, which would halt the construction boom currently driving wage growth and job growth for blue collar workers. In fact, the ratepayer protection pledge will lower electricity prices when AI companies pay for grid upgrades and sell their excess power back to the grid. The right approach to data centers is not to stop progress altogether, but rather to protect residential ratepayers from price increases while making it easier to stand up new power generation."
Speaking of Bernie Sanders, it's very clear that he thinks this is a winning political issue, and one that he's very much not going to let go. He put out a video of him flying to Berkeley, speaking with some of the more prominent AI doomers like Eliezer Yudkowsky, and then released the video to his Twitter. Jeff Schellenberg of Compact magazine is unsure that this is the right strategy for AI criticism. Jeff writes, "The economic populist view of AI is, or should be, quite different from the Yudkowsky and doomer view. However, because the latter is more narratively compelling and urgent-seeming, economic populists seem to be embracing it. This is unfortunate."
Finally, showing just what absolutely weird bedfellows AI issues are going to bring together, Future of Life Institute AI safety advocate Max Tegmark announced the Pro-Human AI Declaration. The Verge reports that a secret meeting took place back in January to sign this document, and the group of people represented in the 90 attendees are, to say the very least, scattered across the political spectrum. The signatories include everyone from MAGA influencer and former presidential advisor Steve Bannon to Ralph Nader. If you want to know more broadly what I think about the anti-AI movement, and which parts of it we should be paying attention to and how we should be engaging, I have a whole episode on that from last week. For now, for the purposes of this episode, the big thing that I want to track, and where we'll conclude, is that part of the fallout of the Anthropic-Pentagon fight is that something which has remained mostly on the sidelines so far as a political issue is now being absolutely thrust into the mainstream.
Hopefully pretty soon we can get a reprieve from this; in either case, I'll probably try to dial back the coverage unless something truly huge happens, but that is where things stand from where I'm sitting. And that is going to do it for the AI Daily Brief. Thanks as always for listening or watching, and until next time, peace!



