Today on the AI Daily Brief, the coming AI rules battle. Before that in the headlines, OpenAI plans to double its workforce with a big emphasis on the enterprise.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right friends, some quick announcements first.
First of all, thank you to today's sponsors: KPMG, Blitzy, AIUC, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on podcast platforms. If you're interested in sponsoring the show, send us a note at [email protected]. Now one other announcement: we are officially live with the first round of voting at agentmadness.ai. The projects that you guys shared were so cool.
At some point this week, we will do a run-through of some of the projects. I'll show you a little bit more about the bracket and how it was put together, which, spoiler alert, I let Claude and ChatGPT debate, so that it was not my call. For now, go check it out at agentmadness.ai; like I said, voting is now live. We kick off today with OpenAI bucking the AI layoff trend with a massive hiring plan.
The Financial Times reports that OpenAI plans to double their headcount this year. That would bring headcount to around 8,000, requiring the equivalent of a dozen hires per day. The new employees will be added to product development, engineering, research, and sales teams, and OpenAI also plans to recruit specialists to focus on what they call technical ambassadorship, assisting enterprises to make better use of their tools. This is a fairly significant shift from where Sam Altman had positioned the company coming into the year. In a livestreamed town hall event in January, he said, "We are planning to dramatically slow down how quickly we grow, because we think we'll be able to do so much more with fewer people." Since then, Anthropic's surging growth has challenged OpenAI's leadership, and CEO of Applications Fidji Simo has delivered a quote-unquote wake-up call to the company on enterprise sales. A little over a week ago, she told staff that, quote, "We are very much acting as if it's a code red."
The net result is an urgent need to scale up in order to capture the enterprise market. An unnamed executive from OpenAI told the FT that the success of AI coding tools had "opened up entirely new lanes of things we can do." They added, "It does change how you think about everything from your products to how you serve the market. All of a sudden, the company kind of rotated on its axis." Jason Hall writes, "I run 8 AI agents every day, and I still think adoption is the hardest problem in this space. OpenAI apparently agrees. They're doubling their workforce, and one of the roles they're specifically hiring for is helping businesses actually implement their tools."
An $840 billion company that still needs dedicated people to get customers to use the product says a lot about where we really are. Adam.GPT from OpenAI responded, "It feels like we are top of the third inning. The models aren't the problem; they're smart enough now. Now it's about applying them at scale. AI enabling a process or workflow like we've been doing is one thing, but reimagining and repaving that process or workflow as AI native is where transformational change will begin to occur at scale. It goes slow until it goes really fast. I think that'll be the story of 2026." To which Mark Cuban responded, "If by repaving you mean reinventing, yes. One of the challenges is that most corporate knowledge is still in someone's head. Knowledge is far different than information. LLMs and agents can capture all the information they can touch internally and externally to the company, but there are things that you, me, and everyone, security guards, salespeople, whoever, do to make the things we do fit the way that we want them to. None of that is documented anywhere."
Now that got into a whole longer discussion about the nature of adoption, which I think is probably worth its own show, but I think it was well summed up by the commenter who wrote, "Welcome to the era of AI capabilities overhang, in which OpenAI feels obligated to hire specialists focused on technical ambassadorship to teach enterprises how to extract value from AI agents." One company that is trying to move quickly into this future is apparently FedEx. The logistics giant is delivering AI training to every member of their 400,000-strong workforce. The initiative began in December and is intended to make employees more knowledgeable, efficient, and promotion-ready. Accenture has partnered to deliver the curriculum, which is designed to be updated to keep pace with changes in the technology.
The program is tailored to individual employees and includes role-based training on the AI systems FedEx is putting into place. In addition, employees are encouraged to take part in what FedEx is calling communities of practice, which include use-case sharing as well as hackathons. Said EVP and Chief Data and Information Officer Vishal Talwar, "The more we invest in our talent being on the leading edge of that learning journey, the better off they will be, the better off we will be, and the better off the broader industry is going to be." Now you might be thinking to yourself, "This is just some random PR push from FedEx to get credit for doing this program, so why are you featuring it on this show?" But there is actually a specific answer to that.
While I don't know the details, it sounds to me like what FedEx is designing here is something actually fully bespoke and continuous. Whether Accenture is the right partner for that, who knows, but I do think that we are in a moment where the changes in AI have completely outstripped any sort of traditional upskilling or certification methodology. I think the more that companies think in these broad, expansive, and bespoke types of training approaches, even though they are obviously going to cost much more than previous types of workforce development, the better off they're absolutely going to be.
On the other end of this spectrum, and showing just the diversity in how different companies are responding to AI, HSBC is apparently weighing deep job cuts. Bloomberg reports that as many as 20,000 employees could be laid off as the bank bets on AI to cut headcount in middle and back office functions. This would be a 10% headcount reduction for the global bank, which has a huge footprint across Asia, Europe, and the Americas.
Sources said that if the plans go ahead, the layoffs would take place over three to five years as part of a medium-term transition plan. More broadly, HSBC is expected by some to be a harbinger of deep cuts across the financial sector as AI automates more of the work. Last year, a report from Bloomberg Intelligence predicted that 200,000 positions would be eliminated by global banks over the next three to five years, and a survey of banking CTOs conducted by Business Insider found that they expect 3% workforce reductions on average.
Now this idea of headcount reductions in middle and back office functions is, I think, relevant for our next story, which is that according to the Wall Street Journal, Mark Zuckerberg is building an AI agent to help him do his job. The WSJ reports that the agent is currently focused on making information sharing more efficient throughout the company. The idea is that it can surface insights that would otherwise require going through layers of management to gather. Now this personal agent, of course, reflects a much deeper initiative at the company. Meta is currently going through two transformations that each enable the other. Management layers are being stripped back, and smaller, flatter teams are being installed with the emphasis on individual contributors. At the same time, Meta is rolling out agents to turbocharge the effort. Meta currently has two personal agents deployed across the organization.
The first is called MyClaw, which, based on the name, is likely a modified version of OpenClaw. That agent has access to chat logs and work files and can talk to colleagues on an employee's behalf. And interestingly, Meta is already seeing MyClaws talk to each other to resolve issues rather than needing to interrupt their human owners.
There's even an agent-specific message board within the company to facilitate this agent-to-agent communication. The second agent is called Second Brain and functions as an agentic knowledge base. The agent was built on top of Claude and can index and query documents for projects. Internal communications announcing the agent pitched it as an AI chief of staff assigned to every employee up and down the organization. Sources have said the tools are gaining momentum at Meta, boosted by the use of AI tools now being graded as part of performance reviews. Now in the background, there are the rumors of 20% layoffs, and some have said that the rapid change and intense focus on AI use have fueled layoff anxiety in the ranks. Still others, though, have said that the flatter org structure and agent proliferation are breathing new life into the culture there. Meta is apparently hosting AI tutorial meetings multiple times per week, as well as holding frequent hackathons and encouraging employees to build their own tools.
Some describe the atmosphere as fun and empowering, reminiscent of the early move-fast-and-break-things era at Meta.
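For those curious what that agent-to-agent pattern looks like mechanically, here is a minimal sketch of a shared message board where agents post issues and resolve them for each other without interrupting their human owners. To be clear, Meta's internal implementation of MyClaw and its message board is not public; every name and structure below is a hypothetical stand-in for illustration, not a description of their actual system.

```python
# A purely illustrative sketch of the agent-to-agent message board pattern
# described above. All class and agent names here are hypothetical.
from dataclasses import dataclass, field
from typing import Optional
import itertools

_ids = itertools.count(1)

@dataclass
class Post:
    """An issue raised by one employee's agent for other agents to resolve."""
    author_agent: str
    question: str
    post_id: int = field(default_factory=lambda: next(_ids))
    answer: Optional[str] = None
    resolved_by: Optional[str] = None

class AgentMessageBoard:
    """Shared board where agents resolve issues without pinging humans."""

    def __init__(self) -> None:
        self.posts: list[Post] = []

    def ask(self, author_agent: str, question: str) -> Post:
        # An agent parks a question it can't answer from its own context.
        post = Post(author_agent=author_agent, question=question)
        self.posts.append(post)
        return post

    def open_posts(self) -> list[Post]:
        # Other agents poll for unresolved questions they might answer.
        return [p for p in self.posts if p.answer is None]

    def answer(self, post_id: int, responder_agent: str, answer: str) -> None:
        for post in self.posts:
            if post.post_id == post_id:
                post.answer = answer
                post.resolved_by = responder_agent
                return
        raise KeyError(f"no post with id {post_id}")

# Usage: one agent asks on its owner's behalf; another answers from the
# files it has access to, and no human is interrupted along the way.
board = AgentMessageBoard()
q = board.ask("alice-agent", "Which team owns the checkout service?")
for post in board.open_posts():
    board.answer(post.post_id, "bob-agent", "The payments team owns it.")
print(q.answer, "(resolved by", q.resolved_by + ")")
```

The design point is simply that the board, not a human inbox, is the synchronization point: an agent that hits a question it can't answer parks it publicly, and any agent with the relevant access can close it out.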
And boy, honestly, this is one of those stories that could be an entire main episode all on its own. First of all, we've got AI use showing up in performance reviews, which I think is going to become completely standard over the course of the next couple of years. Second, we've got this agent-to-agent communication, which actually sounds like it's bearing fruit. We have, as I discussed on yesterday's show about jobs, a renegotiated relationship between managers and individual contributors.
I think this actually is going to be, in practice, one of the more disruptive aspects of AI, and so this kind of becomes a live-action case of exactly that. Next week, I'm releasing a large presentation called the State of AI Q2, and this theme of leading organizations starting to separate from laggard organizations is a big part of it.
For now, though, that is going to do it for today's headlines; next up, the main episode. Alright folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their own client zero.
They embedded AI and agents across the enterprise, how work gets done, how teams collaborate, how decisions move, not as a tech initiative but as a total operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stayed firmly at the center, while AI reduced friction, surfaced insight, and accelerated momentum. The outcome was a more capable, more empowered workforce. If you want to understand what that actually looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI.
At the emergence of AI code generation in 2022, NVIDIA master inventor and Harvard engineer Sid Pardeshi took a contrarian stance.
Inference-time compute and agent orchestration, not pre-training, would be the key to unlocking high-quality AI-driven software development in the enterprise.
He believed the real breakthrough wasn't in how fast AI could generate code, but in how deeply it could reason to build enterprise-grade applications. While the rest of the world focused on copilots, he architected something fundamentally different: Blitzy, the first autonomous software development platform leveraging thousands of agents, purpose-built for enterprise-scale codebases.
Fortune 500 leaders are unlocking 5x engineering velocity and delivering months of engineering work in a matter of days with Blitzy. Transform the way you develop software; discover how at blitzy.com. That's B-L-I-T-Z-Y dot com. Quick update on something I've been following.
AIUC-1 is the first real standard for AI agents, developed with Fortune 500 security leaders to basically define what safe, enterprise-ready AI agents should look like.
A little while back I mentioned that ElevenLabs became certified against AIUC-1.
This week, two more big players joined: Fin from Intercom and UiPath.
What that certification means in practice is real-time guardrails that block unsafe responses, protection against manipulation, and a full safety stack designed for enterprise environments. And here's why this matters: you've now got leaders across three major AI agent categories, enterprise automation, customer support, and voice, all certifying against the same standard.
That starts to look less like a one-off and more like the beginning of a real industry trend. It is a truth universally acknowledged that if your enterprise AI strategy is trying to buy the right AI tools, you don't have an enterprise AI strategy. Turns out that AI adoption is complex.
It involves not only use cases, but systems integration, data foundations, outcome tracking, people and skills, and governance.
My company, Superintelligent, provides voice-agent-driven assessments that map your organizational maturity against industry benchmarks across all of these dimensions.
If you want to find out more about how that works, go to bsuper.ai.
And when you fill out the get started form, mention maturity maps. Again, that's bsuper.ai. Welcome back to the AI Daily Brief. One of the things that was absolutely inevitable about 2026 is that the conversation around AI regulation, AI policy, AI's rules of the road, in the US at least, was going to get
much louder. Part of that has to do with the fact that AI is just playing an increasingly large role in people's lives, meaning that there's more attention focused on it, and that cuts across everything from the experience people are having at their jobs, to local politics, especially as this infrastructure buildout happens.
But this was also inevitable simply due to the schedule of American politics, with the midterm elections coming up this year. The question, of course, has been: given all of the other policy debates that we have in this country, where was AI going to rank? What's clear is that even if there are many other issues that still rank far ahead of AI, it's growing in significance very, very quickly.
Blue Rose Research's head of data science David Shor recently jumped on the Unlocked podcast to talk about the politics of AI, and one of the things that he noted, in a companion Twitter thread, was that AI as an issue is rising in importance faster than any other issue they track. Right now, it's ranked 29th out of 39 issues.
At the very top of the list, as you might imagine, are things like the cost of living, the economy, political corruption, inflation, health care, taxes and government spending, democratic institutions, political division, foreign policy, budget deficits, poverty, immigration, crime, Medicare, social security, et cetera. The things that impact everyone in their day-to-day lives.
And yet, in terms of its rise in importance, it is absolutely right at the top, ahead of war in the Middle East, voting rights, political corruption, privacy, unemployment, mental health, Medicare, political division, and more. Already, in their research at least, AI is ranked above issues including the environment, climate change, abortion, and guns.
And of course, this issue is not rising in a vacuum. The context in which AI has to operate is going to shape what people think of it. Shor writes, "AI is hitting at a time when 61% of Americans say life has gotten less affordable in the last year, only 25% feel confident in their financial future, and only 34% said they have a secure job."
Stating things fairly dramatically, Shor writes, "Not a great starting place for major disruption to the labor market." This shows up in people being more concerned than excited about AI. Over 50% of people are concerned that either they or someone in their family will lose their job in the next year, with 56% concerned about that specifically because of AI. The numbers just go up from there: 72% are concerned that AI will change the job market in a way that drives down wages for people like them, 77% are concerned about entire industries being eliminated by AI, and 79% are concerned about young people entering the workforce and finding fewer job opportunities because of AI. When it comes to political messaging, Blue Rose Research finds that people are very suspicious of anyone who says everything is okay.
Basically, there is very high conviction, whatever it is rooted in, that AI is more likely
to cause job losses than economic productivity that benefits people.
And when asked what the government's most important priority in managing the growth of AI should be, "funding the creation of new jobs and basic benefits like health care, even if that means limiting the amount that American tech companies can profit from AI" beats the ever-loving snot out of "keep innovating so that America outcompetes the rest of the world in developing AI." Now, the way that this question was phrased is extremely likely to get this sort of response, but it's important to note that this is not just a left-right thing. Even among Trump voters, funding the creation of new jobs and basic benefits beat keep innovating two to one. Voters also aren't particularly keen on policies like UBI as the answer. When asked whether the government should prioritize creating good-paying jobs or providing direct income support, again every demographic, by a factor of about three to one, chose creating good-paying jobs over providing income support.
That's kind of the context that we're coming into.
AI is growing as an issue of concern, people are bringing their broader economic anxiety to that conversation, and they seem to care a lot about there continuing to be good-paying jobs. Now, if you needed evidence that the conversation around AI policy was heating up, we've been seeing things on both the state and the national level. On the national level, Republican Marsha Blackburn rolled out a 291-page behemoth in advance of the recent White House proposals, seemingly in a way to position herself at the head of that conversation. While Blackburn claimed that it was in line with the White House's goals, the response of many was summed up by R Street's Adam Thierer, who wrote, "Senator Blackburn's massive new AI regulation bill, 291 pages of nanny-state mandates, would make European technocrats blush if it ever passed.
The layers of red tape contained in this proposal would create a compliance cost hell for small innovators, and the liability provisions would spawn an endless litigation hell that would be a trial lawyer's dream once they started filing frivolous lawsuits based on the completely open-ended theories of harm throughout the bill." But of course, it's not just federal policy that the White House is paying attention to; a lot of their focus has been on preempting state-level regulation.
There has been increasing consideration and engagement from the AI companies around state-level legislation, particularly in New York and California.
Recently, representatives from companies like OpenAI basically said that if federal policy can't get its act together, they should start engaging more deeply with these state-level bills. Meanwhile, political races in these state-level environments have become a flashpoint for the politics of AI. The Wall Street Journal recently covered the congressional campaign of Alex Bores, who, based on his sponsorship of bills focused on AI regulation, has become a target for super PACs who are against strict AI rules. So that's a bit about the environment that the White House's new national AI legislative framework comes into. Announcing the policy on Friday, the White House actually acknowledged the mixed feelings, to put it mildly, that people have about this technology. In the announcement article they write that the administration recognizes that some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children's well-being or their monthly electricity bill. For the White House, this is a clear signal of the need for, as they put it, strong federal leadership to, quote, "ensure the public's trust in how AI is developed and used in their daily lives." The six points of the legislative framework outlined by the White House include: one, protecting children and empowering parents; two, safeguarding and strengthening American communities, which is the section that deals with data center politics and the cost of electricity; three, respecting intellectual property rights and supporting creators; four, preventing censorship and protecting free speech; five, enabling innovation and ensuring American AI dominance; and six, educating Americans and developing an AI-ready workforce. Now let's look into the larger policy document and see what they actually include here.
The first thing you'll note is that this is not some comprehensive document.
This is basically the polar opposite of Marsha Blackburn's 291-page tome, although interestingly their goals are not dissimilar, in that both are trying to plant a stake in the conversation. In terms of what's notable within the different categories: there's nothing particularly surprising about the protecting children and empowering parents section. This is, as we'll discuss in a minute, one of the major concerns, particularly of some of the groups on the right who are most antagonistic with the White House about AI policy. Section number two, safeguarding and strengthening American communities: these are themes that the White House has been building on recently, specifically with their ratepayer protection pledge. The protection pledge was the Trump administration's approach to getting AI companies to commit to footing the full bill for the AI infrastructure buildout, ensuring, for example, that people in communities with new AI data centers do not pay increased electricity costs.
This, I think at this point, is one of the least controversial parts of all of this, as witnessed by the fact that pretty much every AI company stepped up to agree. The rest of it is kind of a grab bag of other policies. Also on the subject of AI infrastructure, this directs Congress to streamline federal permitting for AI infrastructure construction and operation, specifically giving developers the ability to develop or procure on-site and behind-the-meter power generation.
On the community side, it calls for more law enforcement efforts to combat AI-related scams against vulnerable populations, and also for providing resources for small businesses, including things like grants, tax incentives, and technical assistance programs. One of the most delicate balancing acts comes in section three, respecting intellectual property rights and supporting creators.
The White House reaffirms its position that training AI models on copyrighted material does not violate copyright laws, but also says that it acknowledges arguments to the contrary and supports allowing the courts to resolve that issue. They write that Congress should consider enabling licensing frameworks or collective rights systems for rights holders to collectively negotiate compensation from AI providers without incurring antitrust liability. Any such legislation, however, they write, should not address when and whether such licensing is required.
It sounds to me like they're basically trying to create a framework outside of the courts for these sorts of negotiations to happen, one that is also outside of the antitrust enforcement framework. And then one area that I again think is relatively uncontroversial is their notion that Congress should consider establishing a federal framework protecting individuals from the unauthorized distribution or commercial use of AI-generated replicas of their voice, likeness, or other identifiable attributes, while of course respecting that there are clear exceptions for, quote, "parody, satire, news reporting, and other expressive works" protected by the First Amendment.
Now, the First Amendment and free speech is also the subject of part four.
This is the shortest section, with just two bullet points: Congress should prevent the U.S. government from coercing technology providers to ban, compel, or alter content based on partisan or ideological agendas, and Congress should provide an effective means for Americans to seek redress from the federal government for agency efforts to censor expression on AI platforms or dictate the information provided by an AI platform.
Now, this is an issue that is going to come up. A law working its way through New York would basically limit the ability of chatbots to provide legal or medical advice. While the bill, it seems, is intended to give consumers redress for getting bad advice, i.e. the ability to sue, in practice it would basically mean that LLMs could only provide legal advice to existing lawyers and medical advice to existing doctors, with consumers blocked from getting answers to those questions. Given how many of the positive stories of AI are about people understanding their medical bills and health diagnoses for the first time, obviously an overly heavy-handed version of this bill could be extremely damaging. Now, going back to the White House framework, section five is enabling innovation and ensuring American AI dominance. One of the more interesting proposals here: Congress should not create any new federal rulemaking body to regulate AI and should instead support development and deployment of
sector-specific AI applications through existing regulatory bodies with subject matter expertise and through industry-led standards. Effectively, if this technology is as ubiquitous as it seems and is going to touch everything, rather than trying to create some new mega agency that can reasonably interact with every other regulatory body, just let the existing bodies develop new AI-specific policy capabilities.
Section six I mentioned in a show over the weekend: educating Americans and developing an AI-ready workforce. Basically, what I said about this, for those who missed that episode, is that although I am very glad to see it included as a key part of the legislative framework, it is very clear that they have absolutely no idea what it is actually going to mean in practice.
It's basically a few hand-wavy bullet points, like using non-regulatory methods to ensure existing education programs, including apprenticeships, incorporate AI training. That falls far short of what I would like to see, which is a mass-scale nationwide upskilling effort. Finally, in this four-pager, the White House added a seventh point that was not in the main article, at least not as one of the named points, and which is the longest and most fully articulated of any of the seven: preempting state laws. So what have been the responses? Senator Ted Cruz seems to be stepping up to align himself with the White House, and contra Marsha Blackburn, as the Republicans wrap their arms around this issue. Senator Blackburn said she "welcomed the White House to this important discussion"
and "look forward to working with my colleagues to codify the president's agenda, while still saying that her Trump AI Act is the solution American needs." Former Chief Technologist at the FTC Neil Chilson said, "So it's Blackburn's Trump AI Act against Trump's actual AI framework." Many other responses were basically looking to see whether they issued they cared about
most was included, former Trump official John Schwepp writes "Love the emphasis on age verification and protecting kids online, and former Trump adviser Jean Ball wrote, "I was especially heartened by this section, the one on free speech in First Amendment protections, and quote, "heartedly concur with the White House that Congress should act to prevent government coercion over the free speech rights of AI developers and users alike."
Others noted when their issues weren't there. Cybersecurity Dive reporter Eric Geller writes, "One week after Trump's national cyber director said the administration wants to make cybersecurity a core consideration for AI developers, Trump issues an AI policy framework that doesn't even mention cyber." Unsurprisingly, the framework doesn't have a lot of support from Democrats. Representative Josh Gottheimer wrote, "While the framework takes steps in the right direction, unfortunately the White House fails to address key issues, including strong accountability for AI companies.
Preemption only makes sense if federal law effectively replaces what states have built with a standard that is truly comprehensive and protects Americans. Simply put, this framework still has a long way to go; voluntary standards won't do the trick. In addition to common-sense guardrails, we need serious solutions that address workforce challenges, better incentives for STEM education, enhanced protections against deepfakes, safe and secure AI models and agents, and guarantees that all Americans reap the massive benefits AI offers." Gottheimer concludes, "We are in a Cold War era-style race with China and we must win, both for our economy and our national security.
If done the right way, the potential for areas like health care, education, and government efficiency is boundless, but we have to win it the right way." He concludes by saying that he's working with his colleagues to develop a framework, which presumably is, in his estimation, that right way. CNBC's Emily Wilkins writes, "If the White House wants AI bills to pass, they'll need pro-business, pro-AI Dems like Gottheimer on board; based on the statement, they have a ways to go." To be honest, though, the Gottheimer language leaves a lot more space for collaboration than it might seem at first. Even framing it as "the framework still has a long way to go" is very different from a rejection out of hand. Indeed, in some ways, the White House's biggest political challenges are coming from their right flank. Steve Bannon's War Room account on Twitter quoted Joe Allen, who said, "Now you look at who this White House national policy framework is enabling: people like Google, people like xAI and Anthropic and OpenAI.
What is the predominant goal of every single one of these companies?
It is a transhuman future, and for some of them a post-human future.
They want to build machines that replace your work. They want to build machines that have access to and are influencing or even controlling your children. They want to build machines that are the equivalent here on Earth to God. It is profoundly anti-human."
And even if one views that as the far end of the spectrum of this part of the conversation, it is still far from clear exactly where the right is going to land when it comes to alignment around AI issues. Still, some remain cautiously optimistic.
Dean Ball, who, while a former Trump adviser on AI, has lately been extremely critical of their engagement with Anthropic, writes, "The White House's proposal for a nationwide AI law is a thoughtful document that will serve as an excellent foundation for the legislative work ahead. I would be happy to see these principles, translated well into statute, become law." When someone asked, "I don't understand much about policy, but is a four-page document with lists of quite broad and unspecific recommendations really worth celebrating?"
“Dean responded, "Yes," and clarified, "The major and crucial distinction between this”
document and an executive order or another report like the AI Action Plan is that this document is self-consciously the opening move in a long, multidimensional public negotiation over the legislation. You must read it that way." And so in that spirit, we will close here with the idea that this is the beginning
of a much bigger conversation, one that I would love for you all to be involved in. For now, that is going to do it for today's AI Daily Brief. I appreciate you listening or watching, as always, and until next time, peace!


