Today on the AI Daily Brief, Pro-Worker AI.
Before that in the headlines, Meta delays its next AI model.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, KPMG, Robots and Pencils, Blitzy, and AIUC. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. And if you're interested in sponsoring the show, send us a note at [email protected]. It appears that Meta has had another setback as their latest frontier model gets delayed. The New York Times reports that Meta's new model, codenamed Avocado, has been delayed until
at least May. We last heard about the model's progress in January, when CTO Andrew Bosworth told Reuters it had been delivered for internal testing. He said at the time that the model was "very good" but warned that there's still a lot of work to be done in the reinforcement learning process.
More recently, there have been reports that Meta has set up a new Applied AI division that reports to Bosworth rather than AI CEO Alexandr Wang. Rumors followed that Zuckerberg was done with Wang, although those rumors were strenuously denied.
Now, the reporting states that Avocado's performance has fallen short of the latest models from rivals, and this month's planned rollout has been delayed. The report mentioned a shortfall in reasoning, coding, and writing on internal benchmarks.
In other words, basically every major category for modern LLMs.
Reportedly, the model outperformed Gemini 2.5 but wasn't a match for Gemini 3. Now, part of the issue could be the long development cycle. Meta has been working on this model for almost nine months, and the goalposts of model performance have shifted dramatically during that time. Meta put an optimistic spin on the issue, issuing a statement which said, "Our next model will be good, but more importantly, show the rapid trajectory we're on, and then we'll steadily push the frontier over the course of the year as we continue to release new models. We're excited for people to see what we've been cooking very soon." And yet, that doesn't exactly comport with reports that Meta leadership is even considering licensing Gemini to power their products as a stopgap solution.
That said, researchers are said to be excited about the next model after Avocado, codenamed Watermelon. Now ultimately, I certainly think that making people wait for a model that's actually good is way better than releasing a model that no one is impressed with, but the model battle for Meta remains distinctly uphill.
Ethan Mollick summed up a bit of the industry sentiment when he tweeted, "Both xAI and Meta seem to be falling behind, based on the Grok 4.2 benchmarks in this reporting. Frontier AI models are really a three-way race at this point." Speaking of xAI, it seems like there are big moves afoot in that organization. First of all, they grabbed a pair of senior leaders from Cursor in a bid to catch up on coding. Sources speaking with The Information said that Andrew Milich and Jason Ginsberg have joined xAI and will report directly to Elon Musk. The pair worked as heads of product and engineering at Cursor. Now, the move comes as Elon acknowledges that xAI is behind on coding.
During a conference appearance on Wednesday, Musk admitted the problem but said he expects xAI to "catch up and exceed" competitors, his words, by the middle of the year. Meanwhile, xAI co-founders keep heading for the exits. Business Insider reports that Zihang Dai left the company earlier this week, and Guodong Zhang has told colleagues he plans to leave in the coming days.
For those keeping track at home, that's another two departures to make a total of six co-founders leaving this year. Only three of the 12 co-founders remain at the company, and one of them is Elon Musk himself. Now, there's been speculation that some of these co-founders exited after projects they led fell short of Elon's expectations.
For example, Zhang led Grok Code, while Toby Pohlen, who departed at the end of February, was in charge of the Macrohard project, which we discussed on Thursday's show. Musk, of course, is known as a difficult person to work for, and he hinted that this is a controlled demolition rather than a leadership collapse. He posted on Thursday, "xAI was not built right the first time around, so it is being rebuilt from the foundations up. Same thing happened with Tesla."
Speaking of Cursor, that company is seeking new funding at a massive $50 billion valuation.
Bloomberg reports that Cursor is in talks for a new funding round that would almost double their valuation. The last round in November brought in $2.3 billion at a $29.3 billion valuation.
Now remember, this is a company that doubled their revenue to $2 billion since they last raised funds. But what's significant about this is that if they really are raising at a $50 billion valuation, that suggests that they are trying to compete for the long haul, rather than thinking about trying to shack up with one of the leading model labs. Now, that choice isn't a shock given how CEO Michael Truell is positioning the company.
Employees were told in an all-hands meeting that for Cursor it is, in his words, wartime. That means a product overhaul, a focus on automated coding tools, as well as an ambitious project to train their own state-of-the-art models to reduce their dependency on the other labs. In Lab Land, The Information reports that Anthropic is in talks with Blackstone and other
PE firms to launch an AI consulting venture. The venture would be a dedicated consulting firm to sell Anthropic's tech to corporate customers. Alas, apparently Anthropic's ongoing conflict with the Pentagon has put the talks on the back burner.
Sources said that Blackstone leaders, including CEO Stephen Schwarzman, are concerned about announcing a partnership while Anthropic is mired in conflict with the administration.
The genesis of the deal was apparently Blackstone seeking Anthropic's help to deliver consulting services to their hundreds of portfolio companies. Blackstone also discussed a similar plan with OpenAI, according to sources familiar with the talks.
Ultimately, what all of these stories get to is the fact that enterprises are lagging, and it's going to take just a huge amount of time on task and actual human bodies to do the internal implementation that's actually needed. I predict you are going to see massive expansions in the forward-deployed engineering departments of these firms, partnerships with all the existing consulting firms, new venture spin-ups like this, all at once and more.
Next up, an interesting statistic from a new survey from the American Medical Association. The survey found that 81% of doctors now use AI in their profession. Leading use cases include keeping up with medical research, generating discharge instructions, and documenting appointments.
The AMA first gathered this data in 2023 and found that usage has more than doubled since then. Said AMA CEO John Whyte, "AI has quickly become part of everyday medical practice. Physicians see real promise in its ability to support clinical decisions and cut down on administrative burden." Notably, the AMA has adopted "augmented intelligence" as their term for AI, hammering home the point that the technology isn't supposed to replace human judgment.
And indeed, when you dig into the data, that seems to be how it's playing out. The leading use cases of AI in medicine are all about summarizing information and aiding with administrative work. Assistive diagnosis was the only use case that comes close to the actual practice of medicine, and only 17% of doctors said they were using AI in this way.
Finally today, some new comments from Sam Altman, speaking at a BlackRock conference on Wednesday. While intelligence too cheap to meter might be the end goal, for now, Sam Altman is very distinctly in the business of selling tokens. Speaking at the conference he said, "Fundamentally, our business is going to look like selling tokens. We see a future where intelligence is a utility like electricity or water, and people buy it from us on a meter." In the full quote, Altman said the goal is still to make abundant, cheap intelligence widely available. However, he explored the idea that skyrocketing demand could mean high prices or rationing. That's particularly relevant given that token-heavy agentic use cases are coming online just as energy issues are picking up steam. Now, speaking on AGI, Altman said the term has lost all meaning. Instead, he's watching
for two major milestones. First, the threshold when the majority of the world's intelligence is inside of data centers; acknowledging huge error bars in the prediction, he said this could happen by 2028. The second marker is the moment when leading scientists, CEOs, and political leaders can no longer do their jobs without AI. Altman commented, "More and more, these jobs will be supervising a bunch of AI." That threshold of when you really wouldn't want to be doing your job without heavy reliance on AI might take a little bit longer, but probably not a lot longer. I don't know, man, that pretty much describes my job already, but here we are. Altman also addressed the numerous concerns around AI adoption, commenting that data centers are
getting blamed for electricity price hikes, and almost every company that does layoffs is blaming AI whether or not it really is about AI. Altman argued that one of the biggest problems to be faced in the coming years is a rapid shift in how capitalism works. First, he noted that the entire structure of capitalism is designed to manage scarcity. If AI delivers true abundance, then society will need to rapidly adjust to a new paradigm. In the more immediate term,
he noted that AI is disrupting the balance between labor and capital that keeps society functioning.
He added, "I'm not a long-term jobs doomer. I think we will figure out new things to do, but I think the next few years are going to be a painful adjustment." And indeed, that is exactly the topic of our next segment, so with that, we will close the headlines and move on to the main episode. Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision point. Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster? KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build, Buy, or Borrow Decision, does a great job cutting through the noise with a practical framework to help you choose based on value, risk, and readiness, and how to scale agents with the right trust, governance, and orchestration foundation. Don't lock in the wrong model. You can download the paper right now at www.kpmg.us/navigate. Again, that's www.kpmg.us/navigate.
Most companies don't struggle with ideas. They struggle with turning them into real AI systems that deliver value. Robots and Pencils is a company built to close that gap. They design and deliver intelligent, cloud-native systems powered by generative and agentic AI, with focus, speed, and clear outcomes. Robots and Pencils works in small, high-impact pods: engineers, strategists, designers, and applied AI specialists working together to move from idea to production without unnecessary friction. Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results, including initial launches in as little as 45 days, depending on scope. If your organization is ready to move faster, reduce complexity, and turn AI ambition into real results, Robots and Pencils is built for that moment.
Start the conversation at robotsandpencils.com.
Robots and Pencils: impact at velocity. Blitzy is driving over 5x engineering velocity for large-scale enterprises. A publicly traded insurance provider leveraged Blitzy to build a bespoke payment processing application, an estimated 13-month project; with Blitzy, the application was completed and live in production in six weeks. A publicly traded vertical SaaS provider used Blitzy to extract services from a 500,000-line monolith without disrupting production, 21 times faster than their pre-Blitzy estimates. These aren't experiments. This is how the world's most innovative enterprises are shipping software in 2026. You can hear directly about Blitzy from other Fortune 500 CTOs on the Modern CTO or CIO Classified podcasts. To learn more about how Blitzy can impact your SDLC, book a meeting with an AI solutions consultant at Blitzy.com. That's B-L-I-T-Z-Y dot com.
There's a new standard that I think is going to matter a lot for the enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard.
It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and who is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1, and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation. Go to AIUC.com to learn about the world's first standard for AI agents. That's AIUC.com.
Welcome back to the AI Daily Brief. There is a lot of chatter right now about AI-related job displacement. Just this week we've had both rumored and confirmed AI-related layoffs. In the confirmed category, enterprise software company Atlassian has cut roughly 10% of its workforce, or 1,600 jobs, explicitly saying that it is, in fact, about AI. Said CEO Mike Cannon-Brookes, "Our approach is not AI replaces people, but it would be disingenuous to pretend AI doesn't change the mix of skills we need or the number of roles required in certain areas. It does." Meanwhile, rumors continue to swirl around massive job cuts at Oracle, although that is at this point completely unconfirmed.
Now, as we always point out, not everyone is convinced that all of these layoffs being announced are actually about AI. Before these cuts at Atlassian were announced, Buco Capital wrote on Twitter, "Unfortunately, I think we'll see meaningful layoffs in software this year, and I want to explain why it's just air cover to call them AI-driven layoffs, even though every company will do so." Yes, AI makes companies more efficient. Developers and marketers can do more. CSMs have a wider span of control. You can answer 70% of your tier 1 support cases with AI. But that's not really what's going on. Two things are more elemental to the situation and the actual driver. One, valuations have reset, with a totally valid and reasonable focus on
free cash flow minus stock compensation, and the math simply doesn't math. Two, many of these companies staffed up during COVID and never actually took their medicine and got fit. They thought demand would come back, and it mostly hasn't, not in the same way. Now, he actually went on to use Atlassian as an example. He argues that for both Atlassian and HubSpot, the free cash flow right now is actually around zero. So, Buco writes, "The actual technical talent needs to get paid, but their stocks are down 60% to 70% from recent highs." So the situation is: they need to start making actual money. They have to pay their tech talent. Their dollar grants are going to have serious dilution consequences, and their cost structures are completely bloated for their current market cap, especially compared to more nimble competitors. If they keep paying all of these people in stock, the dilution will continue and the stocks will continue to be punished. If they pay them all in cash, they will have no free cash flow. TLDR: layoffs are, unfortunately, the only true answer. They are coming, they will be credited to AI, and that will be cover for the real problem.
Now, I think this is an extremely important macro point that gets lost in this conversation, but for the purposes of what we're talking about today, it doesn't so much matter. What matters is that job disruption is in the zeitgeist. The week following that Atlassian report, Anthropic also released their latest research, called "Labor Market Impacts of AI." In it, they introduce a new measure of AI displacement risk that they call observed exposure. The measure combines theoretical LLM capability with real-world usage data, weighing work that is automated, rather than augmenting existing workers, more heavily than other categories. That produced a chart, which you might have seen floating around, that showed both the measure of theoretical AI coverage and observed AI coverage. For example, they estimate that a huge amount of the knowledge work in management, business, finance, and other areas like that could be done by AI, despite only a small fraction of it currently being done by AI,
expressed in empirical form. And yet, of course, people are understandably nervous to see categories that Anthropic argues have 90%-plus exposure to AI disruption. Now, to be fair, the rest of the report remains fairly inconclusive at this stage. They found no detectable unemployment effect yet, although they did reaffirm the idea that the canary in the coal mine might be the hiring of young workers into exposed jobs, which does seem to be slowing. Still, it's one more example of the larger conversation that's happening in earnest right now. And while one response to this is to angrily blame Silicon Valley CEOs, see the just-released jobloss.ai, which I'll talk
about much more extensively in a show this weekend, what's more encouraging to me is that we're starting to see some high-level discourse about how to make our way through this. Last week, the New York Times ran an opinion piece from former Commerce Secretary Gina Raimondo. They unfortunately titled it "America Cannot Withstand the Economic Shock That's Coming," despite that completely not at all being what Raimondo wrote about. In the piece she writes, "Artificial intelligence is transforming work faster than our workforce is adapting. Millions of Americans, from white collar to blue collar, entry-level to executive, may soon find themselves jobless and without prospects. Leaders across the political spectrum and the private sector tell me this crisis is coming and there's no obvious solution. I refuse to accept that an unemployment crisis is inevitable." The answer, however, is not to slow down AI innovation and leave ourselves less competitive and less prepared. Nor is it generic re-skilling that pushes people into completely new roles and industries. Instead, she argues, we should build a modern transition system with better data to predict job losses and new forms of support to help workers transition between jobs. What we need, she writes, is a new grand bargain between the public and private sectors, one in which employers are held
responsible for defining skills essential to the AI economy and for creating pathways into jobs, and the government invests in the training, incentives, and safety nets that help workers move quickly into them. The private sector has always been better positioned to see which new jobs are emerging, which skills matter, and how quickly demand will shift. So this new bargain should start with businesses taking the lead and providing real-time, AI-powered insights into hiring plans, technology adoption, and skill needs. Now, from there she goes into a number of other pieces of what she thinks would be a better overall framework. One area where she wants to see progress is better coordination between education and employers. She says the future of higher education should be modular, and employers must be active partners in shaping what gets taught. The country needs to shift focus from long and expensive degrees that risk obsolescence before completion towards short, affordable, job-linked credentials that offer on-ramps from education to work. People should be encouraged to pursue credentials that can stand alone or be stacked over time
into degrees, bringing people back to campus over the arc of their lives. She gives the example of a mid-career accountant who doesn't need another master's degree. Instead, Raimondo writes, she may be better off with a four-month credential and temporary wage insurance that bridges any pay gap and incentivizes her to accept a new role sooner. She also calls for new ways for higher education to be funded, for a modernized apprenticeship system of employer-led training, and for incentives for the private sector to do this. That may mean, she writes, employer tax credits tied to on-the-job training. States could pilot tax code reforms that reward worker retention and entry-level hiring, penalize layoffs, and encourage companies to reinvest AI-driven savings into the creation of jobs. This isn't corporate charity, it's strategic necessity. Now, this is one of the areas that I find most interesting based on my heuristic of opportunity AI versus efficiency AI. If you've heard me speak of this before, efficiency AI is the idea of using AI to do the same with less,
which is of course going to be at the root of most of these job cuts. Opportunity AI is seeing the potential for AI to allow you to produce more of whatever it is you produce, or to bridge into new areas. Capitalism is of course inherently expansive, and so it is inevitable that in the long run, organizations that view AI as opportunity-expanding will be the ones to win. Why I'm interested in incentives to reinvest AI-driven savings into the creation of jobs is that it creates an incentive for employers to not stop at the edge of efficiency AI and instead to jump into that framework of opportunity AI. Now, that's something that I'd like to explore in much more depth at some point, but let's wrap up with Raimondo's piece. Skeptics will argue, she said, that we've tried workforce reform and it hasn't worked, that the landscape for workforce development is littered with underperforming, small-scale training initiatives. They aren't wrong, but history shows that real change comes in times of crisis. After World War II, the GI Bill and land-grant universities sent millions of veterans to school, while public research funding seeded advancements in manufacturing, aerospace, semiconductors, and computing. A new grand bargain between the public and private sectors can help us meet this moment. I know we have the ingenuity to do it; what's missing now is the collective will. And before you write this off as naively optimistic, Ms. Raimondo is not the only
one speaking this way. The Washington Post editorial board recently wrote an opinion piece called "An Unlikely AI Optimist." They reference a European Central Bank study that found that AI creates more jobs than it eliminates.
The board notes that Europe often "makes a sport of hamstringing technological innovation," so a report released Wednesday by the European Central Bank is especially striking. Based on a study of 5,000 firms in the Eurozone, two labor economists conclude that businesses embracing artificial intelligence are more likely to hire new staff than those that aren't. Specifically, they say companies that make significant use of AI are about 4% more likely to take on additional staff. In other words, the authors conclude, AI-intensive firms tend on average to hire rather than fire. The Post editorial board writes, "This further undercuts the narrative that AI will take everyone's job. The nature of work
will evolve, but mostly for the better, as technological progress allows for less scut work." And yet, they write, most Americans still express uncharacteristic pessimism about AI. Last month, 63% told YouGov they think AI will lead to a decrease in the number of jobs available, while just 7% predicted AI will increase jobs. This is notably more skeptical than respondents in China, where around 40% worry about AI replacing jobs. Because the United States has the world's biggest economy, perhaps people feel like they have the most to lose when the
world changes. But America's success in the past has always come from embracing and shaping the future rather than recoiling from it. The ECB report is a refreshing reminder that there are life-changing opportunities, not just risks, from the AI revolution. And another paper around these themes that I want to share comes from three actual MIT professors and researchers: Daron Acemoglu, David Autor, and Simon Johnson. The paper, released a couple weeks ago, is called "Building Pro-Worker Artificial Intelligence." In short, they argue that there are different categories of technological change, with various types of impact on human employment. In the abstract they write, "While AI's capacity to automate work is substantial, we argue that its potential to serve as a collaborator, by extending human judgment, enabling new tasks, and accelerating skill acquisition, is equally transformative and currently underexploited." The paper breaks technological change into a taxonomy of five categories. They evaluate each of those categories across three dimensions:
Labor productivity, the value of human expertise, and labor share of national income. The five categories are: one, labor-augmenting technologies; two, capital-augmenting technologies; three, automation technologies; four, new task-creating technologies; and five, expertise-leveling technologies. All of these categories increase labor productivity. However, when it comes to the value of human expertise, there can be wide differences among them. Take, for example, the difference between automation technologies, where existing expertise is made obsolete, versus new task-creating technologies, where new expertise is needed. Now, in their framework, the only unambiguously
pro-worker category is new task-creating technologies. The one that I think would see the most debate among smart people is the ambiguous pro-worker designation of expertise-leveling technologies, which they call ambiguous because while new entrants benefit, incumbents' expertise is potentially devalued, and whether democratization of expertise is a good thing or not could get into some thorny debates. But their point here is to make it clear that not all technological change is the same, and that when it comes to AI, although we assume that it's all automation technology, that's just not actually the case. They give a few examples of pro-worker AI in the field. One example is an electricians' assistant, which uses LLMs to support electricians in troubleshooting electronic machinery. Workers can upload photos and diagnostic data, and the AI matches them to a database of prior problems. In practice, this halved the average time for completing maintenance reports, and they categorize it as pro-worker because the worker remains in the loop modifying AI recommendations, remaining collaborative, not subservient. Other examples they give are a service workers' assistant, a teacher's AI aide, hearing aids for Chinese gig delivery workers, and patent examiner decision support. And yet they say these cases are too rare right now. Their main argument is that the market is at the moment not capitalizing on pro-worker AI opportunities. They argue that current AI focuses overwhelmingly on task automation and AGI development, neither of which coheres
with their pro-worker definition. There are a couple of reasons they argue this is happening, misaligned firm incentives, like managers using automation, as a way to reduce dependence on unionized labor, rent dissipation, i.e. managers wanting to redistribute savings to share holders,
and with the authors call the AGI bet. Basically firms that believe AGI is imminent that see
little point in investing in pro-worker technologies. To wit, why build tools to enhance workers, if workers will be fully replaceable shortly. They also see misaligned developer incentives. Some of those build off of the misaligned firm incentives, i.e. customer demand shapes supply, if firms prefer buying AI automation, tech companies will prioritize building automation tools. It's self reinforcing. There's also a time horizon problem. Pro-worker technologies might
require years of investment while automation solutions are already market-ready. There's also the potential for worker resistance. Workers themselves may resist pro-worker AI tools that require them to acquire new expertise and adjust work habits.
If workers lack foundational skills or are reluctant to invest, then firms will struggle to get pro-worker tools adopted.
From there, the authors give nine different policy directions that they believe could
move the needle further in the direction of pro-worker AI. This, I think, is the area where most people would debate, but I'm appreciative of the authors actually laying out some potential paths forward, rather than just identifying the problem. One category of remediation they recommend, for example, is for the government to leverage its huge GDP footprint in areas like healthcare and education, using market incentives to drive developers to build pro-worker AI. They have a bunch of other ideas as well around the tax code, antitrust, etc. But the point is that, whether these are the right directions or not, there are opportunities to try to drive towards more pro-worker AI. Lastly, and maybe most importantly, the paper pushes back on an idea, which seems almost accepted by default in AI discourse, that automation is the dominant force in economic history. However, if automation were the whole story, the authors argue,
labor's share should have been declining relentlessly since the Industrial Revolution. But it hasn't; in fact, it rose during the first eight decades of the 20th century. What's more, they point out, rich, heavily automated countries have higher labor shares than poor, less automated countries. This is the opposite of what the "automation erodes labor" thesis predicts. The explanation is that new task creation counterbalances automation, which is a fancy way of saying that creative destruction does eventually kick in, and the jobs that go away are replaced by new, other types of jobs. Which is not to say that we should just let the process happen on its own. In fact, the whole reason they're writing the paper is to get more people engaged and explicitly trying to push towards a pro-worker AI paradigm.
I think there's a lot more discourse to be had about this,
but one thing that I'd like to point out is that in spite of so much of the focus being on the jobs that are going away, we are starting to see some of the new things that will be created. Think about it this way. Take any job that exists right now, any knowledge worker job, and make it have a baby with a software engineer. Then give that child, who doesn't know what they don't know yet, the awareness of what one parent does with the coding skills of the other. What comes out is kind of the new role. Effectively, we have agent builders and agent orchestrators in every flavor of the knowledge worker rainbow, and this will increasingly be an incredibly important role. In fact, flavors of AI engineer may become the dominant role. This is something that Latent Space has been talking about a lot recently. My point, and this is something that I'm going to be harping on with increasing fervor, is that there are a lot of ways to look at our AI
future. I think, unfortunately, that the incentives of traditional media are to be relentlessly pessimistic, cynical, and fear-mongering. Again, for this incredibly proactive, policy-idea-rich op-ed, the New York Times editors decided to go with the title "America Cannot Withstand the Economic Shock That's Coming." However, with just a little bit of work, you can find more positive, optimistic thinking, and evidence, in many, even sometimes unexpected, places. So anyways, friends,
that is going to do it for today's episode. I appreciate you listening or watching. As always,
and until next time, peace!


