Today on the AI Daily Brief, as Block lays off 40% of its staff, some are asking, "Is this the new AI normal?" Before that, in the headlines, Google drops a new Nano Banana image generation model.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Quick announcements before we dive in.
First of all, thank you to today's sponsors, KPMG, Insightwise, AIUC, and Blitzy.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, or really anything else having to do with the show, go to aidailybrief.ai. One specific announcement that I'm excited to share: you've probably heard me talking about our twin OpenClaw-related programs. Claw Camp, which is up to about 5,000 people participating,
which is just absolutely phenomenal, is a totally free self-directed program that's going to teach you to build your agent team. For Claw Camp, we've recently added more support for the agent team building part of the program, and you can find all of that at CampClaw.ai. And if you are in an enterprise and want to bring agent and agent team building to your company, we're now officially live with Enterprise Claw. It is a six-week executive sprint that is all about helping executives learn about agents
by actually building them, and then surrounding that building with an agent strategy and integration plan.
Claw Camp will always be free; Enterprise Claw is a paid program, and it's being led by the most excellent Nufar Gaspar, who you've heard as a frequent guest on the show, with support from me. You can find out all about that at enterpriseclaw.ai. Registration will be open for about a week, and we will kick off the sprint in early March. Feel free to email me with any questions, but for now, let's dive into the show.
Man, some weeks are all about just a crushing stream of new products and new models, and others are about the big-picture debates and discussions, and this was definitely the latter. However, providing a little bit of sweet new-capability relief is Google with their release of Nano Banana 2. Now, each iteration of Nano Banana has been a huge leap forward.
The original release last October was the first time users were able to reliably edit an image with natural language prompts. This was a huge deal, and even inspired me to think that we should probably have a different
way to benchmark things based on how many new capabilities they unlock rather than by traditional
benchmarks. It turns out that being able to use natural language to edit certain parts of an image unlocked a huge amount of use cases that were fairly difficult before. Still, maybe even bigger was the release of Nano Banana Pro in November, which combined image generation with reasoning to produce, among other things, the capability for really high-quality infographics and visual explanations. It turns out that increased ability to handle text, plus the ability to reason over an image generation, was a really potent combination. It wasn't just that you could give it a set of words and it would now accurately represent them.
It was that you could drop, for example, a transcript of my episode in, and it would spit
back out a visual infographic representation of it that almost always did a pretty good job.
Now, some of the problems with Nano Banana Pro were that it was kind of slow and a little expensive. Google did offer a generous free trial, but once that expired, free users reverted back to the original Nano Banana, which was starting to show its age. This week's release of Nano Banana 2 seeks to rectify the situation. Writes Google: now you can get the advanced world knowledge, quality, and reasoning you love from Nano Banana Pro at lightning-fast speed. Now, formally this model is Gemini 3.1 Flash Image, meaning it takes the same image generation layer as Nano Banana Pro and applies it to a more streamlined base model. Practically, it has all the cost and speed advantages of Google's Flash models applied to image generation.
The model inherits the knowledge base of Flash and shares its ability to draw on web search as necessary. It also retains Nano Banana Pro's ability to generate legible text and its hallmark infographic style. The new model has many of the professional-grade features from Nano Banana Pro, like strong instruction following and the ability to integrate up to 5 characters and 14 objects from source images. It also supports outputs up to 4K, making it viable for certain types of professional use cases. The big change is really the cost and the speed.
Nano Banana 2 is around half the cost of Nano Banana Pro and delivers outputs in seconds. 2 is now the default image generation model across all subscription tiers, although Pro and Ultra subscribers will retain the ability to tap into Nano Banana Pro for specialized tasks. VentureBeat framed the release as part of a land grab for production-scale image generation. They noted that Qwen Image 2.0, released earlier this month, is arguably state of the art at around half the price of Nano Banana 2, while also being small enough to host on local devices. Reflecting the conversation that's been happening all year, that we are no longer comparing just pure capability but also efficiency, VentureBeat writes, "Nano Banana 2 doesn't represent a generational leap in image generation quality. What it represents is the maturation of AI image generation from a creative novelty into a production-ready infrastructure component." Google, they say, is making a calculated bet: the next wave of enterprise AI image adoption will be driven not by the models that produce the most beautiful images, but by the ones that produce good-enough images fast enough and cheaply enough to deploy at scale.
I think that's true, but I think you also see Google increasingly trying to flex the integration of all their systems into a whole that's greater than the sum of the parts. For example, in his tweet about the model, CEO Sundar Pichai shared a demo that they called Windowsy. He writes that it uses Nano Banana 2's world understanding to generate more accurate views from any window in the world, even pulling live local weather info. That's the type of demo that of course goes way beyond just the ability to produce a cool image, and actually integrates the systems for something that's more powerful.
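For those who want to experiment with the model programmatically, here is a minimal sketch of what a call through Google's Gemini REST API might look like. To be clear about assumptions: the model ID "gemini-3.1-flash-image" is inferred from the episode's "Gemini 3.1 Flash Image" naming and may not match the published identifier, and the exact request fields should be checked against Google's current API docs.

```python
import json

# Assumed model ID for Nano Banana 2, based on the episode's naming;
# the real published identifier may differ.
MODEL = "gemini-3.1-flash-image"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

def build_payload(prompt: str) -> str:
    """Build a JSON request body asking the model for text plus an image."""
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        # Request both modalities so the response can include image data.
        "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]},
    }
    return json.dumps(body)

# Example: the transcript-to-infographic use case discussed above.
payload = build_payload(
    "Turn this episode transcript into a one-page infographic: ..."
)
```

Under these assumptions, POSTing that payload with an API key header would return the generated image as inline base64 data alongside any text.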
Ethan Mollick writes, "I had some early access to Nano Banana 2. It isn't perfect, but it is the first model to handle really complex images and diagrams with some consistency." Justine Moore from a16z found that it was leveled up for a bunch of use cases, including infographics, ads, action shots, and cartoons. On infographics, Justine found both improved text handling as well as more accurate information. She also found improvements for product photography, action shots, and much more. I haven't had a chance to play around with it much yet, but I'm excited to do so. Next up, a little report from Anthropic. We're going to cover the latest in their back-and-forth with the Pentagon on tomorrow's makeup show, but today we're looking at The Information's report that daily signups for Claude have tripled since November. The total number of paid subscribers has more than doubled since October, while free users are up by 60% over the past month. The Information wrote that while Anthropic declined to share specifics, they said that growing usage of Claude, Claude Code, and Claude Cowork was driving the surge.
One of the really fascinating phenomena right now is that the technical complexity of products does not seem to be as big a barrier to adoption as has previously been the norm for technology products. There is at least some evidence, and I think this is a good example of it, that when it comes to AI, particularly work AI, people are willing to go the extra mile if they really can get benefit out of it. I have to say, the fact that 5,000 people have signed up to learn how to use OpenClaw in our Claw Camp program strikes me as a case in point on that as well. One other story from earlier in the week when I was traveling: on Monday, IBM became
the latest company to sell off due to Anthropic-related headlines. The company's stock lost 13% on the day, their largest single-day drawdown since March 2020. This time, the trigger wasn't even a new feature from Anthropic, but merely a blog post about how Claude can be used to modernize legacy codebases. The post discussed the use of AI to rewrite COBOL systems, one of the most notorious problems in computer science. COBOL was the dominant programming language back in the 1970s and, believe it or not, still powers huge amounts of banking infrastructure and other critical systems.
However, the developers who actually understand the language are quite literally a dying breed. There are barely enough COBOL experts left to maintain these systems, let alone overhaul and rewrite them in a modern language. Wrote Anthropic: modernizing a COBOL system once required armies of consultants spending years mapping workflows. This resulted in long timelines and high costs that few were willing to take on. AI changes this. Tools like Claude Code can automate the exploration and analysis phases that consume most of the effort in COBOL modernization.
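To make the "exploration and analysis" idea a little more concrete, here is a toy sketch of one small piece of that phase: indexing a COBOL source file by its DIVISION headers so each section can be summarized or handed to a model separately. This is purely illustrative, not Anthropic's actual tooling; the helper name and the regex are my own.

```python
import re

# COBOL programs are organized into top-level DIVISIONs
# (IDENTIFICATION, ENVIRONMENT, DATA, PROCEDURE).
DIVISION_RE = re.compile(r"^\s*([A-Z-]+)\s+DIVISION\s*\.", re.IGNORECASE)

def index_divisions(source: str) -> dict[str, list[str]]:
    """Map each DIVISION name to the source lines that belong to it."""
    sections: dict[str, list[str]] = {}
    current = "HEADER"  # bucket for anything before the first DIVISION
    for line in source.splitlines():
        match = DIVISION_RE.match(line)
        if match:
            current = match.group(1).upper()
            sections[current] = []
        else:
            sections.setdefault(current, []).append(line)
    return sections

sample = """\
IDENTIFICATION DIVISION.
PROGRAM-ID. PAYROLL.
PROCEDURE DIVISION.
    DISPLAY 'HELLO'.
"""
sections = index_divisions(sample)
```

Real modernization tooling would of course go much further, tracing PERFORM call graphs and copybook dependencies, but the point is that this kind of mechanical exploration is exactly what agents can now grind through.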
Now, of all the crashes that Anthropic has triggered over the past month, to some this was one of the more puzzling. IBM, of course, does far more than just maintain COBOL, and this wasn't even a new feature announced by Anthropic. They first showed off a COBOL modernization demo three months ago, and AI has been able to assist in this process for several generations. In fact, last June, the Wall Street Journal profiled Morgan Stanley's COBOL modernization efforts. Morgan Stanley used a combination of internal tools and OpenAI models, and boasted that they had saved 280,000 developer hours while reviewing 9 million lines of code.
This is a very clear example, then, of the fact that market participants aren't reacting just to new developments in AI. Charitably, they are catching up on more than a year of AI advancements and seriously thinking through the implications for the first time. Less charitably, of course, they might just be reflexively selling anything mentioned in a blog post from Anthropic. Moving over to the chip battle, Meta has reined in the scope of their custom silicon program after hitting roadblocks in design. The Information reports that Meta has scrapped development plans for their most advanced AI chip.
After struggling with key elements of the chip's design, efforts will be refocused on a less complicated version of the custom silicon. In a statement to the press, a Meta spokesperson said, "We remain committed to investing in a diverse silicon portfolio to meet our needs, which includes advancing our Meta Training and Inference Accelerator portfolio, and will have more to share this year." Meta also recently signed massive chip-buying deals with both Nvidia and AMD. In addition, The Information broke news on Thursday that Meta had signed a multi-billion-dollar deal with Google to rent their TPUs as a training cluster. The two companies had previously explored an outright purchase of TPUs, but sources didn't elaborate on the status of that deal.
Honestly, what it feels like to me as we get more and more stories like this is that companies' calculus around the cost of paying the Nvidia tax has changed, and that custom silicon projects just aren't as valuable as getting GPUs on the racks at any cost. Lastly, Microsoft has joined the crowd in OpenClaw-ification. They have announced a new product called Copilot Tasks, which is designed for offloading mundane tasks.
The agent is equipped with its own virtual computer and browser, which Microsoft says will allow it to handle tasks like scheduling appointments and generating study plans.
The announcement leaned heavily on the idea that this is an agent designed for everyone, rather than just developers and enterprises. Microsoft described it as a "to-do list that does itself," adding: you describe what you need in natural language, Copilot plans and goes to work, and you adjust or refine as needed. Microsoft said the agent will check for permission before taking meaningful actions, and is initially releasing the product as a limited research preview to a small group of testers. But if you wanted any clearer sign that everyone is getting Clawified, look no further than this. For now, however, that is going to do it for today's headlines.
Next up, the main episode. Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision point. Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster? KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build, Buy, or Borrow Decision, does a great job cutting through the noise with a practical framework to help you choose based on value, risk, and readiness, and how to scale agents with the right trust, governance, and orchestration foundation. Don't lock in the wrong model.
You can download the paper right now at www.kpmg.us/navigate. Again, that's www.kpmg.us/navigate. As a consultant, responding to proposals can often feel like playing tennis against a wall. You're serving against yourself, trying to guess what the client really wants. That all changes with Insightwise. Now you've got an AI proposals engine that thinks just like your client. It returns to the brief time and time again, picking apart your work, identifying key evaluation criteria and win themes, and making recommendations to ensure you stand out. Suddenly you're on center court, but this time you've got a secret weapon. Insightwise gets rid of all the time-consuming manual work, so you can focus on winning more business more often. Generate reports, build insights from your own data, build competitive advantage, and go to sleep before 2am. When it comes to proposals, you only get one shot. With Insightwise, make yours an ace.
There's a new standard that I think is going to matter a lot for the enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard.
It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1, and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation.
Go to AIUC.com to learn about the world's first standard for AI agents, that's AIUC.com.
Blitzy is driving over 5X engineering velocity for large-scale enterprises. A publicly traded insurance provider leveraged Blitzy to build a bespoke payments processing application, an estimated 13-month project; with Blitzy, the application was completed and live in production in six weeks. A publicly traded vertical SaaS provider used Blitzy to extract services from a 500,000-line monolith without disrupting production, 21 times faster than their pre-Blitzy estimates. These aren't experiments. This is how the world's most innovative enterprises are shipping software in 2026. You can hear directly about Blitzy from other Fortune 500 CTOs on the Modern CTO or CIO Classified podcasts. To learn more about how Blitzy can impact your SDLC, book a meeting with an AI solutions consultant at Blitzy.com. That's B-L-I-T-Z-Y.com. Welcome back to the AI Daily Brief. Today, we're talking about a story that, on the one hand, is increasingly familiar: a big public company announces a set of layoffs and cites AI as at least
part of the catalyst. There are a couple things, however, that make this particular iteration of the story feel just a little bit different. The first is the magnitude of the layoffs, which represents one of the single biggest cuts in percentage terms in recent years, and the second reason this feels different is the way it's being received, both in the markets as well as in the public discourse. On Thursday, Jack Dorsey announced that 4,000 employees at Block, formerly known as Square, would be laid off. That is a 40% reduction in headcount, almost half of the staff gone in one clean cut. Dorsey shared the memo that he sent to the team: Today, we're making one
of the hardest decisions in the history of our company. We're reducing our organization by nearly half, from over 10,000 people to just under 6,000. That means over 4,000 of you are being asked to leave or enter into consultation. We're not making this decision because we're in trouble. Our business is strong, gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. But something has changed. We're already seeing that the intelligence tools we're creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company, and it's accelerating rapidly. Now, Dorsey said that he had two options: cut headcount gradually over months or years, or get it all out of the way in one fell swoop. He argued that as hard as this decision might be, he thinks it's better than the morale hit that the slow leak of continual layoffs leads to. Now, I read the part where he cites the AI transformation as the reason. He actually doesn't use the term AI, which I imagine is very intentional, and it's not exactly clear if he's talking about some specific intelligence tool or system, although Block did incubate an internal AI agent called Goose last year. The agent was initially constructed as a harness for
AI coding, but even back in March, Block was making use of the agent across non-technical teams as well. Bradley Axen, the tech lead for AI at Block, said at the time: "We're seeing sales teams analyze thousands of leads in hours instead of days, content teams automating complex asset management, and project managers cutting administrative time by 75%. The emotional feedback we're getting, like 'I could cry, it was so helpful,' really shows how these tools are transforming daily work." Still, it seems pretty clear from Dorsey's note that he's not talking about a single tool, but instead about the entire system that surrounds getting work done now.
Indeed, I think the most important line here is this idea of AI, quote, "fundamentally changing what it means to build and run a company." And yet, almost as soon as it was announced, there was at least one part of the conversation that was extremely skeptical that AI was the actual reason for these layoffs. Quantian summed up the feelings of many when they wrote, "Honestly, my reaction to Block firing half their employees was, why did Block have 10,000 employees?"
Morning Brew co-founder Austin Rief writes: everyone is talking about the Square layoffs, but just a reminder, Robinhood has 2,500 employees and a market cap of $70 billion. Coinbase has 4,500 employees and a market cap of $50 billion. Square, with its market cap of $30 billion, just cut down to 6,000 employees. I wouldn't say this is all of a sudden a symbol of AI transformation and leanness. Another commentator certainly isn't buying it either, saying, "In the three years from December 2019 to December 2022, Block more than tripled its headcount, from 3,900 to roughly 12,500. Unwinding less than half of an insane COVID-era over-hiring binge has much more to do with Jack Dorsey's managerial incompetence than whether AI is going to take your job." They continued, "'It's abundantly clear that AI is allowing us to be more efficient' is a much more appealing cover story than 'uh, I have no idea how to manage a budget or achieve operating leverage,' just like at Twitter."
The idea that this is a pattern in Dorsey's leadership was also prevalent. One investor account wrote, "No one blinked when Elon Musk cut Twitter's workforce by roughly 80%, largely because the business had been egregiously overstaffed and poorly managed under Jack Dorsey." But now, as Dorsey turns around and cuts 40% of Block's workforce after years of similar mismanagement, the narrative suddenly shifts to AI doom rather than accountability.
Whether or not this is an example of it, economics researcher and professor Alex Imas wrote, "AI laundering, or blaming AI for layoffs you were going to do anyway, is going to be a real thing." Now, the voices around this were so loud that Jack actually came back to address it. He wrote, "Yes, we overhired during COVID because I incorrectly built two separate company structures, Square and Cash App, rather than one, which we corrected mid-2024.
But this misses all the complexity we took on through lending, banking, and BNPL, and that we're now targeting $2 million gross profit per person, 4x our pre-COVID efficiency, which stayed flat at $500K from 2019 to 2024. We have and do run an efficient company, better than most." Now, whatever you think of that, part of what makes this interesting is that this is maybe the most direct example of a CEO crediting AI for layoffs and restructuring that we've seen so far. We had the Amazon layoffs over the winter, which were prefaced by CEO Andy Jassy describing the long-term effect that AI would have. But when the layoffs
actually came, AI wasn't blamed. That memo, which came out last June, saw Jassy saying, "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs." Basically, he was saying that there were going to be efficiencies from AI that would be reflected in headcount, although again, this hasn't been cited in any of the rounds of Amazon layoffs that we've seen. Even where we've seen CEOs make headlines for discussing AI in internal memos, the connection directly to layoffs hasn't been as clear. Duolingo cut ties with their contractors as a direct result of switching to AI-generated content; however, CEO Luis von Ahn later backtracked and insisted the company hadn't laid off any full-time staff. Klarna reduced their headcount by around 40% after adopting AI customer service bots; however, their CEO later said this was due to natural attrition rather than layoffs. The attrition was reversed by hiring contractors in an Uber-like arrangement to replace the workers who had left. The point is that, to date, we don't actually have a really clear example of a company massively slashing headcount due to AI efficiency gains and having that actually be the case. Another noteworthy aspect of the story, though, was the market's reaction. Block soared by more than 25% in overnight trading following the announcement.
Even though layoffs are typically associated with stock pops, this was still an extraordinarily large gain. At the same time, as some pointed out, even a 25% jump wasn't enough to put Block back on firm footing; the stock remains far below its all-time high from 2021. Even with this dramatic recovery, Block still isn't back to its opening price for this year. And yet, despite all this skepticism, it does feel like something of a turning-point moment. Speaking with investors on Thursday night's earnings call, Dorsey said that most companies will have to make similar AI-related cuts in due course.
He said, "I don't think we're early to this realization. I think most companies are late. Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes. I'd rather get here honestly and on our own terms than be forced into it reactively." What's more, Dorsey also validated what we've been talking about
on this show basically non-stop since the beginning of the year, which is that even within the context of AI, something big has shifted in the very near past. According to Dorsey, something happened in December of last year, where the models got an order of magnitude more capable and more intelligent, and it's really shown a path forward in terms of us being able to apply it to nearly every single thing that we do. So if there are any gaps in our usage of AI right now, it's an application gap. Outside of the skepticism, the main take right now is that this is likely
to be the beginning of a pattern. Writes Balaji Srinivasan: this is the first AI cut, and it will send shock waves. Journalist Izabella Kaminska writes: this is precisely how the AI doom loop begins. The prospect of short-term gains like this outweighs concerns over longer-term externalities and negative feedback loops. Putting it more crisply, Crystal Ball writes: Block just cut 40% of their workforce because of AI and was rewarded with a massive stock surge. Other companies are going to want to recreate this. Job loss could get very ugly, very quickly. Another user writes: this was probably the starting block. When Wall Street companies see that they can cut their staff by 30% to 40%, people they probably planned to fire for years anyway, and see that their stock pumps like this, and just blame AI, easy mode. More companies will definitely copy this model. For some, it's a wake-up call about the need to adapt. Investor Tommy Shaughnessy writes: the harsh but real truth is you need to be using AI every day to outperform and grow, or you will be fired. Balaji, in that same tweet, said, "For Jack to cut 40% of headcount in this way is a signal to everyone in tech: get good now. Become indispensable, work nights and weekends, learn the AI tools and raise your game, or you might not make the cut, as an employee or as a company." There will be overcorrection, he concludes, but the fundamental technical innovation is real, and you need to either disrupt yourself or get disrupted. Throwing a little bit of cold water
on that commentary, however, is Amanda, who works in developer relations at Block, who writes: "All the commentary from folks about Block laying people off because they weren't AI-native: I can assure you, every single person I met at Block was using and making an impact with AI at levels on the forefront, not just devs, and in my team AI was an ingrained part of our work, all of us. Not trying to scare anyone, but that's not it. Teams are getting leaner, period. You do need to master this tooling, but that alone will not make you stand out or protect you."
I think, broadly speaking, we are in a recalibration moment right now. Everyone, from the people in AI to investors on Wall Street to white-collar workers of all stripes, is grappling with the tools having crossed a critical threshold over just the last few months.
I think part of what makes the energy feel so intense right now is that there's this big collective burst of realization happening all at once. When you see companies cutting 40%, or the announcement of a new plug-in from Anthropic wiping $40 billion off a company's market cap, those things are happening not because people have a really clear sense of where we are; they're happening because we are unmoored and have no sense of exactly where we are.
We are in the midst of a dramatic repricing of everything as we try to grapple with what AI is going to mean, and that process is going to be chaotic. If there is a bright side in this, I think that the more dramatic nature of this move will help people slough off their complacency and actually engage with the reality that we're all facing. At the same time, I do not believe that efficiency cuts are the endgame for AI and work. I think this is a period, potentially a very painful one, that we have to get through to get to the other side, where the real opportunity lies. We will of course keep exploring these themes, but for now, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always, and until next time, peace!

