Today on the AI Daily Brief, the big questions shaping the battle for consumer AI, and before that in the headlines, is OpenAI the new GitHub? The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.
Alright, friends, quick announcements before we dive in.
First of all, thank you to today's sponsors: KPMG, AIUC, Blitzy, and Mercury. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe in Apple Podcasts. To learn about sponsoring the show, send us a note at [email protected]. Lastly, two other quick things to flag. First, thank you to everyone who has taken our February AI usage pulse survey.
You can find a link to that at aidailybrief.ai, and I would so appreciate it if you would take just a couple minutes to do that. Anyone who does will get the results before everyone else and help us better share data about where users actually are and their behavior patterns right now. And if you are a company that is interested in building agent teams, registration is
live for Enterprise Claude, enterpriseclaude.ai, and will close on Friday. Now, with that out of the way, let's dive into the headlines. Back in December of last year, Mitchell Hashimoto tweeted, "The AI companies are on track to become GitHub faster than GitHub is becoming an AI company."
A lot of folks agreed, although some, like Ivan Barazan, had thoughts on who it might be. Ivan writes, "Been looking for who will do this for a while, bearish that it will be OpenAI, though." And yet, yesterday, we got this report from The Information that OpenAI is developing an internal alternative to GitHub. According to The Information's sources, the project was spurred by a rise in outages for
Microsoft's code repository platform. OpenAI engineers complained that these outages have stopped work for minutes or even hours at a time. GitHub had 37 outages in February, up dramatically from an average of 17 per month last year. Microsoft has attributed these outages to human error and problems with Azure during a multi-year migration project away from GitHub's proprietary servers. Now, sources did say that the OpenAI project is in its early stages and likely won't be completed for months.
They also noted that the project is intended for internal use first and foremost, but then again, so was Claude Code.
This also isn't the only project to rebuild GitHub for the agentic era. That was also the pitch for the new startup from former GitHub CEO Thomas Dohmke when he left Microsoft earlier this year. Dohmke's idea was the integration of agents and code review tools to help close the loop on fully autonomous code generation. Now, there are a lot of people who are trying to put different lenses on this.
For some, it's the latest example of OpenAI competing with Microsoft as the rift between the two companies expands. Others see it as part of the SaaS-pocalypse theme of companies canceling their software subscriptions in favor of vibe-coded alternatives. I'm not sure any of that's true. It feels to me like it might just be the start of an inevitable shift in this category, given how much code is pumping through these companies' coffers. As Amea puts it, the interesting play is not just hosting code, it's owning the layer that understands how the code connects across services and teams. That's where agents actually need to operate.
Next we move over to Meta, which has formed a new Applied AI Engineering organization. According to a memo viewed by the Wall Street Journal, the new organization will work closely with both the AR and VR organization Reality Labs, as well as the Meta Superintelligence Lab. Now, this doesn't seem to be another broad restructuring of AI at Meta, which by some accounts went through four reshufflings last year.
Instead, it appears to be aimed at filling gaps between hardware, tooling, and model teams. The memo said that the goal was to strengthen Meta's AI initiatives, commenting that the team will build the "data engine that helps our models get better faster." The new org has an unusually flat structure. It consists of two teams of 50 people each reporting into a single manager. One team will work on building interfaces and internal tooling, while the other works on data collection and refinement. The flattened team mirrors the structure of TBD Labs, which consists of around 50 highly paid AI researchers working under chief AI officer Alexandr Wang within the broader Superintelligence org.
It also seems to reflect Mark Zuckerberg's new management philosophy, which he outlined on Meta's most recent earnings call. He said that individual contributors are being elevated now that AI has allowed, in his words, projects that used to require big teams to be accomplished by a single very talented person.
Over in Amazon land, that company is exploring the prospect of building technology to power AI advertising. According to The Information, Amazon's ad business has held discussions over recent months with major websites and ad sales firms about the idea. The plan would involve placing ads in chatbots and agents. One of the websites mentioned as a focus of the pitch was Pinterest, which is in the middle of an AI overhaul. In October, Pinterest launched an AI shopping recommendation assistant that helps users track down clothing featured on the website. You can see how this could be a natural fit for high-intent traffic.
Now, one of the things that people don't really know about Amazon or don't really think about much is how big its ad business actually is.
Last year, Amazon generated $68.6 billion in ad revenue, and while that represents only a tenth of their overall business, it was their fastest-growing division, achieving 22% growth last year. As advertising comes to the AI platforms, there could very easily be a land grab around who gets to host the clearinghouse.
Now, what consumers are going to think about all these AI ads remains to be seen and is part of the conversation that we're having in the main episode. Over in AI politics and chips, U.S. officials are considering a cap on Nvidia chip sales into China in a bid to constrain the power of training clusters. Bloomberg reports that U.S. trade officials are considering a cap of 75,000 chips per customer. Sources said the cap would apply to the newly approved Nvidia H200 chips, as well as AMD's MI325 AI chips.
They noted that chip supply would also be capped at a million total units sold into China, a limit that was set earlier in the regulatory process but up to now hasn't been reported. The million-unit limit is reportedly far lower than the number Nvidia originally proposed, which gives some additional context to recent comments from Commerce Secretary Howard Lutnick. During congressional testimony last month, Lutnick said that Nvidia must live with the license terms set by the government, and presumably this is what he meant. The 75,000 chip cap is also less than half the number sought by Chinese tech giants Alibaba, Tencent, and ByteDance, each of which had reportedly told Nvidia that they would like chip counts of around 200,000 to build their large-scale training clusters.
Within these limits, each company will only be able to build data centers using around 100 megawatts of power. That's a far smaller scale than the multi-gigawatt training clusters planned by Western AI labs, and not even a match for xAI's original build-out of the Colossus megacluster last March, which began at 100,000 GPUs, quickly scaled to 200,000, and is now reportedly at 550,000 units. The big question is whether this is a meaningful constraint or simply window dressing to appease China hawks in Washington. What's more, the entire process is still murky and getting even murkier due to the Iran war, considering that China is a major strategic trading partner. Chips are on the agenda when President Trump meets with President Xi in a few weeks'
time, but it's not hard to imagine that larger geopolitical issues could overshadow those particular trade negotiations. In device land, Apple has unveiled their new line of M5-powered devices at their global event. The new lineup includes MacBook Air and MacBook Pro models, all being the first to feature the new M5, M5 Pro, and M5 Max chipsets. The M5 chips feature a new component known as a neural accelerator to boost AI performance, and it's very clear that Apple has focused on the AI use case when it comes to selling these models. As you might imagine, the only real question on the minds of the AI folks was summed up
by Noah Hirschfeld, who wrote, "The M5 MacBook looks cool and all, but where's the M5 OpenClaw Mac Mini?"
Lastly today, a bit of operator news which I think is sneakily powerful. Stripe has previewed a new feature that would make charging for token use much easier. The feature allows AI app developers to automatically charge a usage fee directly on Stripe's platform. For example, an app developer might want to charge a 30% markup on API calls. Previously, they would have needed to track token usage on their backend and periodically generate lump-sum bills. The new feature allows Stripe to track usage and automatically bill the customer the appropriate amount. Having this infrastructure provided for startups could dramatically change the pricing structure for AI apps.
Currently, most apps charge a flat-rate monthly subscription with usage caps or credit-based systems. Under these models, token usage is a cost center, making profitability difficult to forecast. Last year, we saw multiple startups run into this problem. Most notably, Replit briefly ran at negative 14% gross margins as demand and token volume surged. The issue is only becoming more prevalent as token-hungry agentic startups come to market. Stripe says that their billing tool will integrate with token tracking in model routing platforms like Vercel and OpenRouter. This should make it easy for existing apps to add the feature to their existing stack.
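To make the mechanics concrete, here is a minimal sketch of the kind of token metering and markup math an app developer previously had to build on their own backend. The model name, per-token prices, and the 30% markup are illustrative assumptions, not Stripe's actual API; the point is just that the developer, rather than the billing platform, had to log every call and compute the bill.

```python
# Illustrative sketch only: the backend token metering and markup accounting
# an AI app developer previously had to implement themselves. Model name,
# prices, and the 30% markup are example numbers, not any provider's real rates.

from dataclasses import dataclass, field

@dataclass
class UsageMeter:
    # Hypothetical provider cost per 1M tokens, as (input_price, output_price).
    price_per_1m: dict = field(default_factory=lambda: {"example-model": (3.00, 15.00)})
    markup: float = 0.30          # 30% markup passed on to the end user
    events: list = field(default_factory=list)

    def record(self, model: str, input_tokens: int, output_tokens: int) -> None:
        """Log one API call's token usage."""
        self.events.append((model, input_tokens, output_tokens))

    def invoice_total(self) -> float:
        """Sum provider cost across all logged calls, then apply the markup."""
        cost = 0.0
        for model, tokens_in, tokens_out in self.events:
            in_price, out_price = self.price_per_1m[model]
            cost += tokens_in / 1_000_000 * in_price + tokens_out / 1_000_000 * out_price
        return round(cost * (1 + self.markup), 4)

meter = UsageMeter()
meter.record("example-model", input_tokens=120_000, output_tokens=45_000)
meter.record("example-model", input_tokens=80_000, output_tokens=30_000)
print(meter.invoice_total())  # provider cost plus the 30% markup
```

Stripe's feature essentially moves that metering-and-billing loop onto their platform instead of the developer's own stack.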
Overall, I think this is a massive, massive step, not only in the path towards usage-based
pricing for AI apps, but for that actually being a viable business model. Tokens can now easily be priced as a commodity all the way to the end user, and while in some cases that may mean that users are paying more for what they consume, overall I think it's going to be much healthier and more sustainable for the ecosystem. Good on Stripe for that feature, certainly excited to check it out in our own work.
For now, however, that is going to do it for today's headlines. Next up, the main episode. Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision point. Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster? KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build, Buy, or Borrow Decision, does a great job cutting through the noise with a practical framework to help you choose based on value, risk, and readiness, and how to scale agents with the right trust, governance, and orchestration foundation. Don't lock in the wrong model. You can download the paper right now at www.kpmg.us/navigate. Again, that's www.kpmg.us/navigate.
There's a new standard that I think is going to matter a lot for the enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1, and is launching a first-of-its-kind insurable AI agent.
What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation.
Go to AIUC.com to learn about the world's first standard for AI agents, that's AIUC.com.
If you're looking to adopt an agentic SDLC, Blitzy is the key to unlocking unmatched engineering velocity. Blitzy's differentiation starts with infinite code context: thousands of specialized agents ingest millions of lines of your code in a single pass, mapping every dependency. With a complete contextual understanding of your code base, enterprises leverage Blitzy at the beginning of every sprint to deliver over 80% of the work autonomously. Enterprise-grade, end-to-end tested code that leverages your existing services, components, and standards. This isn't AI autocomplete, this is spec- and test-driven development at the speed of compute. Schedule a technical deep dive with our AI experts at blitzy.com, that's B-L-I-T-Z-Y dot com. This episode is brought to you by Mercury, radically different banking, now available
for personal accounts. I already use Mercury for my business. So when they introduced personal accounts, it made immediate sense for me. I try to bring the same level of intention to my personal finances that I bring to building companies, and most traditional banks just do not feel designed for that.
With Mercury Personal, you can toggle between business and personal in a click.
You can set up sub accounts for specific goals, automate transfers, so projects and savings fund themselves, and put idle cash to work with high yield savings, all without friction. It's built for people who care about how their money moves and want tools that actually keep up. Visit mercury.com/personal to learn more.
Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC. Welcome back to the AI Daily Brief. There has been a lot of talk recently about the competition between Anthropic and OpenAI.
Even before the events of the last week or so, Anthropic had been mounting a complete and total insurgency, leveraging its devotion among coders and the increasing expansion of tools like Claude Code to non-coders to steadily grow, especially in enterprise settings. More recently, Anthropic has also shown that they are not willing to concede consumer AI either.
A great example of this is of course the choices they made around the Super Bowl ad, which, as you know if you listened, I didn't totally agree with, where they basically came at OpenAI without naming them for putting ads in the consumer AI experience.
Now, of course, over the last week, we've had an even more powerful and unexpected catalyst in the consumer response to Anthropic's battle with the Pentagon and OpenAI's response to that battle. What all of this adds up to is a really interesting moment to understand not only the state of the consumer AI battle, but to try to understand what's actually going to drive behavior and results in that battle going forward.
Now, there are a couple of news stories that came up over the last 24 hours that tipped this conversation over for me. The first was that OpenAI announced GPT-5.3 Instant. This is of course an update to their model designed for everyday chatbot use. The model had already been optimized for speed, but the tweaks are seemingly intended to make chatbot sessions a little more natural. OpenAI says that they've reduced unnecessary refusals and toned down, quote, overly defensive or moralizing preambles before answering the question. The intention is to provide a straight answer rather than one bogged down in caveats. In practice, they wrote, this means fewer dead ends and more directly helpful answers. Trying to simplify the message even further, in announcing the feature on X, they called it "more accurate, less cringe." OpenAI gave a few examples of the kind of phrasing that GPT-5.3 Instant has cut out. The model will no longer tell you to stop and take a breath, or make overbearing assumptions about the user's emotional state.
They presented a sample prompt where a user asked, "Why can't I find love in San Francisco?" The previous version of the model began by affirming the user, writing, "First of all, you're not broken, and it's not just you." The updated model has a much more matter-of-fact tone, explaining that this is a common issue and then moving quickly into practical advice.
Now, the problems with ChatGPT's personality have been a longstanding source of complaints on Reddit, even becoming a bit of a meme. One user on the ChatGPT subreddit posted a tweet: "I wake up, something's wrong with the clock on the wall. The numbers are jumbled. My hands aren't right. I tell my wife, she responds,
"That's not just an observation. It's a powerful insight. I scream."
Many users also felt infantilized by the model continuously telling them to calm down or take a breath. As one user on Reddit pointed out, "No one has ever calmed down in all the history of telling someone to calm down." Now, obviously this is a little bit subjective, but I will say here on this change: thank the Altmans for this. I don't know that I've ever disliked the personality of an LLM more than I dislike GPT-5.2. I find it so insufferable, in fact, that despite frequently switching between different LLMs for different use cases, I basically just will not go
to 5.2 at this point. But of course my particular beef is not the subject of the show. The subject of the show is what's going to matter in the battle for consumer AI. And so let's put a pin in this idea that personality and vibes matter. We'll come back to that. A couple other pieces of news contribute to this conversation today. One, Claude Code has rolled out a voice mode capability. To read from Anthropic: voice mode is rolling out now in Claude Code. It's live for around 5% of users today and will be ramping through the coming weeks. This in some ways is a table-stakes feature, but still one that's important.
In many ways, this is the natural next step after the announcement of the remote control feature last week, where you can start a session on your laptop or desktop in Claude Code and then move it over into the app so you can be working on things while you're on the go. I will note here, in order to more evenly distribute my critiques today, that I will also agree with Ali Kamiller, who reposted the announcement and said, "I love Claude Code, but Anthropic's speech-to-text inside of the Claude mobile app is one of the worst dictation options out there, especially compared to ChatGPT's Whisper and Wispr Flow. I'm glad this voice mode now exists, but I'm not betting it will be as good as the other providers. Might be an accuracy versus native build trade-off." I agree entirely. Whereas with ChatGPT one thing that's nice is that I don't have to switch into Wispr Flow, when it comes to Claude I am never using its native voice; I am always going to Wispr Flow, whether I'm on mobile or on the laptop. But again, for the purposes of our conversation, we're talking about what features matter and how naturally these tools
have to interact with how people behave in their daily lives. And the last story, before we try to abstract out to the questions that matter for consumer AI, is one more update on just the absolute surge from Anthropic. Bloomberg reported on Tuesday that Anthropic had reached $19 billion in ARR. That's more than double their $9 billion run rate from the end of 2025 and a significant jump from $14 billion just a few weeks ago.
Anthropic was already seeing strong growth this year after the breakout success of Claude Code over the winter, but this is a whole different level of growth. The latest numbers we've heard from OpenAI are around $20 billion, which also could have grown over the last few weeks, but for all intents and purposes, based on the last information we got from OpenAI, they and Anthropic now effectively have the same revenue. Figures from Ramp seem to back this up. If you go back a year, the market share of AI chat subscriptions for US businesses was about 90% OpenAI and 10% Anthropic. Now, admittedly, this is just one source, this is Ramp, so you have a relatively tech-forward and more advanced business subset, but by January of this year, Anthropic had overtaken OpenAI, and as of their most recent numbers, Anthropic now commands over 60% of business AI payments settled through Ramp. Again, never take any one set of numbers as
gospel, but the point that I want to set up here is that the Anthropic-OpenAI horse race is more of a race than it's ever been. Which brings us back to the core question of what is actually going to matter in the consumer AI battle. We're taking a step away from the enterprise use case for just a minute and looking instead at consumers. Now, a couple of months ago, I might have been tempted to say that Anthropic didn't actually care about this fight. In fact, mostly what we were talking about coming into 2026 was OpenAI versus Gemini on this front. However, between the Super Bowl ad and the recent changes around the Pentagon, Anthropic feels very much in it. So now we're going to talk about a bunch of questions, spread across about six different categories, whose answers I think will shape who wins the consumer AI battle. The first category is use cases and product identity. One of the big questions, I think especially pertinent coming on the heels of
GPT-5.3 Instant being announced as "more accurate, less cringe," is ultimately, for consumers, what matters more: being state-of-the-art on performance versus just vibes? And to the extent it is being state-of-the-art, what is the part of state-of-the-art that people care most about? Is it, for example, just the speed vector? Closely related to this is the question of how much the general consumer user is going to care about work use cases versus more personal use cases like companionship. This is obviously related to but not exactly the same as the vibes question. I would argue that vibes matter in both work use cases and in personal use cases. Like I said, I pretty much only have work use cases and I still was responding negatively to the vibes of GPT-5.2, but I do think it's an interesting question to see how much one product or one model can serve both of these things. One of the things that will be fascinating to see is, as usage of these platforms matures, do we have a lot of people in the overlap of those Venn diagrams or are people kind of organizing
themselves into one or the other? The next question, which I think has pretty significant impacts at least when it comes to Anthropic, is how integral image and video generation are going to be to leading adoption. Now, on the one hand you might say, well, do regular people really care about image and video generation if they're not using it for work? But there is certainly some evidence that the answer is yes. Outside of the AI world, we have the fact that mobile adoption was largely driven by visual media like Instagram, and inside the AI world, we have some evidence that the way that people are using non-text generative tools is often more about communication and meaning than just professional uses. It's not specifically image or video
generation, but I'm thinking of the sound and music example of Suno. The company has reached a couple hundred million dollars in ARR, and it appears that the vast majority of usage is not people who would have previously hired some musician to create a song for them, but instead people writing silly family songs for their vacations and things like that. Now, obviously this image and video generation question matters because Anthropic is doing none of that, and on the other end of the spectrum, Google feels extremely well positioned there, although OpenAI is very clearly not ceding any of that ground. Another question, which is sort of about the state of the
art thing again, but from a slightly different angle, is whether we already have, or will at some point, cross a threshold where, when it comes to the state of the art, good enough is good enough, and so it'll only be rational to care about vibes. One could argue that for many use cases we're already there, and one could further argue that for certain types of use cases, particularly things like voice and writing, state of the art and highest quality is so inherently subjective that state of the art becomes about vibes itself. The answer to this question, though, could have a pretty deterministic impact on how the model companies choose to compete, because if, on average, we've reached a threshold where people aren't going to be jumping around because of model performance, then really vibes are all you're left with. A last question in the use cases and product identity category is what's the average number of models that people will be willing to use.
This is one area where I think there is a dramatic difference between the average user and the
power users. When we do our monthly AI usage pulse surveys, the people that are responding to those are using an average of something like 3.5 models. Those are very enfranchised, heavily engaged power users though, on average they're spending more than 10 hours a week using AI. The adoption
dynamics overall in the industry and the competitive dynamics look really different if the average
number of models that people are willing to use is 1.1 versus 2.1. Think about the multimodal question. If on average 95% of users are only willing to use one model, it might be a prerequisite that you have image or video generation built in. The next set of questions that I think will shape the consumer AI battle have to do with monetization and conversion. One big one is what percentage of users can the model labs actually get to upgrade to a paid account? This sort of sets
the total addressable market for revenue from consumer AI and obviously the size of the pie is going to dictate a lot about the competition for that pie. Now, going a layer deeper on that, another big question is which features, especially outside of work use cases, actually get people to convert. This comes back a little bit to the multimodal question. Are people converting because they run out of access to their favorite model,
which they're using all the time for companionship? Are they converting because they want something to happen faster? Are they converting because they're creating memes that they're sharing in their WhatsApp groups? Each of those has pretty dramatically different implications for how the consumer AI battle shakes out. And lastly, one big one, something that Anthropic is certainly betting will be a big deal: how much will ads in the free tier actually matter? Anthropic is betting that, at least in the short term, they will drive people away from ChatGPT. I, as you probably know, am much less convinced of that. My base case about this is that the answer to the question of what percentage of people they can get to upgrade to a paid account is not going to be sufficient for these businesses to grow the way that they want, which will lead them inevitably back to the ads-in-the-free-tier model. Now, I'd love to be wrong
here, or at least for the people who are thinking about ads to do it in a more creative and value-added way than they're currently exploring. But obviously, if ads do matter to people in terms of their adoption choices, that's going to have a pretty big impact on which models they choose, unless, of course, everyone ends up just having ads in the free tier as a matter of course. The next question, or set of questions, gets a little bit more to the frontier. I think that one of the risks when we're talking about consumer AI is being a little too reductive in how we're talking about the user. Specifically, we're in this paradigm shift right now, as you well know, where we're moving from assistant AI to more agentic AI. Everyone is racing to try to grapple with the implications and actually make it real for their particular set of use cases. It would be tempting, I think, to view that as something that's just for the enfranchised and power users. But I'm not sure
that that's what the evidence suggests right now. Which brings me to the question of: what is the real expansion potential for the total market for agents? Are they just going to be a work thing, or will everyone be using them? Will we have assistants that are running off and doing tasks for us in our personal lives as well? Will even our companionship interactions look a little more agentic in the future? What little evidence we have so far suggests, I think, that people are underestimating the extent to which so-called normies are going to throw themselves into this new agentic era. There are so many millions of people that are not waiting for Claude Cowork to be good and are just diving into Claude Code even though they're extremely uncomfortable with it. We have 5,500 people who are doing Claw Camp right now, hacking their way slowly and painfully in some cases through the morass of OpenClaw, and at least based on my interactions, most of them are not developers by trade. They're not even necessarily particularly technical. They're just folks who are really excited about what the idea of building agents and agent teams could mean for them in their lives. In other words, my base case when it comes to agentic AI is that we are going to radically underestimate the portion of the world for whom that becomes an integral part of consumer
AI, and I think that that could shape the competitive dynamics quite a bit. The next couple of categories have to do with competition and lock-in directly. As adoption matures, one question will be how much integration into the systems that people are already embedded in will matter. Call this the Google Gemini or Apple Intelligence question. Are people going to just default to whatever AI is on their phone, or are they going to make distinct consumer choices beyond that? How powerful will it be that networks like X and Meta have their own AIs integrated into their social networks? Another kind of related question, which also goes back to how many models people are willing to use, is how much integration into the work ecosystem will ultimately matter. Basically, will people on average be fine using one tool at home and a different tool or different platform of tools at work? Certainly the early evidence suggests that yes, people will
be willing to make that separation. In fact, one of the big complaints from enterprise users is that they have to use versions of Copilot at work, whereas they can choose whatever they want from another suite of tools when they're engaging in their personal lives. Interestingly, a division between work AI and home AI might actually make people have more appetite for model switching than if they didn't have that difference. In other words, once you're already going back and forth between one model for work and one model for home, you've got the mental and practical frameworks for model switching, and so maybe adding a third or even a fourth model into the mix
doesn't really bother you as much. Which gets into the question of switching costs. Right now, it feels like the switching costs between these networks and models are extremely low. People can just bounce between the one that they prefer at any given time, and they seem to do so with pretty high frequency. One of the big caveats and provisos to that is something of a moat in memory. If you've spent a bunch of time giving ChatGPT or Claude context about you or your work or a project, it can be really painful to switch that to another platform. Now, as we've recently seen,
companies like Anthropic have tried to minimize this pain. Around the consumer campaign post-Pentagon blowup, they pushed a feature which would allow people to better import memory from their other provider into Claude, but again, it was still a pretty lightweight memory import. Effectively, it was just a prompt that you run in ChatGPT or whatever other LLM you were using, and you paste the results into Claude's memory. For someone like me, this is not going to cut it. I have 20 different projects in Claude, each with their own memory base of files and context, and a simple prompt across the whole thing is just not going to cut it for that. Now, again, maybe I'm not representative of the general consumer user, and so that changes things, but that's exactly why this is a question. Now, one interesting wrinkle, which bridges us to our last section about ethics and regulation, is that I would not be surprised if we see some sort of policy or regulations around data and memory transportability. The fact that I don't have a
good way to export all of my context from Anthropic and take it over to OpenAI might be something that we decide as a society isn't really a legitimate business moat. It is, after all, my memory and context, so shouldn't I be able to, with a single click, transport it to whichever model platform I choose? That will certainly be a debate, and there are reasonable takes on both sides, but I would not at all be surprised, based on the other types of regulations we've seen in other adjacent areas, if that becomes a thing, which obviously would lower switching costs even more. Which gets us to the last category: ethics and regulation. This is particularly pertinent as OpenAI and ChatGPT face a ton of heat after taking a deal with the Pentagon right after Anthropic was unwilling to concede. QuitGPT.org argues that 2.5 million people have taken part in their boycott, and certainly the actual uninstall numbers, as well as the insane growth in app downloads for Anthropic,
suggest that this is not all just bluster. I do think, however, that there's a question of how deep and durable this consternation is. First of all, 2.5 million is a lot, but it's also a lot less than a single percentage point when you're talking about a user base of 900 million. The vast majority of ChatGPT users probably aren't paying attention at all to this stuff, and even for those who are paying attention, if and when we actually get GPT-5.4 (which, by the way, on Tuesday OpenAI posted "5.4 sooner than you Think," with a capital T, which I can only assume means Thursday), how durable are people's complaints going to be?
If 5.4 kicks the slats out of everything, as the excited folks on X are blustering about right now, will any of those 2.5 million come back? I don't know, but obviously those questions have a big impact on how much ethics and principles are actually going to matter when it comes to the long-term questions of adoption. There's also the question of which ethics issues people will actually care about. There are so many things surrounding AI. Are people going to care about job loss, or are people going to care about existential risk, or are people going to care about IP issues and copyright issues and artists' rights?
I think that there is some evidence this week that the partisan cleave is more powerful than
specific discrete AI issues when it comes to all of this. I don't think this is strictly true,
and I think that AI is far less partisan than other areas of American politics right now, which I am massively grateful for. But I also think that part of the reason that the QuitGPT campaign is resonating right now is that just a couple of weeks ago, it started to get into progressive and liberal circles that Greg Brockman was one of Trump's biggest donors.
It didn't organize itself into a full boycott, but there were already people who were dropping ChatGPT for that reason. I don't know what percentage of those 2.5 million who have dropped ChatGPT would identify themselves as progressive or liberal, but my guess is that a fair bit of them
have more issues with the fact that it's the Trump White House that Anthropic is fighting with than just any old White House trying to exert its will on a private company. If you take anything away from this, it's that the consumer AI battle is wildly more dynamic than just who has the best model. There are questions of vibes, use cases, distribution, ecosystem lock-in, monetization, ethics, and so much more. And importantly, this doesn't just matter
because it's an interesting thing to talk about on podcasts. It matters because it's going to shape what products these companies put in front of us. Anyways, guys, that is my exploration of the big questions shaping the consumer AI battle. And for now, that's going to do it for today's
AI Daily Brief. Appreciate you listening or watching. As always, until next time, peace.


