The AI Daily Brief: Artificial Intelligence News and Analysis

Every AI Product Is Becoming Every Other AI Product

3/20/2026 · 27:21 · 5,298 words

Google, Lovable, Replit, and OpenAI all announced what look like the same product in the last two weeks. Critics say it's desperation and strategic dilution — but what if coding capability natural...

Transcript


Today on the AI Daily Brief, why every AI app is turning into every other AI app, and whether that's product confusion or about something more fundamental. Before that, in the headlines, Nvidia CEO Jensen Huang suggests, politely, that maybe AI leaders could stop scaring the ever-loving poo out of everyone.

The AI Daily Brief is a daily podcast and video about the most important news and discussions

in AI. For the ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on the podcast. If you are interested in sponsoring the show, send us a note at sponsors@aidailybrief.ai, and of course, aidailybrief.ai is where you can find out everything that is going

on in the AI DB ecosystem as well, always lots of good stuff cooking there.

But let's dive into the headlines. It is a truth universally acknowledged that there has never, in the entire history of business communications, been any set of people so spectacularly bad at communicating as the contemporary leaders of the AI industry. Really, since the launch of ChatGPT, it has just been a clinic in how not to talk to people

and how not to build public support for what you're building. Nvidia CEO Jensen Huang has finally had enough. Ever since the beginning of generative AI's rise to prominence, Huang has been nothing but optimistic. He has continuously argued, and never moved off his stance, that AI is going to create jobs, and he's never given quarter to any sort of AI takeover theories, instead dismissing

them as science fiction. Now he's calling on AI leaders to follow his lead. At a panel at the company's GTC event, he said, "The desire to warn people about the capability of the technology is really terrific; warning is good, scaring is less good, because

this technology is too important to us."

Going further, in the midst of a growing national security debate around AI, Huang believes that one major national security risk is AI pessimism. This is of course something that we've talked about extensively on this show and an area where I very much agree. Americans consistently rank as some of, if not the, least optimistic about the technology, which

has major implications for everything from adoption to policy and beyond. Huang said that the anger and fear around AI could cause the U.S. to fall behind other nations and I would go further, it absolutely 100% will. Huang then urged AI leaders to bring the conversation back to what the technology actually is, not the highly speculative discussion of what it could become.

He commented, "It is not a biological being, it is not alien, it is not conscious, it is computer software. To say things that are quite extreme, quite catastrophic, that there's no evidence of it happening, could be more damaging than people think." Now of course, reasonable people are going to disagree on the line between warning and thoughtful

discourse about possibilities and outright scaring, but it feels pretty clear to me that

at least someone needs to take on the job of articulating what the positive future with AI could look like because that exists basically nowhere in the discourse right now. And of course things aren't going to get any less controversial from here.

Next up, Jeff Bezos is in talks to raise a hundred-billion-dollar fund to transform the manufacturing

sector using AI. The Wall Street Journal reports that Bezos has met with some of the largest capital managers in the world over recent months. Sources said he met with sovereign wealth funds across the Middle East earlier in the year, and more recently visited Singapore as part of the effort. Investor documents described the fund as a "manufacturing transformation vehicle."

It aims to buy up companies in major industrial sectors, including chipmaking, defense, and aerospace. The effort is linked to Project Prometheus, a startup founded by Bezos last November. The company aims to train AI that understands the physical world for deployment in engineering and manufacturing.

Bezos, it would appear, is applying the private equity model of buying out legacy firms and revamping their tech stack to physical industries. The goal, of course, is to develop the technology, buy up the customers, and build a massively vertically integrated effort to deploy physical AI at scale. Now there is an interesting broader shift here, where even as software starts to eat itself

as AI forces margins down, more and more entrepreneurs are moving back from bits to atoms and exploring the physical world again. Meanwhile, the politics of this one are already fraught, with Bernie Sanders tweeting, "Jeff Bezos, worth $234 billion, plans to replace 600,000 American workers with robots. Now he wants to spend $100 billion to fully automate not just his warehouses but factories

in the US and other countries. Oligarchs are waging all-out war against workers. Fight back." Bernie Sanders also tweeted out a video of himself having a conversation with Claude about, as he put it, AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights.

This one admittedly was pretty weird, but if you're wondering whether Bernie is going

to let this AI stuff go, the answer is clearly no.

Speaking of AI policy, the White House is set to announce a legislative framework for federal AI rules. Axios reports that the administration is expected to instruct Congress on their regulatory preferences today although the details as I record are not yet available. There's some amount of increasing pressure for Congress to get AI regulation on the books

heading into the midterms. Over the past year, the administration has been clear in their position that AI regulation

should be a federal matter, but there's been a lack of consensus on exactly what those federal rules

should look like.

It is, however, increasingly untenable for the administration to resist state regulations

without putting forward their own clear set of policy preferences.

Earlier this week, OpenAI chief global affairs officer Chris Lehane threw in his lot with state regulators, writing in a blog post that, in the absence of a national framework, states should align around the emerging model in California and New York. Also this week, Google's president of global affairs Kent Walker welcomed state coordination on AI and called the approaches from California and New York manageable frameworks.

According to the Axios reporting, this new federal framework will preempt state regulation and tackle the four Cs as previously laid out by AI czar David Sacks. Those topics are child safety, communities, creators, and censorship. Some of these issues are fairly easily resolved. For example, the proposal is expected to codify the president's rate-payer protection

pledge, which requires tech companies to pay for their own energy infrastructure, but other issues are very quickly becoming quagmires. On Wednesday, Republican Senator Marsha Blackburn also released her own discussion draft

of a bill which she claimed represented the administration's views.

That draft included a duty-of-care provision, the rate-payer protection pledge, deepfake protections, and a set of guidelines around content watermarking. Most controversially, the draft would sunset Section 230 of the Communications Decency Act, which protects online platforms from liability associated with user-generated content. While many have called for reforms to Section 230, a full repeal is not something that

is going to just go through without consideration, given that it's pretty much the foundation of the modern social internet. Despite Republicans' reputation for lighter-touch regulation, Adam Thierer writes that Blackburn's massive new AI regulation bill, 291 pages of near-endless mandates, would "make European technocrats blush with envy" if it ever passed.

It represents, he says, a recipe for technological stagnation and hyper-politicization of technology markets and speech that must be completely rejected. So yeah, if you thought we were close to some common-sense rules, we are, it appears, not. Lastly today, Apple's App Store is throwing the brakes on the vibe-coding revolution, and yet many think their rules are out of step with the AI era.

The Information reports that multiple vibe-coding platforms, including Replit and Vibecode, have been blocked from updating their apps unless they make big modifications. The App Store prohibits apps from running code in a way that changes the way the app functions, and that nebulous rule is now being enforced, leading to a crackdown on mobile vibe-coding platforms.

An Apple spokesperson said that the policy wasn't specific to vibe-coding apps, and sources added that Apple is close to reaching an agreement with Replit and Vibecode, with each agreeing to either tweak how previews are presented or remove certain features entirely. Replit said their tweaks involve showing previews in a separate browser rather than in the app. Vibecode said that they had been instructed to remove the ability to vibe-code

apps for Apple devices entirely, and while the policy is theoretically born out of security concerns, there is an obvious chilling effect that some believe is deeply cynical. Gene Burrus, a competition lawyer who works with the Coalition for App Fairness, said, "Apple has a history of not allowing apps or features that create competition on their platform."

And indeed, others are calling for Apple to get with the times, even if it means consumers can create their own software rather than paying the App Store tax.

Kyle Macomber, the CEO of a vibe-coding platform, said, "I think vibe-coding

is really compelling and people want it," and so he hopes Apple will notice the value it brings and work on revised guidelines. Macomber was himself a 14-year Apple veteran before founding his own company, and while he understands the security concerns, he noted that the policies were put in place many years ago.

Another commentator writes that

App Store review is one of the first pillars of the software ecosystem to just completely

buckle under the weight of AI, and that it almost makes building apps not worth it until Apple gets its stuff in order. That said, putting his tongue firmly in cheek, "Why is App Store review taking so long?" he complained, as his agent submitted the five new apps it had built that day to the

App Store. This is a problem that is going to absolutely get worse, not better, so Apple's got to do something here, and I don't think broad, blunt policy is really going to work. Vibe-coding is, however, in a way, the genesis topic of our main episode as well. And for now, we will close the headlines and move over into the main episode.

Alright folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their own client zero. They embedded AI and agents across the enterprise, changing how work gets done, how teams collaborate,

and how decisions move, not as a tech initiative but as a total operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stayed firmly at the center, while AI reduced friction, surfaced insight, and accelerated momentum.

The outcome was a more capable, more empowered workforce.

If you want to understand what that actually looks like in the real world, go to www.kpmg.us/AI,

that's www.kpmg.us/AI. Blitzy is driving over 5x engineering velocity for large-scale enterprises. A publicly traded insurance provider leveraged Blitzy to build a bespoke payments processing application, an estimated 13-month project, and with Blitzy, the application was completed

and live in production in six weeks.

A publicly traded vertical SaaS provider used Blitzi to extract services from a 500,000

line monolith without disrupting production, 21 times faster than their pre-Blitzy estimates.

These aren't experiments. This is how the world's most innovative enterprises are shipping software in 2026. You can hear directly about Blitzy from other Fortune 500 CTOs on the Modern CTO or CIO Classified podcasts. To learn more about how Blitzy can impact your SDLC, book a meeting with an AI solutions

consultant at Blitzy.com, that's B-L-I-T-Z-Y dot com. Quick update on something I've been following.

AIUC-1 is the first real standard for AI agents, developed with Fortune 500 security

leaders to basically define what safe, enterprise-ready AI agents should look like. A little while back I mentioned that ElevenLabs became certified against AIUC-1. This week, two more big players joined: Fin from Intercom and UiPath. What that certification means in practice is real-time guardrails that block unsafe responses, protection against manipulation, and a full safety stack designed for enterprise environments.

And here's why this matters. You've now got leaders across three major AI agent categories, enterprise automation, customer support, and voice, all certifying against the same standard. That starts to look less like a one-off and more like the beginning of a real industry trend.

If you're an operator, your day is a nonstop stream of decisions, and most of them require you to look at the data. You don't need another dashboard.

You need answers you can trust, fast, but the bottleneck is always the same.

The data isn't ready. It's scattered, it's messy, definitions aren't clear, and you're waiting on your data team, or waiting on domain experts for clarification and confirmation. That's the bottleneck today's sponsor, PromptQL, is built to break. PromptQL is a trusted AI analyst for high-frequency decision-making. It connects across warehouses, databases, SaaS, and internal APIs.

No massive data prep or centralization required. It's built for multiplayer input: teammates can jump into a thread, correct assumptions, add nuance, and flag edge cases. PromptQL turns everyday conversations into a shared context, and if something is ambiguous, it doesn't guess.

It escalates to the right expert, captures the correct logic, and gets it right next time.

That's how it delivers trust and accuracy.

Over time, PromptQL specializes to your business, like that veteran employee who just knows things. From simple what is questions to complex what if scenarios, you can model impact and stress test decisions before you commit, all through a simple natural language prompt. PromptQL, the trusted AI analyst for teams with shared context, and messy data.

Welcome back to the AI Daily Brief. Over the last couple days, we have a bunch of stories which, on the face of them, are unrelated. It's different companies announcing new products, or updates to their old products, all trying to jockey for position in the ever-changing AI landscape. And yet, when you look at all the announcements, there is clearly a convergence happening.

The products are starting to mirror one another. We've discussed a version of this trend as the commodification of AI, but it feels like there's something even more going on. Here's how BuccoCapital summed it up: "OpenAI is building a super app, bro, that can do everything. And Lovable can do general

tasks now. Also does everything. Airtable pivoted. You can vibe code there now. I send all my agents to my Mac Mini to fight to the death, and I'll use the strongest

one. Bro, AGI is here." So let's talk about how OpenAI's plans to launch a desktop super app, Google's release of their new vibe-coding experience in Google AI Studio, Lovable's announcement of Lovable general tasks, and Claude Code's announcement that you can use it from Telegram

all have to do with one another.

The temptation, I think, is for people to view these companies, and maybe the AI product industry

more broadly, as failing: throwing everything against the wall and releasing kitchen-sink products that don't really make any sense. I think, though, what we're actually seeing is a recognition that the capability to code does

not just unlock new approaches to software engineering and vibe coding, but basically

everything else in knowledge work. But let's go back and start with what was announced from Google AI Studio. Google AI Studio themselves tweeted, "Vibe coding in AI Studio just got a major upgrade. Multiplayer: build real-time games and tools. Real services: connect live data. Persistent builds: close the tab and it keeps working. Pro UI: shadcn and Framer Motion, and

npm support." Logan Kilpatrick adds one-click database support, sign-in-with-Google support, a new coding agent powered by Antigravity, multiplayer, and backend support, and so much more coming soon. So a couple of things going on here.

First of all, Google is integrating Antigravity directly into Google AI Studio rather than these things being totally separate experiences. Along with that, they are trying to build a more end-to-end experience where you can actually get all the way to applications that can be deployed, as they put it, going from prototypes to production apps.

So a lot of the parts of the announcement are just the boring guts required for that sort of move: integrated databases and authentication, access to modern web tools like Framer Motion, and connections to external services like databases and payment processors. And yet there are also some very Googly parts of this announcement.

Another thing that we've been tracking, especially as OpenAI and Anthropic go tit-for-tat

with coding capabilities around Codex and Claude Code, is that while Google certainly hasn't

withdrawn from the AI coding fight, and this announcement is a proof point of that, they also are clearly trying to compete in areas where they are just in a class of their own, specifically around everything having to do with multimodal. Anything that benefits from having access to the entire corpus of YouTube, for example. We see that in things like the Genie 3 model, and we even see it in the specific ways

that they're pushing this new vibe-coding experience in Google AI Studio, specifically around this idea of pushing real-time multiplayer games.

This is the first use case that they highlight in their announcement post, and I don't

think that that's because they think that there are so many people out there right now who want to build massive multiplayer first-person laser tag games.

I think they're trying to show off a capability set that they believe is very different.

I started playing around with this a little bit, prototyping a game where you take a design from Leonardo da Vinci's notebooks and can actually interact with it in 3D space, trying to turn it into a working machine, almost as a sort of 3D exploratory sandbox type of Myst game. Now, the first iterations of this game experience weren't as visually appealing as I

wanted, so I fired up a different new Google tool that had been updated just the day before. That tool is their updated creative canvas called Stitch. On Wednesday Google Labs tweeted, "Meet the new Stitch, your vibe design partner." Now, the upgrades that they promised as part of this new version included an AI-native canvas, a smarter design agent, native voice integration so you can design by talking, instant

prototypes and transportable design systems. It's really a mass expansion in some ways of what people think of as design. And of course what's going on behind the scenes is that Google is leveraging these new models capabilities to code to make a better design experience. A couple days later, they dropped a set of new starter ideas that show how blurry a lot

of these knowledge work tasks are getting. Their starter idea number one was to take a messy document and turn it into a fully-styled portfolio. And what's clear is that Google has ambitions to be integrating and expanding these experiences in very short order.

Logan Kilpatrick again writes, "Our AI Studio vibe coding roadmap for the next few weeks includes design mode, Figma integration, Google Workspace integration, better GitHub support, planning mode, immersive UI agent, multiple chats per app, simplified deploys, G1 support, and more." Easy App CEO Mostafa Ekinci writes, "Google rebuilt AI Studio from scratch just to add vibe

coding. That's an enormous amount of work for one feature, and it tells you everything about where the industry is headed. Vibe coding isn't a trend anymore, it's the default interface."

And that, of course, is what I think is the broader point in all of these announcements.

So what's the next one? The next one is Lovable for general tasks.

Lovable CEO Anton Osika writes, "Lovable has always been for building apps.

Today it also becomes your data scientist, your business analyst, your deck builder, and your marketing assistant." This is a big step towards what Lovable is becoming: a general-purpose co-founder that can do anything. Some of the examples they show to show off the new tools include dropping in a CSV file

of health industry data to find a startup idea, taking an application that you've built in Lovable and then creating marketing assets to help launch it, or creating a pitch deck for that app. Now what's interesting is that this is actually quite similar to what Replit announced with Replit Agent 4 a couple weeks ago.

In his announcement tweet, Replit CEO Amjad Masad wrote, "Software isn't merely technical work anymore, it's creative. Introducing Replit Agent 4: design on an infinite canvas, work with your team, run parallel agents, and ship working apps, sites, slides, and more." So let me show you an example of how these things are all blending.

What you're looking at right now, or hearing me describe if you're just listening, is effectively a slides-as-a-web-page view of our February AIDB usage pulse survey. Even though the information is still conveyed in slides, you can interact with it like it's a website. Basically, I've built the website version and the downloadable slides version at the same time using Replit Agent 4, and it turns out that this pattern of the blurring of information

outputs is not something just now being explored by these companies for the first time. For example, when you're working in Gamma, when you start something new, you have the option to create a document, a presentation, a mobile experience, or a web page, or you can do it all at the same time.

When you're using GenSpark or Manus to build slides, what's happening behind the scenes is that their general agent is using code to deliver against anything that you're actually looking for as an output. In other words, the GenSpark general agent is a coding agent with the coding part abstracted and the output format placed front and center.

Which is why I think people are a little off with one of the common responses that I've

seen to the Lovable announcement: that this is a move of some type of desperation. Adam Barto writes, "First sign that Lovable is dead. Pivoting to general assistant is the most investor-pleasing move you could do. Their app building business is obviously going nowhere and investor money is drying up. Why should anyone use Lovable instead of the already established ecosystems?"

Now, for what it's worth, just a week ago Lovable reported that its ARR jumped from

$300 to $400 million in a single month, so I'm not sure that it's fair to say that

its app coding business is going nowhere, but Adam's hardly alone in this sentiment.

Tyler Angert writes, "This is the founder equivalent of becoming a paperclip maximizer.

Increase shareholder value, they said.

We must increase our TAM to 8 billion, therefore we will literally make our core product

a kitchen sink for general-purpose work. Why? Just make separate products if you were so inclined. What a completely dilutive move. Going as horizontal as possible with no opinion."

Another commenter writes that complete strategic dilution may not go well; it's a huge reach to go from building apps to doing anything a business needs. Now, of course, not everyone agrees. Prajwal Tomar writes, "People say Lovable is spreading too thin by going beyond code, but think about it.

You need to build the MVP, analyze user data, pitch investors and run marketing.

It just became the tool that does all of that in one place.

No more jumping between five different AI tools.

This saves so much time." And while that's a totally reasonable argument about the product value here, I think Peter Yang has the right of it. He writes, and in this case this was after the Replit Agent 4 launch, "Code is the foundation of all knowledge work.

If an agent can write code, it can also generate apps, presentations, animations, and more." Indeed, he resurfaced with that same sentiment around the Lovable announcement, writing, "Code is the foundation of all knowledge work. Another proof point right here." Now, this is of course something that we've talked about on this show before. In January I did an episode called "Code AGI Is Functional AGI" about why the advances

in coding capability matter not just because of the way they would impact software engineering or even vibe-coding tools, but because of the other capabilities they unlock. And providing a little evidence that even thinking about vibe-coding as its own category might be increasingly reductive: one interesting finding from our AI usage pulse survey for February, which admittedly is

at the very vanguard of users, given that it's all of you guys answering, who, as listeners to a daily AI show, are not going to represent the average human being, let's just put it that way: still, 71.3% of respondents were vibe-coding in February. 62% had some use case that went beyond just assistance into the realm of automated or agentic AI, and while we saw coding use cases continue to be the most common and highest-

reported-value use cases, we also saw a real diversification from coding into other strategic knowledge work areas like data analysis and strategic planning. For some, what's happening is just completely inevitable. Replika founder Eugenia Kuyda writes, "2026 will be the year when every AI product converges into some version of OpenClaw."

Ben Vinegar puts it more poetically: either die a codegen tool or live long enough to become the everything app. Which brings us to OpenAI. On Thursday night, the Wall Street Journal released an exclusive report about OpenAI's plans to launch a desktop super app that would combine ChatGPT, Codex, and their browser into a single experience.

The WSJ points out that the strategy marks a shift from OpenAI's previous approach of launching lots of standalone products that all had to stand on their own two feet. Now, this of course gets back to those comments from CEO of applications Fiji Simo, where she told the company that they were going to stop focusing on side quests and spreading their efforts across too many different areas.

Peter Yang again writes, "I think OpenAI's strategy is pretty clear.

One, more people have ChatGPT installed than any other AI product. Two, make ChatGPT great for coding and knowledge work. Three, make it a personal assistant like OpenClaw that knows you and can do whatever you want. They just need to get to two and three faster before people switch to Claude or Gemini for the same use cases." Swyx, aka Sean Wang from Latent Space, pointed out meanwhile that a very long time ago

he had written a blog post with the line, "attempts at building superapps have repeatedly failed outside China," but it's clear that both ChatGPT and Claude Cowork are well on their way to being AI superapps, except instead of every app having their own app, they make themselves legible to the AI overlords with MCP, UIs and skills, and OpenClaw markdown files. Speaking of OpenClaw, one of the other things that we've been watching

is the way that Anthropic has been slowly going one by one through the features of OpenClaw that people like and adding them into the core Claude Code or Claude Cowork experience. The most recent announcement on that front comes from Tarek from the Claude Code team, who writes, "We just released Claude Code channels, which allows you to control your Claude Code session through

select MCPs, starting with Telegram and Discord." Basically, you can now message Claude Code directly

from your phone, which was of course a huge draw of the OpenClaw experience. Now, one observer thinks that this shows OpenAI and Anthropic heading in slightly different directions. He writes, "OpenAI is merging ChatGPT, Codex, and Atlas into one superapp while Anthropic ships features like channels, persistent memory, and 10K skills in the same month. Two very different strategies playing out in real time. One is consolidating

everything under one roof, the other is making the core tool so extensible that the ecosystem builds itself around it. And while he may be right that there is a slight difference in

strategy, I think that might have to do more with the starting point of where they are,

in other words, OpenAI having to deal with product sprawl, rather than actually being a different strategy. It feels a little bit like both ends are working towards the middle here, toward a very similar type of experience. Indeed, certainly Fiji Simo herself seems to suggest that this is more about having a Codex-plus experience than it is about having Codex sit alongside a bunch of other experiences.

She writes, "Companies go through phases of exploration and phases of refocus;

both are critical. But when new bets start to work, like we're seeing now with Codex,

it's very important to double down on them and avoid distractions.

Really glad we're seizing this moment." Put differently, it may not be that OpenAI is trying to create a super app; it's that they believe that inherently Codex is their super app, and they're organizing everything around it. Now, even if I'm right, and this convergence does not show flailing and a lack of product vision, but instead a natural path from coding capabilities to broader knowledge work capabilities, that still doesn't mean that the everything-app approach

will actually work from a product standpoint. Will writes, "On one hand, I will be happy to have GPT Pro in Codex, but on the other, I have really come to appreciate all the focus and attention they've

placed on making a purely software-engineering-focused product." And I think it is worth noting

that the other thing that's going on here is just the first large-scale startup competition

in an era where there are officially no moats. Ed Sim writes, "When shipping new features costs near zero, every company becomes every company, and when switching costs are also near zero,

who wins? The next few months are going to be interesting." I think it's more than the next few

months. I think that we are in a totally different type of company-building paradigm that we have barely wrapped our heads around. On the one hand, there are no barriers to entry. People can build and spin things up faster than ever before; non-technical founders can build the early versions of their products. And yet on the other hand, basically all the traditional moats have fallen. No barriers to entry but also no moats is a very strange and kind of viciously competitive

environment that makes continual pivots feel like the only operational strategy. In AI land, nothing is going to sit still for long. For now, if nothing else, we have a lot of fun new toys to play around with, and for that alone, I am grateful and excited. For now, that is going to do it

for today's AI Daily Brief. I appreciate you listening or watching, as always, until next time, peace!
