Today on the AI Daily Brief, the week the global AI conversation hit a whole new level.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.
“Alright friends, quick announcements before we dive in.”
First of all, thank you to today's sponsors: AssemblyAI, Robots and Pencils, AIUC, and Blitzy. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, send us a note at [email protected]. While you're on aidailybrief.ai, you can subscribe to our newsletter, which is newly restarted, and which is going to have all of the links to all of the articles and posts that I reference in the show.
And you can also learn about all our various other ecosystem initiatives, like Claude Camp or Enterprise Claude, registration for which is open until the end of next week. The last couple of months have seen a steadily growing acknowledgement of just how significant the disruption of AI is. This shift came first to those who are actually in the industry. Just last week, OpenAI co-founder Andrej Karpathy wrote, "It's hard to communicate how much programming has changed due to AI in the last two months.
Not gradually and over time in the progress-as-usual way, but specifically this last December.
“There are a number of asterisks, but in my opinion, coding agents basically didn't work before December and basically work since.”
The models have significantly higher quality, long-term coherence, and tenacity, and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow. As a result, programming is becoming unrecognizable. You're not typing computer code into an editor, the way things have been since computers were invented. That era is over.
You're spinning up AI agents, giving them tasks in English, and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claudes, with all of the right tools, memory, and instructions, that productively manage multiple parallel code instances for you. The leverage achievable via top-tier agentic engineering feels very high right now. In my opinion, this is nowhere near a business-as-usual time in software.
Now of course, this is not limited to software. On the earnings call where he explained the 40% reduction in the Block team, Jack Dorsey specifically noted the leap that AI had made around the same December timeline. And indeed, Wall Street is one of the main places where the recognition of the phase shift in AI is fully coming home to roost. Michael Gayed of The Lead-Lag Report wrote this week, "I once tweeted AI is BS. Have been playing around with Perplexity Comet to automate workflows.
“It's not BS, it's going to fundamentally alter the world. I believe it now. By the end of the year, I believe we will see huge layoffs.”
Block is a sign of what's to come. This humbled shift, a throwing up of the hands and saying I was wrong to doubt this, maybe got its best expression this week in a memo from legendary Oaktree investor Howard Marks. The memo is called "AI Hurdles Ahead." In it, Marks writes, "My main reason for writing this addendum is to address significant changes that have taken place in AI over the three months since I published 'Is It a Bubble?'"
First he said, "There's the pace at which developments in AI are occurring. That speed is unlike anything we've seen before now, and this has implications that have never existed."
AI is growing at speeds that greatly outpace the technological innovations of the past. Nothing has ever taken hold at the pace AI has. It's able to change the world at a speed that approaches instantaneous, outpacing the ability of most observers to anticipate or even comprehend. The second important thing that's happened has been an incredible leap ahead in AI's capabilities.
Level 1 is chat AI. Level 2 is tool-using AI. Level 3 is the autonomous agent. At this level, the user doesn't tell AI what to do. The user gives it a goal, as well as the parameters of the desired output. The agent does the work, checks it, and submits a finished product. This is labor replacement at the task level. Not assistance, replacement.
Marks continues: the most significant thing that distinguishes AI is something we've never dealt with in connection with prior technological developments.
AI's ability to act autonomously. The bottom line, Marks concludes, is that AI is very real, capable of doing a lot of work that heretofore has been done by knowledge workers, and growing extremely rapidly in terms of applications. What we see today is only the beginning. As I mentioned above, if I had to guess, I'd say its potential is more likely underestimated today than overestimated. He does point out that it's not clear that the market is pricing that disruption the right way,
but that the change is undeniable. And yet still, when we look back at the history of this particular week in time, this will be the Citrini report week. The piece by Citrini Research is called "The 2028 Global Intelligence Crisis," and walks through a doomsday scenario where effectively AI is so good that it's actually bearish, creating a doom loop where AI does everything, allowing companies to cut human workers, which reduces spending, which reduces available capital from consumers,
which forces companies to lay off more, and so on and so forth. The note, while admittedly an artifact of speculative exploration, hit with the force of a neutron bomb in a Wall Street environment that finds itself extremely destabilized and unclear what to make of AI change. Is it an infrastructure bubble? Is it the SaaSpocalypse where AI does everything?
Can it be both at the same time?
Despite the report having a high vibe-to-substance ratio, it was extremely resonant.
So much so that much of the rest of the week has been responses and rejoinders.
Economics opinion writer Noah Smith wrote a response called "The Citrini post is just a scary bedtime story." He summed it up: AI might take your job, but it probably won't crash the economy. And if it does, we know how to deal with it. Anyway, if you don't like posts about AI, I have some bad news. For the next few years, there are probably going to be a lot of them. It's not often one gets to live through an industrial revolution in real time,
especially one that moves so quickly. There will be very few pieces of the economy, if any, that this revolution doesn't touch. And it will have major implications for other things I write about, like geopolitics, society, etc. AI is not going to be a special, compartmentalized topic for a long time. It's going to be central to a lot of what's going on. If you find that boring, well, all I can say is we don't get to choose the times we live in.
Every couple of weeks, someone comes out with a big post about how AI is changing everything, and that post goes viral and everyone talks about it for a few days. A couple weeks ago, it was Matt Shumer's "Something Big Is Happening." This week, it's Citrini Research's "The 2028 Global Intelligence Crisis," and yes, the title is in all caps. The post paints a picture of a future in which AI disrupts
lots of different kinds of white-collar work and service industry business models and industries, like software, finance, business services, and so on, and in which this disruption causes an economic crisis. Noah continues that this is really two theses in one: a microeconomic thesis about which industries and jobs AI will disrupt, and a macroeconomic thesis about what this will do to the economy overall.
Now, I'll pause the reading there, but Noah goes on to basically make the point
that, among other things, the Citrini post operates from the implicit idea that there will be no policy response, a fairly confusing view given the magnitude of the disruption they are articulating. The Kobeissi Letter also took on the Citrini post. Their response essay was called "What If AI Doesn't Actually End the World?" They write: what's obviously true: AI is not another software feature or efficiency gain. It's a general purpose capability
shock that touches every white-collar workflow simultaneously. Unlike any revolution in history, AI is getting better at everything simultaneously. But what if the doomsday scenario is false? It assumes demand is fixed, that productivity gains don't expand markets, and that the system cannot adapt faster than the disruption. We believe, they continue, there is a second path that is being dramatically underpriced. The same takedowns
that look like early signs of systemic collapse may ultimately be the start of the largest
productivity expansion ever. While our analysis is not a certain outcome,
“it is important to remember that humanity has always prevailed, and the free market always works”
itself out. A couple of the key pieces of the argument from The Kobeissi Letter. One is something that I've talked about frequently on this show: that the doom loop, or any long-term job loss scenario, assumes that demand is fixed. The bearish loop, they write, creates a simplified linear model: AI gets better, businesses reduce headcount and wages, buying drops, businesses invest in AI again to defend their margins, and the downward cycle repeats.
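That loop is, at bottom, a small feedback model, and its conclusion hinges almost entirely on whether demand is treated as fixed. Here's a toy simulation of that sensitivity. This is my own illustrative sketch with made-up parameters, not a model from The Kobeissi Letter or the Citrini report:

```python
# Toy model of the "doom loop" vs. demand-expansion dynamic.
# All parameters are invented for illustration only.

def simulate(periods, cost_decline=0.2, demand_elasticity=0.0):
    """Each period, AI cuts unit cost by `cost_decline`.
    `demand_elasticity` is the fraction of each cost saving that
    converts into new consumption; 0.0 reproduces the fixed-demand
    doom loop. Returns relative employment (workers needed scales
    with unit cost times total demand)."""
    cost, demand = 1.0, 1.0
    employment = 1.0
    for _ in range(periods):
        cost *= (1 - cost_decline)                       # AI gets cheaper
        demand *= (1 + demand_elasticity * cost_decline) # demand may expand
        employment = cost * demand
    return round(employment, 3)

fixed = simulate(10, demand_elasticity=0.0)    # doom loop: demand never grows
elastic = simulate(10, demand_elasticity=5.0)  # demand expands as costs fall
```

With demand held fixed, employment decays every period, the doom loop. Give demand even moderate elasticity and the same cost declines grow the pie instead. The point isn't the numbers, which are invented; it's that the scenario's conclusion is an input assumption, not an output.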
This assumes, they write, a completely stagnant economy. History suggests otherwise. When the cost of producing something collapses, demand rarely stays flat. It expands. When compute cost fell, we did not consume the same amount of compute more cheaply. We consumed orders of magnitude more of it and built entirely new industries on top. AI decreases costs in every sector, and when service costs go down, purchasing power increases
with or without wage growth. The doom loop becomes dominant only if AI replaces labor without materially expanding demand. The optimistic scenario emerges if cheaper compute and productivity yield entirely new categories of consumption and economic activity. The way that I've put this in the past is that if the cost to produce code is 1/100th of what it used to be, we don't get 1/100th the coders; we get a hundred times more code. The Kobeissi Letter also argues that labor markets don't vanish
but restructure. They write: a key concern is that AI disproportionately affects white-collar employment,
which drives discretionary consumption and housing demand. This is true and a legitimate concern, particularly as the wealth divide is already so massive. However, AI struggles with physical-world dexterity and human identity. Skilled trades, hands-on health care, advanced manufacturing, and experience-driven industries retain structural demand. In many cases, AI complements these roles rather than replaces them. More importantly, AI lowers the barrier to entrepreneurship.
When one individual can automate accounting, marketing, support, and coding tasks, small-scale business formation becomes easier. We are bullish on small businesses. In fact, the removal of barriers to entry through AI may be the solution to flatten the wealth divide that we currently face. The internet killed certain job categories but created entirely new ones. AI may follow a similar story, compressing some white-collar functions,
while expanding self-directed economic participation elsewhere. In their conclusion they write: AI amplifies outcomes. It can amplify fragility if institutions fail to adapt, and it can also amplify prosperity if productivity outpaces disruption. The takedowns are signals that workflows are being repriced and cognitive labor is becoming cheaper, a clear transition. But transition is not the same as collapse.
As every other major technological revolution has looked destabilizing at the outset,
the most underpriced possibility today is not dystopia, it's abundance. AI may compress rents, reduce friction, and restructure labor markets, but it may also deliver the largest real productivity expansion in modern history. And it wasn't just internet newsletters that were publishing rebuttals; no less than Citadel Securities got in on the game with a piece they called "The 2026 Global Intelligence Crisis,"
and they pointed out that much of the evidence just points in a different direction. Easily the most referenced part of the Citadel rejoinder is the chart of Indeed job postings for software engineers, which shows them going up dramatically over the last few months. They also point out that maybe the biggest X factor in all of this is AI diffusion speed: not how much of the white-collar work AI could do right now theoretically,
but at what speed enterprises will actually allow it to do that work. Citadel writes, "The first-order presentation of AI adoption is generally a binary question: Do you use AI?" The more important question, insofar as it relates to the AI
displacement narrative, is: how intensely is AI being used for work?
Looking at St. Louis Fed data, they say, "The data presents little evidence of any imminent displacement risk." Recursive technology, they point out, is not recursive adoption, and the risk of displacement declines with a slower pace of adoption. Finally, calling upon the example of history, they write, "In 1930, John Maynard Keynes wrote 'Economic Possibilities for Our Grandchildren,'
predicting that productivity growth would be so powerful that by the early 21st century,
the work week would fall to 15 hours." He was directionally correct about productivity growth, but profoundly wrong about the labor market implications. Rather than working dramatically less, societies consumed dramatically more. Why? Because rising productivity lowered costs and expanded the consumption frontier. Preferences shifted towards higher-quality goods, new services, and previously unimaginable
forms of expenditure. Leisure increased modestly, but material aspiration expanded far more.
History suggests productivity gains do not automatically translate into labor withdrawal or demand
collapse, as they alter the composition of demand, expand real incomes, and generate new industries. Keynes underestimated the elasticity of human wants. You've heard me talk about AssemblyAI and their insanely accurate voice AI models,
but they just shipped something big. Universal 3 Pro is a first-of-its-kind
class of speech language model that lets you prompt speech recognition with your own domain context and vocabulary, instead of fixing transcripts in post-processing. It's more flexible than traditional ASR and more deterministic than LLMs, so you get accurate output at the source, and can capture the emotion behind human speech that transcripts often miss, all without custom models or post-processing hacks.
And to celebrate the launch, they're making it free to try for all of February. If you're building anything with voice, this one's worth a look. Head to assemblyai.com/freeoffer to check it out. Today's episode is brought to you by Robots and Pencils, a company that is growing fast. Their work as a high-growth AWS and Databricks partner
means that they're looking for elite talent ready to create real impact at velocity. Their teams are made up of AI-native engineers, strategists, and designers who love solving hard problems and pushing how AI shows up in real products. They move quickly using RoboWorks, their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months. They don't build big teams. They build high-impact small ones.
The people there are wicked smart, with patents, published research, and work that's helped shape entire categories. They work in velocity pods and studios that stay focused and move with intent. If you're ready for career-defining work with peers who challenge you and have your back, Robots and Pencils is the place. Explore open roles at robotsandpencils.com/careers,
that's robotsandpencils.com/careers. There's a new standard that I think is going to matter a lot
for the enterprise AI agent space. It's called AIUC-1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and who are just an absolute juggernaut right now, just became the first voice agent to
be certified against AIUC-1, and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation. Go to AIUC.com to learn
about the world's first standard for AI agents. That's AIUC.com.
Weekends are for vibe coding. It has never been easier to bring a passion project to life,
so go ahead and fire up your favorite vibe coding tool. But Monday is coming, and before you know it, you'll be staring down a maze of microservices, a legacy COBOL system from the 1970s, and an engineering roadmap that will exist well past your retirement party. That's why you need
Blitzy.
Deploy Blitzy at the beginning of every sprint and tackle your roadmap 500 percent faster.
Blitzy's agents ingest your entire code base, plan the work, and deliver over 80 percent
autonomously: validated and tested, premium-quality code at the speed of compute, months of engineering compressed into days. Vibe code your passion projects on the weekend; bring Blitzy to work on Monday. See why Fortune 500s trust Blitzy for the code that matters at blitzy.com. That's B-L-I-T-Z-Y dot com. And this idea of human wants, both in terms of their ability to expand but also just in terms of their manifestation in reality rather than theory, was the subject of
my pondering, written in the midst of a 20-hour forced layover in the Amazonian rainforest, that I called "We're All Missing the Most Important Market Force That Will Shape AI," or, "My Plane Made an Emergency Landing in the Amazon, and All I Got Was This Lesson About the Future of the World." The piece reads: It's a weird week, man. A bomb cyclone blizzard with the force of a Category 2, nearly Category 3, hurricane shut down New York and the rest of the East Coast. This was problematic for lots of reasons, not least of which was that it completely torpedoed our family's return from Uruguay to the Hudson Valley. Meanwhile, back home, the latest AI-doomer sci-fi,
I say that with a lot less derision than it probably sounds, struck a nerve deep enough to rip the throats out of IBM, Visa, and many others, just because of what AI might do. As I sit here in Manaus, Brazil, I find myself contemplating how my family's experience over the last 24 hours
or so demonstrates just how wrong I think we are about how AI ends up playing out in the economy.
I'll give you a moment to finish laughing at the utter LinkedIn-ness of that statement,
and then let me explain why we're all missing the most important market force that will shape AI.
When we got the notification that we were making an emergency landing in Manaus, the trip had already been another calamity. A few days earlier, in the middle of the night, we got the text notification from Delta that, almost assuredly, our upcoming trip from Montevideo to JFK was going to get 86'd by the impending snowstorm. The options for rescheduling weren't great. It was basically stick around Uruguay until Friday, when we were supposed to have gotten home
Monday morning, or scurry on Monday to do a new multi-leg trip through Sao Paulo and Atlanta. We figured that even if things were still gnarly in New York on the back end, solving that from Georgia was easier than solving that from Sao Paulo. Choosing to flee, we drove the two hours from Jose Ignacio to the Montevideo airport, returned our tiny VW rental, and let the kids scarf some Mickey D's before the 27 hours of upcoming
travel. In retrospect, it was the last peaceful moment of optimism we'd have for some time. We got to the check-in line, and instantly it was clear that something was wrong. "I can't check you in," said the attendant. Wait, what? Why? "We can't check in anyone whose final destination is New York." But the storm is over. We're not even getting there until tomorrow, when it will be even more over, and we've got stops in Sao Paulo and Atlanta. Let us get stuck there.
"I can't. It's our policy." And after a call to her supervisor, it remained their policy. We didn't have a lot of great options: turn around and hang out for another five days, or call Delta to have them delete the final Atlanta-to-JFK legs so we could at least get to the U.S. Atlanta it was, and after some frantic searching, we booked what seemed like the last rental car in America to do the 15-hour drive home from Hartsfield-Jackson.
Fast forward about 10 hours, we've made it through the first leg of the flight, a couple of hours in the actually kind of excellent GRU airport in Brazil, and all of us, including four-year-old
Gus and seven-year-old Alden, are passed out dreaming of a next day full of a dozen Wawa and Red Bull stops. That is, until at 4am the captain gets on the loudspeaker and says that, sorry, a generator has stopped working and we have to make an emergency diversion into Manaus. That's the capital of the Amazon, for those keeping track at home. We hadn't even made it out of Brazil. So much for "at least if we get stuck, it will be in the U.S." Airports are stressful at the best of times. 300 people dropped out of
the sky into a place many of them had never heard of, and at the mercy of the gods of airplane
mechanics, hotel availability, and Brazilian customs authorities, and you've got something else entirely. But this is supposed to be the setup to a story about AI, right? We're now sitting here at the lovely Hotel Villa Amazonia in the old part of Manaus, waiting for a room to be ready so we can catch a few winks before trudging back to catch another plane, hopefully a new one to be honest, that will somehow, some way, get us back to the U.S. It is absolutely undeniable how much AI
has made this experience better. I've used LLMs to translate back and forth in a language I barely know how to say thank you in, and to research the safety profile of different areas. "It's not a war zone, but it's not low-risk either." Gee, thanks, ChatGPT. Real reassuring. And I've also used LLMs to hunt for rental cars, plan driving routes, and of course reassure myself that Airbus A330s really can fly with just one generator in those tense 45 minutes between when we got the announcement
and when we touched down. And yet, as awesome as AI has been, every part of the story has really been about human interaction and human discretion, whether it went for us or against us. The attendant at MVD and her supervisor who didn't buck a clearly stupid policy that might have made sense 24 hours earlier, but certainly didn't anymore. The customer service reps of the Delta Diamond Medallion status line, who ranged from wildly unhelpful on the one end of the
spectrum to hustling to find us a flight to Philly on the other. The hotel staffers who overlooked that we hadn't technically booked our kids on the reservation, and who hustled to get us in a room before the 3pm check-in time. AI has been extremely helpful during this trip, but at no point would I have rather interacted with AI than these humans. And it's not that I prefer human interaction out of some historic sense of legacy, of the way things have always been done.
In fact, I'd venture to say that I'm exactly the type of person who, in many situations, would wildly rather interact with an anonymous robot. The reason that I prefer human interaction is the possibility of exception. Human systems are built with an implicit assumption of discretionary non-compliance. Rules tend to be written much tighter than anyone expects them to be followed. Everyone knows this: the rule writers know it, the managers know it, the humans interacting with the system as customers know it. Human judgment is the shock absorber between the world the policy was designed for and the messy reality. To be clear, this is a feature, not a bug. The whole system would be significantly more brittle if everyone just followed the rules perfectly. You can probably see where I'm going with this. A world where AI agents perfectly followed the policy all the time would be, in many, many real-world contexts, much worse than
the one where humans follow it only imperfectly. Call it the paradox of perfect compliance. But couldn't AI have grace and flexibility programmed in as well? Sure, and as we design
agent-led systems, it will probably be important to remember that in people's real lived
experience, exceptions are as important as rules. But kindness as governance, an unspoken and yet nearly universal aspect of well-functioning human systems, is hard to program. Small acts of bureaucratic rebellion tend not to be the byproduct of clear, rational calculations. Instead, they are felt decisions. They are a split-second judgment call that comes on the heels of the utterly relatable exhale of an exhausted parent at their wits' end,
just trying to keep it together for their even more exhausted kids. There's something in the pleas of the person being helped that suggests that, as bad as this situation is, there's something else they're going through that's even harder. Which brings us to this weekend's market freakout du jour. The latest AI-doomer fanfic/thought exercise is a fictional dispatch from 2028 describing an AI-driven economic crisis. This one isn't about the fallout of an AI bubble
popping because of a performance plateau. Instead, it's a meditation on what happens if AI actually gets as good as we think it will. Basically, so bullish it's bearish. The piece is from well-respected
market research firm Citrini and is well-constructed and worth reading. And boy, did people read it.
9 million views on the X post alone. Bloomberg, the Wall Street Journal, and many more wrote articles
about the piece, as the latest leg of the SaaSpocalypse cleaved billions off of tech and finance stocks. In other words, markets actually moved on a literal work of fiction. There is a ton of great debate to be had around the piece, which is, of course, happening right now, and which makes the outcome of them having shared it likely better in the medium and long run than if they hadn't shared it, even if DoorDash stockholders don't really agree right now.
I'm not really interested in a point-by-point rebuttal. What I do want to point out is that like most analysis on both the bear and bull side, it rests on an assumption so deeply embedded that almost no one questions it, that because markets reward efficiency, efficiency is inevitable. This efficiency gospel isn't exactly wrong, but it mistakes means for ends. And here is my main point. Markets don't exist to be efficient.
Markets exist to serve human preferences. Outside of the efficiency gospel, the value of efficiency is primarily in how it improves a company's ability to serve human wants and needs, not an end in and of itself. Confusing the two is like saying the point of a restaurant is great ingredients and a clean kitchen. Too much of the AI discourse on both bear and bull sides makes exactly this mistake. We've thought a lot about how much more efficient AI will make things,
but too little about what we and other humans of the future are going to want. AI might make every part of a company's operations more efficient, but will that company's
customers actually want to interact with the new, more efficient version on the other side?
What are the chances that they actually reject it in favor of a more human version? Will they actually be willing to pay a premium for a less ruthlessly efficient experience, because they like that version of the experience better? Markets make this confusing. Investors are the high priests of the efficiency gospel, and the day-to-day excitement of market moves tends to lead more media attention to be focused
on the stock story than the value created for the end consumer. Indeed, for the market priests and priestesses, the value to the end consumer is actually secondary to the value to the shareholder. But that only lasts for so long. A company can live for a long time because the markets like it, but not forever. Ultimately, the buck stops with the customer, and when it comes to the customer, human institutions are not outcome-generating machines, at least not exclusively. In many cases,
they're also or even primarily agency-validating systems. There's plenty of evidence to suggest that people are willing to pay for the possibility of being an exception. The chance that someone will look at your situation and deviate from the script, the knowledge that the person across
the counter could break the rule for you, even if they don't. Friction isn't always waste.
Think about all the ways capitalism has invented for us to transform the possibility of exception into the exception as the norm: loyalty programs, status tiers, premium service. The entire premium loyalty economy is a multi-billion dollar bet that people will pay for guaranteed access to generally favorable human discretion. A couple of years ago, I decided I was being stupid
not to concentrate airline loyalty on a single airline, and so picked Delta, spending a bit of scratch on a Delta SkyMiles card, so I got to Diamond last year. Turns out there's a special 24-hour line just for Diamond members, and man, have I put that thing to the test the last few days. Even as Reddit rages at two, three, four, even five-hour wait times with Delta in the wake of the blizzard, I've been able to get a real live human being on the phone in under a minute a half dozen times. The point is, Delta isn't trying to automate the Diamond line. The Diamond line is the product;
automate that, and you've eliminated the thing people are paying for. I'm not trying to be Pollyannaish about the magnitude of AI disruption. Anyone who listens to the podcast knows how enormous
a change I think we're living through, and how profoundly challenging this next middle part could be,
even though I'm optimistic for the long term. But a big strand of the most urgent concerns is predicated not just on the scale of disruption but the speed. This type of doomerism rejects comparisons to the past, because those paradigm shifts were more gradual while this one is happening everywhere all at once. The core question these arguments tend not to grapple with is: just because
AI could do something, will it always be called upon to do so? If you live in the efficiency gospel,
the answer is of course yes. If a non-human intelligence can perform the same task more efficiently, it will inevitably be tapped to do that task at the expense of the human who used to do it. But efficiency is not destiny. Indeed, efficiency is only one type of market force. Humans have agency. Humans have purchasing power. Even in the Citrini report, the white-collar labor force isn't out of consumer power yet. If human desire runs counter to efficiency,
as it often does, there's every reason to think that the old maxim that the customer is always right will provide a serious counterweight to the unstoppable market advance of the machines.
Safetyists have long advocated some type of pause to allow us more time to adapt. I think
we might be underestimating the extent to which human consumer preferences will do that all on their own. It's entirely possible I'm wrong, and the forces of the efficiency gospel are too strong to resist. But I'm on hour 30 or 40 or 50, or who knows by the time you're reading this, stranded in who-knows-where, Brazil, with two small kids. AI as information guide has been amazing, but exactly zero times have I wished I could have a more efficient AI to interact with.
What I've wanted was a human being who looked at our situation and decided to break the rules just a little to help us get home. Efficiency is not destiny. And ultimately, and now I'm done reading myself and back to just talking as myself, the thing to note here is just that as compelling as all of these arguments sound, as many holes as there are in one, as many better points in another,
the reality is that we are all just grasping and guessing at a future that we cannot know.
Abundance author Derek Thompson writes, "The level of uncertainty is so high, and the quality and supply of real-world, real-time information about AI's macroeconomic effects so paltry, that very serious conversations about AI are often more literary than genuinely analytical. I feel lucky to have been able to have conversations about the frontier of AI with executives and builders at frontier labs, economists at AI conferences, investors in AI, and other AI folks
at off-the-record dinners, where important truths can theoretically be shared without risk. I can't emphasize enough that "nobody knows anything" is about as close to the reality here as three words are going to get you. Nobody knows what's going to happen this year, or next year, or the year after that. There is no secret cigar-filled room of people who have unique access to some authentic postcard from the future. When you drill down underneath the bluster,
the doomerism, the fear, the anxiety, what's there at the bottom is genuine uncertainty, a vacuum into which storytelling is flooding. The frontier labs don't really know what they're building exactly. The economists don't really know how to model the thing they're claiming they're building. I wish more people talked about and thought about the subject through that sort of lens. We're trying to model the economy-wide effects of a technology whose properties the
frontier labs can't even really describe yet. Whatever you think about AI today, be prepared to change your mind soon. In an extension of that post on his Substack, Derek writes that artificial intelligence offers its obsessives a kind of Schrödinger's apocalypse, which exists in a superposition between "the economy is about to change forever" and "from a macroeconomic standpoint, everything still looks eerily normal."
My final reminder for this episode is that in the case of this Schrödinger's apocalypse, it's not just a question of acknowledging that multiple possibilities exist in the box.
I think we need to recognize, at a much more fundamental level, that we have a lot more agency
than we give ourselves credit for to decide and shape which versions of this future come to pass. For now that is going to do it for today's AI Daily Brief,
appreciate you listening or watching, as always, and until next time, peace!


