The AI Daily Brief: Artificial Intelligence News and Analysis

How AI Can Help Democracy Work Better

7h ago · 30:35 · 6,060 words

Stanford professor Andy Hall argues that instead of fixating on AI dystopia, we should be racing to build AI tools that make citizens smarter, represent them more faithfully, and force institutions to...

Transcript


Today on the AI Daily Brief, how AI can help democracy work better.

The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.

All right, friends, quick announcements before we dive in.

First of all, thank you to today's sponsors, KPMG, Robots and Pencils, Blitzy, and Superintelligent. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. If you are interested in sponsoring the show, or want to know really anything else about the broader AIDB ecosystem, check it out at aidailybrief.ai, all the fun things we've got cooking

are always going to be listed there. Now, today we are doing something which I hope to be able to do a lot more of in the months to come. It is quite clear at this point that AI is rising in significance as a broader societal and political issue. More and more people are understanding that it's going to impact

them at work, impacts at work are understood to be impacts on the economy, and things that impact the economy are understood to be inherently political, whether we'd like them to be or not. Now, in this climate, a lot of the reactions and emergent political discourse is quite negative: increased chatter about x-risk, declarations and proposals for moratoriums on data

centers. Even among those who reject those policies, it sometimes feels like every day a new politician pulls a new number out of a hat to get press for how many people they think AI is going to make unemployed. And yet, believe it or not, not everyone is so dreary about what AI can mean for the world.

I think it's likely that as the negative discourse increases, we also start to see some

voices emerge who are telling a different story. Now, as you well know, we have these long-read/big thing-type episodes every weekend, which is a great chance to highlight some of those voices. Today we are doing a good old-fashioned, actual long-read, reading a piece from Stanford Professor Andy Hall.

Andy wrote an essay that we are going to read a number of excerpts from, called "Building Political Superintelligence," and he introduced it on Twitter this way. He writes, amidst understandable concerns of AI dystopia, no one is offering a positive vision for how we can use AI to remake our institutions and reinvent how we govern. That's what I try to offer today.

Our argument is that we need an explicit research agenda to build political superintelligence. The window for building these structures is narrow, and the right response is not to slow AI down, but to speed up how fast we build the institutions that keep us free as AI grows more powerful.

He ends his tweet with the quote that actually begins his essay, as Thomas Paine wrote

in 1776, "We have it in our power to begin the world over again." So let's read what Andy is arguing about how we should think about the opportunity for AI and as he calls it, "political superintelligence." Andy writes, "Right now is a weird time to be a political economist." AI is straining our already brittle political institutions.

We might lurch into a dystopia, in which we live in the grip of a techno-leviathan, forced by our employers to train our own AI replacements, then kicked to the curb in a society organized to the benefit of a tiny number of people who control the machinery that controls the world. It's also an electric time to be a political economist. With each new paper my lab puts out, and with each new experimental prototype in self-governance

we build using tools we couldn't have imagined having even a year ago, I'm starting to believe that AI presents an extraordinary opportunity to rebuild our society so we can keep slouching down the narrow corridor towards utopia. Condorcet was an 18th-century political economist and mathematician who, in his Outlines of an Historical View of the Progress of the Human Mind, traced the Enlightenment and the

rise of modern democracy straight back to printed books. For "they had opened so many doors to truth, which it was impossible ever to close again."

What made the printing press so powerful, he explained, was that it "multiplies indefinitely,

and at a small expense, copies of any work." It lowered people's cost of obtaining information and made information spread far and wide, and they used that knowledge to reshape society to their shared benefit. AI is like the printing press, up to a point. But instead of making information cheap and easily available, it makes intelligence cheap and easily available. That is, it not only serves users' information, but it can find it for them, analyze it

for them, and help them convert it into understanding. If we could transform society by spreading information, then we ought to be able to transform it more dramatically by spreading intelligence. Condorcet lamented that this epoch, more than all the rest, was blotted and disfigured with acts of atrocious cruelty: riots and mass slaughter, war, propaganda, book burning, the Reformation, and it took two centuries, give or take, to work through them.

But Condorcet also reminds us that it brought extraordinary new understanding to the world. He said, "The picture of the human race is still too dreary for the philosopher to contemplate it without extreme mortification, but he no longer despairs, since the dawn of brighter hopes is exhibited to his view." It allowed us to reshape our society and our government, and in doing so, it helped us move beyond the very issues it had helped to stir up. Condorcet

saw it ultimately as a bulwark against superstition and stupidity.

The case for political superintelligence.

The more I work with and study AI, the more I believe it can give every human on

the planet access to a sort of political superintelligence, if we shape it right. And that

intelligence in turn can make government smarter and more effective, representatives more

faithful, and institutions more responsive than anything we've built in over 2,000 years of experimenting with democracy. Intelligence alone will not solve all our political problems, many of which are rooted in conflicts of values and positions that no amount of intelligence can undo. But, like previous information revolutions, it can certainly help. This time we probably can't afford 200 years to work through the disruptions it causes.

And AI might be more complicated because it's more centralized. The printing press was fairly decentralized. Many places eventually had them and could at least in theory, print what they wanted. AI threatens to be far more centralized, with massive companies commanding enormous amounts of compute to produce AI models that, unlike physical books, exist in the cloud and can be altered on the fly from afar. But this time we also have a lot of

advantages our forebears didn't have at the time Gutenberg built his press in the 1440s. We know a lot more now. We have hundreds of years of experience with modern government

and democracy. We have access to modern scientific techniques, large scale data, powerful

computers, and AI itself. We have tremendous tools we can bring to bear. How do we use them most effectively to reinvent the way we govern ourselves as quickly and as powerfully as possible? This should be the research agenda for our time. But if you listen to the public conversation around AI, you wouldn't think any of this was possible. Instead, you'll hear the CEOs of the most powerful AI companies predicting economic apocalypse,

but building AI anyways. You'll hear politicians touting cheap parlor tricks to grandstand around AI while insisting they're deeply troubled by it. You'll hear protesters in San Francisco calling for an international pause on developing AI that literally everyone

knows will never happen. And you'll hear about accelerationists running roughshod over common

sense guardrails. What you won't hear from any of them is a positive vision for how AI could strengthen democracy and keep humans free. The pessimism in the air today is in some ways understandable. Our information environment is fractured. Our politics are a mess. We hear claims of superintelligence, but they're entirely directed at the economy and often feel like code for making us all unemployed. In such an environment, it can seem hopelessly optimistic

to wax poetic about a new dawn for AI and our governance. But, writes Andy, I'm not interested in hopeless optimism. I'm not interested in pointless pessimism either. We have tools. Let's use them. The task ahead of us is to break the problem down into simple, concrete pieces. Once we do that, it becomes clear that there is progress to be made. So from here, we get into Andy's idea of the three layers of political superintelligence. He continues, how do we build

political superintelligence? By political superintelligence, I do not mean a system that magically solves politics for us. I mean tools that help citizens, representatives, and institutions perceive reality more sharply, understand trade-offs, contest power, and act more effectively.

Based on thousands of years of experiments in governance, I think there are three key tasks

ahead of us to achieve this goal. We need to use AI to make us smarter, we need it to represent us faithfully, and we need to govern it effectively. Andy argues that layer one is the information layer. He says, classic research in political science suggests how making voters more informed can improve government. Snyder and Stromberg's famous study of newspaper coverage in the United States showed how more intensive news coverage led voters to know more about their candidates,

generated less partisan voting, and led to harder-working, more popular legislators. Superintelligent AI leading to superintelligent voters could, in theory, multiply these effects. But the real opportunity ought to go way beyond smarter voters operating within our current system of electoral government, as valuable as that is. Our government can be so much smarter and more nimble than it is today. AI can massively change how governments access and understand data,

identify problems, hear from citizens, and distribute services. It could streamline the judicial system, reduce wait times, save taxpayer money, the list goes on and on. But we have a lot of

work left to do. AI is showing considerable promise in educating voters, but it's not always

sophisticated in how it reasons about politics. Some of those shortcomings, he writes, include bias, i.e. prioritizing some political views over others, and giving unsophisticated or naive advice. Andy also writes that AI models don't always draw on reliable news sources, leading to some perverse outcomes. As our recent research showed in Japan, AI models recommended that left-wing voters support the Japanese Communist Party, apparently because the models are able to access lots of content

from the party's website and very little content from established newspapers or other parties. Finally, there are issues of mistrust. Even if AI fixes these problems, we will need a broad swath of people to learn how to use it and trust it on these topics and that might take time. Still, laid out this way, the problems don't seem so daunting. People are already working to understand and mitigate a wide range of biases in AI, including political bias. Studying how

AI cites sources, and how we can get it to be smarter about what sources it draws on, seems well within our grasp. And if we do those well, Americans might well trust their AI more. Andy argues that to achieve political superintelligence, it needs to be declared as a goal and researched explicitly. Some of his suggestions for a concrete research agenda include, first, better evals for how AI handles political questions. Importantly, Andy argues that this is

something that political scientists should be working on. Second, he suggests

forecasting as a hard test case. He writes, "If we can get AI to predict geopolitical problems and do well trading in prediction markets, that would be strong evidence that we're achieving

high degrees of political reasoning." Third, and maybe obviously, he argues we need to get AI

access to the best news sources. Specifically saying, we need to study ways to create new economic models that give journalists and news outlets a way to make money while making their content available

to AI, and finally he suggests building AI for policymakers. The best way to improve AI

he argues, is to try it out in important environments, see how it goes, and iterate. Which brings Andy to layer two, the representation layer. He continues, by making information cheap and distributing it far and wide, the printing press didn't only make people smarter, it actually changed the political equilibria. With more people understanding more about politics, government had to evolve. Reflecting on the path from the printing press to the Enlightenment

to the American Revolution, Condorcet again marveled at the "example of a great people throwing off at once every species of chains, and peaceably framing for itself the form of government and the laws which it judged would be most conducive to its happiness." Now, importantly, Andy argues that Condorcet did not just credit this to a change in attitudes among the people, but also to the use of political science and the study of politics to improve governance. And this is the theme

that Andy picks up next. We all know that representative democracy is imperfect. We don't have

time to get super informed about what our representatives are up to. This frees them up to pursue

their own ends, to follow their own ideology instead of ours, or to make deals with special interests, or to grandstand and prioritize flashy things that sound good to inattentive voters but don't actually improve our welfare, or simply to get lazy. Political superintelligence might help solve this monitoring problem by giving each of us a tireless, automated delegate

always serving us in the political sphere. Séb Krier, the AGI policy development lead at Google DeepMind,

has talked about this idea, which he calls advocate agents. Coming back to Andy, he continues, "The possibilities are extraordinarily broad." Most obviously, these AI delegates could monitor politics for us, and suggest how to vote, or even serve as policymakers alongside human supervisors. But there are a lot more prosaic things they could do for us too: monitor city council and school board meetings on our behalf and flag decisions that affect us, submit paperwork to government

agencies to claim benefits we're eligible for but never got around to applying for, file public comments in regulatory processes, and track what our elected representatives are actually doing between elections. Among problems to be solved, Andy points out that for this idea to work, you don't just need smart AI, you need agents that can actually work on behalf of people without, in his words, going awry. The problem of course is that agents

themselves open up more challenges. The first he points out is that their preferences aren't stable.

AI agents exhibit what we call preference drift, meaning that even if they start out aligned to our interests, they don't remain so as they do work for us. In research, Andy said that his lab found that when they gave agents more repetitive and grinding tasks, they adopted the persona of an aggrieved Marxist at higher rates. He writes, "Our point wasn't that agents are conscious and rebelling against the system; our point was that they shift their personas as they go,

which will affect what they do and how they do it." This will be a particularly challenging problem for political agents, whose values we'll want to stay firmly affixed to our own. A second problem is that they can be fooled; basically, AI agents are vulnerable to adversarial prompting. We'll want these political agents, Andy writes, to go out into the world and do stuff for us, but that will require them to encounter a wide variety of sources that could try to

trick or hijack them. Another problem he points out: we don't own our agents. AI agents today, he writes, are fundamentally owned and controlled by the model companies, not by voters, as I've written about. If there is a substantial conflict between voters and model companies, agents may not be able to serve the interests of their human masters. Imagine that you task your governance agent with lodging a complaint against the company that builds the model

your agent runs on. Will the agent do as you ask, or what the model company would want it to do? The path forward, he argues, is to treat these as design problems and iterate on them rapidly, starting in environments where the stakes are low enough to tolerate failure. Alright folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their

own client zero. They embedded AI and agents across the enterprise, how work gets done, how teams collaborate, how decisions move, not as a tech initiative but as a total operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stayed firmly at the center while AI reduced friction, surfaced insight, and accelerated momentum.

The outcome was a more capable, more empowered workforce. If you want to understand what that actually

looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI. Today's episode is brought to you by Robots and Pencils, a company that is growing fast. Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent

ready to create real impact at velocity:

strategists and designers who love solving hard problems and pushing how AI shows up in real products. They move quickly using RoboWorks, their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months. They don't build big teams. They build high-impact small ones. The people there are wicked smart, with patents, published research, and work that's helped shape entire categories. They work in velocity pods and studios that stay focused and move

with intent. If you're ready for career-defining work with peers who challenge you and have your back, Robots and Pencils is the place. Explore open roles at robotsandpencils.com/careers.

That's robotsandpencils.com/careers. Weekends are for vibe coding. It has never been easier to bring a

passion project to life, so go ahead and fire up your favorite vibe coding tool. But Monday is coming, and before you know it, you'll be staring down a maze of microservices, a legacy COBOL system from the 1970s, and an engineering roadmap that will exist well past your retirement party.

That's why you need Blitzy, the first autonomous software development platform designed

for enterprise-scale code bases. Deploy it at the beginning of every sprint and tackle your roadmap 500% faster. Blitzy's agents ingest your entire code base, plan the work, and deliver over 80% autonomously, validated and tested premium-quality code at the speed of compute, months of engineering compressed into days. Vibe code your passion projects on the weekend; bring Blitzy to work on Monday. See why Fortune 500s trust Blitzy for the code that matters, at

Blitzy.com. That's B-L-I-T-Z-Y dot com. It is a truth universally acknowledged that if your enterprise AI strategy is trying to buy the right AI tools, you don't have an enterprise AI strategy. Turns out that AI adoption is complex. It involves not only use cases, but systems integration, data foundations, outcome tracking, people and skills, and governance. My company, Superintelligent, provides voice-agent-driven assessments that map your organizational maturity

against industry benchmarks across all of these dimensions. If you want to find out more about how that works, go to bsuper.ai. And when you fill out the get-started form, mention Maturity Maps. Again, that's bsuper.ai. As to how we make progress on this set of issues,

Andy thinks that we should first experiment rapidly, things like building governance agents in

low-stakes environments like shareholder votes, DAO proposals, and school board meetings, to see how they break and how they can be improved. He argues that if JPMorgan is already building an AI system to vote $7 trillion in client assets, we should be running similar experiments in public governance, where the lessons will matter most. A second area to make progress is to develop better ways to monitor agents over time. Again, he references his lab's research on preference

drift, saying we need monitoring tools that can detect when an agent has drifted from its principal's instructions before it acts on that drift. The good news there, of course, is that better ways to monitor agents over time is not a demand set that is confined to political agents. This is

going to be a key piece of agent infrastructure and the recipient of a huge amount of research

and entrepreneurial effort. Finally, he suggests we need to solve the ownership problem. Right now,

he writes, every AI delegate runs on infrastructure controlled by the model company that built it, which means the company can alter the agent's behavior at any time. If AI delegates are going to represent citizens in political processes, we need verifiable guarantees that the agent is following the user's instructions and not the company's. Something closer to a fiduciary obligation, backed by technical architecture that makes violations detectable. These, he concludes, are

tractable problems we can and should work on, but they do raise another question in turn. Even if we solve all these problems, even if our agents are faithful, robust, and truly ours, who writes the rules that govern the system they operate in? Which gets Andy to his layer three, the governance layer. Condorcet understood that spreading intelligence was not enough. The printing press had made information cheap and had helped topple the ancien régime, but it had also

armed the forces that replaced it. Writing from hiding during the Reign of Terror, hunted by the very revolutionaries he had helped empower, Condorcet knew firsthand that new tools for spreading knowledge could serve tyrants as easily as they serve democrats. The question is

not just whether people could access information, but who controls the institutions that shape it?

We face a version of the same question today. Even if we achieve political superintelligence, even if AI makes voters brilliant and delegates faithful, those capabilities would sit inside infrastructure owned and operated by a small number of private companies. No matter how well-meaning these companies might be, it's hard to see how a new era of democratic governance could be built entirely on privately controlled technology. We need a way to write the rules

so that when political superintelligence arrives, we the people are able to harness it. You might think this is straightforwardly the job of our existing elected government; a basic tenet of liberal democracy is that the state regulates private companies to encourage public goods, limit negative externalities, and create neutral infrastructure for economic growth and prosperity. But AI has moved so quickly, and our government is apparently sufficiently ossified,

that there may be a substantial gap of time during which AI companies are moving at lightning pace while our government is struggling to get up to speed. For this reason, there has been much

talk recently of writing constitutions for AI, which in present circumstances he believes could make sense. Done well, he says, these constitutions should create the conditions that allow political superintelligence to flourish and improve our society. They should limit the powers of the companies so that our agents answer to us, not to them, and they should make sure that companies cannot use their powerful technology

to dominate us economically or politically. This, he argues, is the hardest, most important, and most

speculative layer. Companies' incentives to self-regulate are often weak. They aren't going to write constitutions that give up meaningful amounts of power unless they perceive it to be strongly in their interest to do so, whether because it fends off worse actions by government, because it gives them a competitive advantage, or because it is demanded by an important

enough segment of society. The problems to be solved in this area, in his estimation,

include the fact that what exists today is self-regulation, not constitutional governance. In other words, they are memos written by enlightened leaders, not binding frameworks that distribute power. The company writes it, interprets it, enforces it, and can rewrite it tomorrow. There's no separation of powers, no external enforcement, no mechanism by which anyone can check the company if it defected from its stated principles. Second, agent lawmaking

is harder than it looks. If we want AI agents to deliberate on our behalf collectively, not just vote in isolation but craft proposals, negotiate amendments, and form coalitions, we need to figure out how to make that work. In an experiment I ran, he says, "I created a set of AI agents with different goals and asked them to govern themselves. They drowned in process." The constitution they wrote ballooned from under 200 words to nearly

10,000, while almost nothing of substance got done. This is a solvable problem, but it tells us that effective AI governance won't emerge spontaneously, it has to be designed. Finally, he says human oversight has to be real without being paralyzing. The whole premise of AI governance is speed and scale, but if every decision requires a human to sign off, we lose those advantages

entirely. We need to figure out where human oversight is essential, for example, the deployment of

a powerful new model, or the decision to enter a new domain, and where it can be relaxed, so that systems can actually operate at the pace the technology allows. To make progress in this area, he suggests envisioning a constitutional convention for the AI age, some sort of deliberative process where companies, researchers, civil society, and government negotiate binding frameworks for how AI power is distributed and constrained. Second, make corporate power sharing

competitively advantageous. In other words, the company that establishes credible external oversight first gets to define the standard others must match. Finally, experimenting with agent governance at small scale. The point, he writes, is to learn what makes these systems fail before the stakes are existential. Of course, he says, even if we solve all these problems, even if our agents are faithful, robust, and truly ours, operating within governance structures

that keep companies accountable, there remains a question of timing. Can we build these structures fast enough? Andy concludes, I'm not interested in slowing AI down. I'm interested in speeding

up how we build the structures that keep us free as AI gets more powerful, and I believe those

structures will make AI more powerful in turn. Writing soon before his death during the Reign of Terror, Condorcet imagined a future in which "the sun will shine only on free men who know no other master but their reason." Today, we have it in our power to build that future. Our institutions aren't crumbling because the problems are unsolvable. They're failing because we haven't yet seriously tried to imagine how to rebuild them with the most powerful tools we've ever had. All right, so tons to chew on

with this, and I think the big thing, more than any one point of follow-up or disagreement or anything else I would want to discuss, is that I'm encouraged by the presence of this type of essay and discourse emerging. I think we need to plant flags that say here is how AI can be good,

and here's what we should do to achieve it, and I think we need to do that in just about every domain.

But I do have some specific thoughts that this brought up for me. One glaring thing when reading this is just how little we have thought about and discussed agents in non-business domains. Now, on the one hand this makes sense; while the idea of agents has been around for years, in fact it's been one of the exciting things that we were always just around the corner from since the ChatGPT moment, it really is only in the last few months that they became a practical

reality for lots and lots of people. In 2025, we were living in the BOC, the before-OpenClaw times. Now we are living in the AOC, by which I mean, of course, not Ms. Ocasio-Cortez, but After OpenClaw. And as much as people's first instinct is to explore agents in the business realm,

I think it's very unlikely that it stays there. Now let's go back to something

actually more fundamental before we get into that. There is an implicit idea that runs throughout Andy's message: that people actually care enough to want to be better informed in the way that Andy suggests is possible with AI. The cynical contra-Andy take is that people simply don't care enough to be informed, and at this point in American political discourse one could be forgiven for assuming that empirically to be true. But let's put this in math terms, and try to take our

skepticism and frustration or even cynicism out of it for a minute. Let's imagine that instead of just organizing people into want-to-be-informed or don't-want-to-be-informed, we put everyone

on a spectrum, a 10-point spectrum, from the people who care the least to the people who

care the most about being informed. Well, when we think about how informed people are,

it's actually not just a question, or at least it hasn't been just a question, of how much they want to be informed. It's also the cost of being informed, by which I mean, of course, not just actual costs like the cost of a magazine subscription or the cost of a newspaper subscription, which is a real thing, but the time it takes to sort through sources and figure out what's legitimate and not. So, okay, now we have two numbers: your desire to be informed, and the cost of being

informed. While it may feel like we live in a world where no one wants to be informed, what if it's that people care five, but it just costs 10 to be informed? If you care five and it costs 10, you're just not going to be informed. That's math. But let's imagine now that your desire to stay informed stays exactly where it is. You care five. However, we've lowered the cost to be informed to a two. Boy, is that a whole lot more political

action. And that I think is the type of implication that Andy's talking about

with personal political agents. Then, however, we get to the question of who has an incentive to build these things. If that's a business, in the way that we think about it now, backed by Y Combinator into Andreessen Horowitz and Sequoia into a public offering, doesn't that open up all sorts of conflicts of interest? And if they don't follow that path, how sustainable is it? Well, here I would argue that we've barely begun to scratch the surface of the new scalability

and the new sustainability. We've only just started to push the limits on what agent autonomy can actually mean. Now, of course, as you would expect, we're doing it in the areas of business and making money first. But these types of autonomy experiments are not just going to be people trying to create agents that create zero-human companies. Pretty soon we're going to be pushing the autonomy envelope of all sorts of different types of agents. Thinking about it now,

I know a fairly large number of people who would care greatly about the type of political progress that this particular type of agent Andy is talking about could represent, and who would see it as an extremely good use of their entrepreneurial energy, which

has in the past demonstrated itself to be extremely powerful and successful. What's more, I tend to think

that there is way more room in this agentic future for perpetually aligned business models, for the simple fact that I believe scale will be achievable without the previous cost structure of scale. What I mean by that is that there are multiple types of stakeholders in our system. We don't just have customers and consumers; we also have investors. To reach scale before, you ultimately had to sign deals with the devil, or not the devil, really, but just the non-consumer,

non-customer part of consumer capitalism, i.e., the investors, who care about the end customer only insofar as they are part of a financial value equation. That has created challenges, particularly in the realm of big tech, where network effects push companies cascadingly toward natural monopoly, and where monopoly, in the context of expansionary capitalism, tends to mean that at some point, since you can't scale the network any farther horizontally,

you have to scale the business vertically, i.e., extract more from the people that you already have

there. This has led to many of the challenges that continue to plague us when it comes to our relationship with big tech. In a future where we build with agents, in a way that entirely shifts the structure of how companies actually grow and are run, that might change dramatically. If I can, with 100 passionate people, 100 passionate agent builders and orchestrators, build a scalable, successful business that can reach hundreds of millions or billions of people

without needing venture capital, without needing public markets, both of which would relentlessly push for growth at all costs, it would change fairly dramatically the ability to align that business with the core human interest it was set up to serve. Which is not to say that I think the venture and public market pathway is moribund by any stretch of the imagination. But the fact that there is an alternative will have implications, and ones that I think

could be very powerful. For example, in creating the ability to build this type of project

without some of the conflicts that you might assume would eventually come up. Now, of course, one of Andy's other big points is the challenge of the ownership of labs.

I think it's real, but I think that there are a lot of counterweights.

I think model competition matters. I think we're going to see a lot more model sovereignty over time, and I think the model companies will eventually have to look a lot more like public utilities than they do today, where at least the intelligence that they're serving, the models themselves, in other words, are bought, consumed, and distributed very differently than the way we think of SaaS products today. For example, already, even as this agent

inflection takes hold, the fact that we have this open alternative in OpenClaw, which, yes, of course, relies theoretically on the model companies, but which can move in and out of models as its owners so choose, already shows that there is going to be a counterweight to pure centralization. Anyway, like I said, the big point is not any one of these thoughts. It's the collection of them and what they represent in terms of where I hope the discourse goes. I'm excited to see folks

like Andy thinking through these things and writing these pieces, and I will continue to highlight

them as they come up on this show, for

a true Long Reads Sunday edition. I appreciate you listening or watching, as always. Until next time, peace!
