Hey folks, look, we come on here and we do a podcast and talk about world events...
I don't like to brag, I like to toot my own horn.
I am influential in a variety of spheres.
And one of the spheres that I think gets short shrift is obviously fashion.
Am I an Anna Wintour, do I drive the trends? Am I a Ferragamo, is that a person who does this? I don't really know, I don't know, but the point is people look to me for wardrobe choices, not just color, but like how their pants should fit. And especially this time of year, as the seasons are changing, you have got to refresh.
Quince is what's going to help you refresh your wardrobe and bring out the spring in
your person. They've got all of the essentials. And by the way, we're talking about 100% European linen. This is high quality, premium material, stuff built to last.
The prices are like 50 to 60% less than similar brands, and I'm going to tell you how.
They work directly with ethical factories and cut out the middleman, so you're paying for quality, not just brand markup. Refresh your wardrobe with Quince. Go to Quince.com/tws for free shipping and 365 day returns. It's now available in Canada too. Go to q-u-i-n-c-e.com/tws for free shipping and 365 day returns. Quince.com/tws. So ladies and gentlemen, welcome. My name is Jon Stewart. It's another Weekly Show podcast
on this Earth Day Eve. Is that a day we celebrate? I love Earth. I can't wait. The pitter-patter of little feet at 6 in the morning running downstairs to open up the Earth Day presents. And as this glorious Earth is being celebrated, while simultaneously being destroyed on the back end of it, I thought it would be appropriate not to worry about Iran, not to worry
about climate change, but to worry about a third existential threat, which is AI. Artificial
intelligence. It is happening, people. And it's about time that we had a sober conversation about its deleterious effects, but also its opportunities. And so we're going to go straight to the source. We're going to go to two brilliant MIT economists. They're going to talk to us a little bit about the possibilities of AI, the collateral damage of AI, and the various ways we might be able to mitigate that. So we're just going to get right into it with
those cats right now. Here they are. Folks, we're going to break it down today in terms of the AI revolution, and what will be the repercussions for the American people, the American worker, the world writ large. Who do you go to for this kind of thing? You go to the experts, you go to the brilliant people. You go to Daron Acemoglu, Nobel laureate, and I don't throw that around, Nobel laureate in
economics, professor at MIT, and David Autor, Rubinfeld Professor of Economics at MIT. Guys, thank you so much for joining us today. Oh, our pleasure. Absolutely. Thanks for having us on. David and Daron, I am beginning to get increasingly discomforted by the speed at which AI seems to be infiltrating not just sort of the popular consensus in culture, but the workforce. So I want to ask
you guys, what is our timeframe as this technology develops? When are we going to really feel the full effect of this new technology? You're just beginning to get worried about it now, Jon? Come on, you know me. You know me. We know each other. No, I've been, you know, I'm worried about everything. So I'm very worried about this too. Not about the timeline, because the timeline is so uncertain. It's hard for me to worry about something that's
so uncertain, but with all of the consequences, I think we are definitely not ready for
AI. The workforce isn't ready for AI. We don't know what it's going to do. I think the people who are really not ready for AI are the students, whose learning is going to be affected in so many different ways. We have no guardrails, no ways of ensuring that students are actually learning how to learn, so that they can actually become experts in anything in the age of AI, when they can get a lot of answers from AI. So there
are just so many things to be concerned with.
Because, yes, they'll have AI. What will they need to learn? What will we all just be?
If they don't need to learn anything, then they're just not needed as workers. And we don't want to be in that scenario, right? So we do need people to have expertise and mastery. And
I do think AI has both potential and risk, right? And I think Daron will talk more about
the risk. So I'll probably talk more about the potential. And let me point out that although I do not have a Nobel Prize, around here at MIT it's more distinguished not to have one than to have one. David, can I tell you? I love how you've set yourself apart from your colleagues. Exactly. By not getting a Nobel Prize. Exactly. Someone's got to stand out.
You know what? The idea that you have that rebellious spirit at MIT, to go against the grain. Exactly. And not get a Nobel Prize. Well, then let's start with that. David, the real concern is, look, and let's step back for a moment. Yeah. We talk about disruptions for workers over time, you know, industrial revolution, globalization. Those were sort of the dynamics that really impacted workers. But those took place over time. So David, you're going to talk
more about the potential. Talk us through the previous disruptions and how AI fits into those
paradigms. Sure. So let me first say, just to bring it to the present, what we should be
concerned about is not running out of jobs per se, but having jobs where expert labor is not needed. So a future in which everyone is, like, carrying the box from the UPS truck to the front door is very different from a future in which everyone is doing medical care. Right. So it's not the quantity per se, but whether specialized human labor is still
needed. I think it will be, but it really matters whether we are replaceable, whether we are,
you know, all, you know, kind of redundant versions of one another, or whether we have, you know, real added value in this economy. Now, we've been through lots of technological transitions. Some have been much more traumatic than others. The industrial revolution was very much so:
there's a 60-year period that people refer to as Engels' pause in the early first industrial
revolution where, you know, productivity was rising rapidly and yet working class wages were not. And artisanal labor, these, you know, people who had spent their lives developing expertise in weaving and so on, they were just wiped out. And it took decades before there was actually need for specialized labor again. A lot of, you know, who worked in those dark satanic mills, it was basically unmarried women and indentured children doing dirty, dangerous, unskilled work. And it took
decades, really into the late 1700s... I'm sorry, excuse me, late 1800s, I'm sorry, until we started to see this. This is why you don't have the Nobel. I know, that's what held me back. You've got to know the right century. That's right, until we actually started to use specialized skills again, where people needed to follow rules and needed to master tools and their expertise was really needed. And so that was a very traumatic technological transition. And eventually
we came through it okay, but most of the people who were there at the outset did not. And a lot of these transitions, they, you know, young people adapt to them usually more successfully by choosing different careers. People don't make big career transitions in mid-adulthood. They don't go from being, you know, a steel worker to a doctor or a programmer to a nurse. And so those transitions are kind of generational. And so when it moves really fast, as it did in
the era of the China trade shock, for example, people just get left behind. Places eventually recover, but individuals much less so. And you know, it's very interesting, and Daron, maybe we'll ask you. We're talking about specialized labor, you know, and David is talking about the craftspeople who knew weaving and those things, and they're replaced by automation and these kinds of things. Manufacturing jobs that were replaced in the China shock maybe weren't considered
as specialized, but still blue collar. Is AI going to bring about those same disruptions, but in what you would call like white collar labor, or less specialized knowledge and more administrative
knowledge. I think it certainly will. Right. The time frame is unclear. Just to add to what David said,
you know, this kind of experience is not a distant one. As David's own work shows, the China shock, when it led to cheap imports coming in and destroying parts of manufacturing, had the same effect. You're talking about the 2000s when they were, yeah, when China was admitted into the WTO. Yeah, starting in the 1990s, but especially in the 2000s. But really after 2000. And robots, at a much smaller scale, had exactly the same effects: huge increase in productivity for steel,
electronics, cars, but blue collar workers lost their jobs, and many communities hit by the
Chinese import shock were thrown into recession. And the same thing can happen if there is very rapid displacement of white collar jobs. Now, the timing is very unclear. There is a lot of hype and a lot of reality to the capabilities of AI models. So far, we're not seeing mass layoffs. We may be seeing some slowdown in hiring. It's unclear. And white collar jobs are less concentrated geographically compared to, say, textiles or toys, the things that were affected by Chinese imports,
or cars, definitely, or steel. But the number of jobs in white collar occupations is high. So there could be a lot of people who lose their jobs. Now, the thing is that despite the tremendous advances in AI over the last eight months or so, these models are not yet able to do the whole occupation for many of the white collar jobs. That may be to come,
or it may take a while. That's why there is so much uncertainty. But uncertainty is a very bad
reason to be complacent. David, you know, the story that those that are behind AI tell us is very different. You know, when the people that are creating these AI models talk, they talk in utopian terms: we will be freed from the burden of toil, we will paint and write poetry, even though AI is probably going to do that as well. But when they talk to their investors, they speak very differently. And I want to ask you about a quote that I heard. There was a gentleman who was talking
to his investors about AI, and he said it will allow you the benefit of productivity without the tax of human labor. He referred to human labor as a tax, as something that a company wants to avoid paying to retain productivity. That's what worries me, is that, you know, we talk a lot about this and
it's always framed in terms of productivity. So, wouldn't you like to be freed from your podcast
thing? Yes, man, I've been toiling in the podcast mines for... I'm getting podcast lung,
it's a terrible, it is a terrible, crippling addiction. Yeah, so, you know, most of us are, you know, both workers and consumers, and we're not going to be able to consume if we're not working. But, of course, from the perspective of a firm, right, they want their customers; they'd rather not have their workers. Right, with labor, you know, economists will tell you this: labor demand is derived demand, right? It's not, it's not that firms want labor. Explain that,
derived demand, what is that? Yeah, they, they want to make stuff, right, and usually making stuff requires, you know, space and, you know, electricity and stuff and people. But if they could make it without the people, they would be just as happy. It's, you know, it's like Spinal Tap, you know, you have the sex and the drugs, they could do without the rock
and roll, right, you know, but of course, people have always been necessary. So, although firms
have always had this fantasy that they could fully automate, they'd never been able to do so. And often, it turned out not how they expected, right? So, during the, you know, the era of numerically controlled machines, they thought they would de-skill and replace workers; actually they turned, you know, manufacturing workers into programmers. So, it doesn't always work out the way that firms expect it to, but it may this time. There are many, many more things
that are subject to AI automation than were subject in the previous era, because AI has a whole new set of capabilities, right? Previous computers could do routine tasks. They could follow rules. Rules specified so tightly that a non-sentient, non-improvisational, non-problem-solving, non-creative machine could just carry them out without having to understand what it's doing. That really limited the set of activities that we could subject to computer programming.
But now AI learns inductively, right? We enter unstructured information. It infers rules. It, you know, solves problems without our even understanding how it's solving them. That allows it to enter many, many new realms. Now, let's, you know, to make this very concrete,
it's useful, I think, to contrast two occupations, one that people talk about all the time
and one they should be talking about. Okay. So, the one they talk about all the time is long haul
truck drivers, right? There are about three and a half million of them in the United States, and they say,
you know, they're going to be replaced by autonomous vehicles. That is a problem we can handle, because it's going to go very slowly, right? The day that, let's say, Elon Musk announces tomorrow he has a self-driving truck, and let's just pretend we believe him, and it's, you know, been operating for years and it totally works: we're not going to throw all our trucks in the Atlantic Ocean and buy new ones tomorrow. It's going to take decades to replace all of that
capital and all the infrastructure. So that's going to be a slow transition and labor markets can
deal with transitions that happen at a couple of percentage points a year, because people retire and
new people don't enter. That's manageable. You're saying if it takes place over a generation,
absolutely, then that's something that, even though it will be disruptive, won't be catastrophic. Exactly. Okay. Now let's think of call center workers. There are about as many of them in the United States as there are long haul truckers. They're paid less, they're primarily women, but there are just as many. Those jobs can go very, very quickly, right? Because, you know, automation can encroach rapidly. And then, until they all go, the ones that remain
will actually be more specialized, they'll be at the top of the queue, right? When the AI says, I give up, you will be handed over to, you know, the last 20 people standing. So rather than 20 people,
five people will handle what's left of the human tasks. Exactly. That need to be handled.
And let's just say, that's a mixed bag, right? Those will be better jobs. They'll be higher paid. They'll be more expertise intensive, but there'll be fewer of them, right? And we'll see this
in language translation. We'll see this in call centers. We may see this in software as well, right?
Software will bifurcate. We'll have, you know, a small number of people who, you know, build AI models, who run data centers, who run enterprise software, and they'll be highly paid and highly specialized. And then we'll have infinite vibe coders, right? And they'll be like Uber drivers, right? You'll call them up to write an app for you. There'll be a lot of them; they won't be highly paid. So we're going to see a bifurcated impact. But work that is just,
that is fully cognitive work, right, is much, much more vulnerable, can change much more quickly. Eventually robotics will also, you know, more and more enter the physical realm, but that's still, you know, some ways off. Ground News, it's this website and app. It's designed to give readers a better way, an easier way, to navigate the news. You know, if you go on the algorithmic, the Twitters and the things, or the weaponized
news organizations or the websites, you don't even understand how they're manipulating your world view and how they're getting past the reptilian barriers that you have towards polarization and
and all those different things. Ground News gives you the information you need to be able to battle
that. It pulls together every article about the same news story from all outlets all over the world and puts them in one place, and not incentivized for, like, the worst, most hostile, most partisan take. It tells you where it's coming from. They show you how reliable the source is and who's funding it. Who's funding it? Follow the money. Know who's behind the headline. Telling you, man. The Nobel Peace Center has even mentioned Ground News as an excellent way to stay informed.
Nobel Peace Center, that's, I think, the one that Trump started. I think it's the 3D-printed Nobel Peace Prizes that he just hands out. The platform is independently operated, supported by its subscribers, so they stay independent and they stay mission driven. They don't get sucked into the slop.
If you want to see the full picture, go to Ground News. They can help you cut through the noise and
get to the heart of the news. Go to groundnews.com/stuart. Subscribe for 40% off the unlimited access Vantage subscription. Discount available only for a limited time. This brings the price down to like $5 a month. That's groundnews.com/stuart, or scan the QR code on the screen. But so let's talk about that, Daron. You know, when we talk about the sort of two areas of work, which is the human expertise that needs to be done, and then physical work where robotics do,
everything is moving in that direction. AI feels like it's strip-mined the entirety of human accomplishment. You know, the 10,000 years that we have spent developing these areas of expertise, these areas of knowledge, the kinds of things that made us feel relevant to the progress of the human condition. AI comes in and six months later goes, okay, what else you got? What else are you going to feed me? And then it starts to move forward. Are you confident that... So what David's
talking about is already a reduction of the human workforce. Is that the thing that you are most
concerned about? Or is it the eradication? Yeah, reduction is first and eradication is later,
and in the process wages may be stagnating or even declining. And, you know, everything David said, I agree with it. But there's one other thing to add. Again,
it's a wild card, because we don't know how quickly these AI capabilities will improve and how
quickly they will be adopted. But all of our earlier examples of displacement, which, as I said and David said, haven't been so good for workers, such as during the first 80 years or so of the British industrial revolution or during the China and robot shocks, they were confined to a few occupations. Even then it was very hard for people to relocate and get jobs and for newcomers to find jobs. But you know, weavers during the British industrial revolution, once power looms came in,
they lost about two thirds of their earnings. But they could then become unskilled factory operators. Blue collar workers went to construction or other things, or some of them withdrew from the labor force. If Dario Amodei or some of the other people who are most vocal about the capabilities of these models and what they will do to the workforce are correct, there are going to be many sectors at
the same time being hit. So yes, if the rest of the economy was booming and 3.5 million
customer service representatives were laid off, we could find other jobs for them, perhaps with somewhat lower pay. But what if all occupations are going in the same direction? That is Armageddon. Now, I don't think that's going to happen any time soon. David just sighed. You said, you said Armageddon and David sighed. I will let David... I mean, that's not going to happen
any time soon. But I think we have to be prepared for it, because some people are saying
that's going to happen in the next two, three, four, five years. Either those claims are leading trillions of dollars of investment which are going to come to nothing, or there's going to be a grain of truth in some aspect of it. But either way, we have to be prepared for that. Now,
displacement is real. So you're talking about either this is a financial bubble, where an incredible
amount of capital is being poured into a technology that ultimately will be a bubble that, you know, resolves nothing and is not worth the investment, which causes a kind of financial catastrophe, or it's real and it causes a personal human labor catastrophe. Is that it? I would, I would say I'm somewhere in between. I think the speed at which this happens will be much slower, which will then lead to a lot of money being lost, because the investments need to be monetized
and they need to be monetized soon if these investments are going to pay off. So I am in the middle.
I think that these capabilities will come at some point, but not as soon as these
investments are being motivated by. But I am uncertain enough that, either all of it being a bubble or all of it happening within the next five years, can I say with good conscience that's a zero probability event? I cannot. I mean, so many technologists are saying, look, in our labs, we have these even more amazing models. I don't believe it. I don't believe it, but I can't say, oh, that's certainly wrong. How do you, how do you, I mean, test these hypotheses?
I cannot. We cannot. Nobody can. We can't do it, because they're all based on what's going to come next year and we don't have access to it. So everything we're doing is, we're looking backwards. We're looking backwards, but not forwards. David, you were going to say? Okay. So first, I don't think that the success of AI companies and the value of investments entirely depends on them displacing labor. If we just got much more productive, that would also pay off, right? So if we
got more efficient in health care, if we got, you know, better at transportation, if we did education better. So it doesn't all have to come from just throwing people out of work. And it's also
important to remember that although these transitions have been wrenching, we're infinitely
more wealthy than we were 200 years ago. We are much better off. None of us wants to live the way we did then. But obviously, if you look in certain... you know, I don't think the rust belt would say, yeah, globalization was great for us. No, no, they're not starving, right? They're, well, generally not starving. Look, I don't mean to be unsympathetic. The standard of living almost anywhere in America,
including the least privileged places: people have indoor plumbing, they are not food deprived by and large, they have some access to education, they have some safety. It's much better than conditions in pre-industrial England, you know, 250 years ago. So I don't think... so although, although there's
always cost, and I don't mean to minimize it, the transitional costs are real and enormous, and the beneficiaries are not the same as those who are harmed, so it's not like it all evens out.
But we shouldn't only be sentimental about what will be lost. We should also recognize the opportunity
to accelerate science, to improve, you know, our adaptation to climate change and energy generation, to improve, you know, medicine, to do education better. We might do it worse; we could do it better. To distribute more of the world's wealth to more of the people in the world. I actually think artificial intelligence, like mobile telephony, can be potentially beneficial to the developing world, by increasing self-sufficiency, by giving access to expertise in engineering and
medicine, you know, that is not readily available. So can I just jump in there? Please. Because David and I have been studying these things together and separately for the last 30 years, and almost everything you'll hear from David, I agree with. And most things you hear from me, well, David probably would disagree with. But anyway, there is one place of disagreement
between me and David, and David put his finger on it. So let me expand, because I think this just
again underscores the uncertainty. So David and I completely agree that there is a potential to use AI in what we call a pro-worker way, meaning you make workers more productive. They become better at their jobs. They gain additional expertise. They start performing new and more important and interesting problem-solving tasks. The place of disagreement between me and David is that I think that direction requires a complete change in the focus of the industry, and we
won't get it on their current path. The current path is very automation focused. Whereas I think David thinks, well, whatever the companies do, somehow better things might come out. So I think you're more optimistic about those productivity gains that could then create meaningful jobs.
I think we really are squandering that opportunity. That opportunity is there, but we're squandering it.
And that's the most important reason why I love being on shows like yours, where people listen to you,
as opposed to what I say, because I think we need to change the conversation. The conversation shouldn't just be about the doom and the gloom or the amazing promise of AI. It should be about: are we actually using these models, these capabilities, for the right thing or the wrong thing? That's the main conversation we need to have. Let me mediate the dispute between you two, before it turns physical. I don't want it to get there. I don't know how close you are to each other's work. I know, I've seen
a lot of fist fights on this podcast. That's exactly right. And I think things can get out of control. And if we want, if we need to take it to the octagon, we'll take it to the octagon. I don't
have a problem with it. But I think what we're talking about is sort of two separate things.
So I want to see if we can tease those out a little bit. You know, you said a phrase, Daron, that I think is interesting. You want to make it, you said... worker. Pro-worker. You said pro-worker. What David is talking about, I think, is sort of the patina over society, that these advances allow us to fight diseases that we didn't used to be able to. And it's true, it's pro-consumer to a certain extent, but not necessarily pro-worker. So I guess
David, what I would say to you is, generally, those that are deploying these new things are not concerned about being pro-worker in any way. Now, the increase in productivity may help;
you know, they always say a rising tide lifts all boats. And I always say, unless you don't have a
boat, in which case it's just water and you're treading it. But so, the people... it's sort of like globalization: what they learned was that capital travels and labor doesn't. So if I can find ways to pay workers less or to give them less safe working conditions... So globalization was by no means pro-worker for workers that were accustomed to more first world conditions. But if you were a worker in the global south, those investments were wildly pro-worker because they improved
your conditions. So how do we tease out what we mean by pro-worker and the standards of society that we're talking about raising? So Daron, along with our colleague Simon Johnson, also a Nobel laureate, further increasing my distinction in not having one, just wrote a paper on pro-worker AI, and what we mean is, you know, tools that extend the usefulness of human expertise and the range of things that we can do, give people new things to do, things that they
didn't do before. And let me, you know, say what we mean by new things to do.
Like, there are a quarter million data scientists in the United States
right now. They earn about $120,000 a year at the median, and those jobs didn't exist 20 years ago. Now what does a data scientist do? A data scientist is someone who basically deals with... we have enormous amounts of data, we have enormous amounts of computing power. How do we process that? How do we organize that and make it accessible? The data that we have on the internet is so complex. You know, it's video, it's text, it's images, and data science is all about how you
use that constructively. We had statistics, but we had no tools for dealing with data like that, and now there's tons of expert work. And a lot of new work, a lot of where the value of human work comes from, is demand for new forms of expertise. Like, so, you know, we've had electricians and plumbers for a while. Now we have solar electricians and solar plumbers. They're people who do those fields, but they're specialized even further. Much of our medical work, right,
you know, we didn't have pediatric oncologists 50 years ago, right? Or even, you know, someone who's, you know, a fitness coach, that's also a new form of work. And often that creates demand. It creates specialization. People earn a premium for that. It needs to
keep moving, right? And so expertise is always being devalued by automation and then reinstated
by new ideas, new creativity, and new opportunity. And so both of those things happen, but we have much less control and predictability about the new work. It's easy to predict what will be automated. It's hard to predict how much new work there will be and where it will occur and,
most important, who will do it. Most of the new work of the last 40 years has been for people with high
levels of education, and the majority of American adults do not have a college degree. It's only about 40 percent. And college graduates have done fine for the last 40 years. It's the majority of people who are not college graduates that we should be concerned about. And so, in our view, pro-worker AI in particular is AI that enables people without as many elite credentials
to do more valuable medical care, to do more programming, to do more legal services, to do
contracting, skilled repair. And we think there's opportunity there, but I agree with Daron, there's no guarantee that that's where we're going, that that's where tech firms or even where the market is pointing. Now I'll say, I don't think, with some exceptions that I won't name, I don't think most of the tech bros are evil. I don't think they mean to do harm. All right, now you and I are going to have a problem. But I don't think they... they don't really know how to control this. Right, they don't,
they don't if you told them if you said you know Dario this is how you make pro worker AI.
I think he would be very interested in that. I honestly don't think he knows. I thought we said that.
I don't think he knows what that means precisely. But are they even interested in that? You know, I'm curious what you guys think. You know, no, they're not interested, Jon. They're not interested, right? They're not interested because they've been locked into this AGI, artificial general intelligence, craze, and your chops in this industry are measured by how close you can argue you really are going towards the sort of AGI. And AGI, if you take it seriously... hopefully, I don't think we have to take
it seriously anytime soon. But if you do take it seriously, it means that these models can do everything, everything, better than the very, very best experts, and then, once combined with advanced robotics that are flexible enough, they can do all the work better. So a lot of economic intuitions are based on what David Ricardo introduced, which is comparative advantage. If you have an advantage in winemaking, fine, you'll make the wine and I'll do the podcasting.
You won't do both podcasting and winemaking, because you have a limited amount of time. Now, if indeed we get to AGI, that framework is out the window, because these models can operate very cheaply and they'll have an advantage over all human workers. I don't believe we're getting there anytime soon, but that is the agenda, and that's the agenda that's driving the industry. That's the problem. Is the agenda AGI in the industry, or is the agenda to own the operating
system of our society? That's where I'm... you're certainly, you know, bringing up where it may go, but some of it does have to do with those that are the owners, Palantir, uh, uh, OpenAI, the owners of these new technologies, and how exploitative they want to be with workers, and also, ideologically, what are they going to do? You know, when the companies were laying fiber optic cables, or the companies were laying electricity, or any
of those kinds of things, there was not an ideological component. But when you listen to the guys that are laying the new pipelines for whatever this society is going to be, they are ideological.
100%, John, you nailed it. You nailed it. I think there's an ideology of AGI; AGI is part of it.
But let me try to illustrate that differently, going back to what David said, which, again...
That part was based on our joint work.
Along with your own work. Your name's on it, buddy. So the capability of using AI with non-expert
workers to increase their expertise, to allow them to do new things, is definitely there, and I think it's the most exciting part. But fighting against that is the ideology and the practice of centralizing all information in the hands of a few companies and a few people. Yes. And if they control that information, and if they want to use it not to make the novices more expert, but to get rid of the novices, get rid of the experts, then you have a very different world.
And that's the agenda. Now, can they achieve that agenda? Not necessarily, because there
are technical barriers to it, but that's what they're trying to do. Yes, you're absolutely right.
So the avocado is one of nature's mysteries as far as I'm concerned. I find it to be very vexing. It's not, I want to say, a vegetable; I think it's a fruit. You know what, you'll Google it. You're right, you probably don't even have to Google it, you probably know it. Avocado Green Mattress. They sell mattresses, pillows, solid wood furniture. What more do you need? And no pits. It's all made from materials designed to support healthier living and more restorative sleep, made without the harmful chemicals. Can an actual avocado say that? Probably not. They only use certified organic, non-toxic materials. Their products are designed to support deep, restorative sleep, so your body can properly recover, reset, and wake up and take on the day. Avocado products are made, not manufactured, and thoughtfully crafted with real materials to deliver lasting comfort and support. Go to avocadogreenmattress.com/tws to check out their mattress and furniture sale. That's avocadogreenmattress.com/tws. Avocadogreenmattress.com/tws.
Okay, so I would make three points. First, you know, you shouldn't take Daron and me too seriously about, like, telling you about the future of AI, right? We're not experts in this. I don't think you should take Dario Amodei very seriously about projecting the future of the economy. He means well, but, you know, it's like people have been telling us forever: we'll run out of work because we're automating stuff. That hasn't happened. Right. I mean, it can happen. But thinking about it mechanically is not the right way to think about it. Second of all,
I don't even think that AGI, when there's AGI, will actually put all humans out of work. Many, many problems are not computational problems. They're political and interpersonal problems about who has control, who has ownership rights, who has the information. You know, if I say today, here's a better way to reorganize MIT, I've got it, you know, I worked it out with my AGI,
MIT will not be reorganized tomorrow. Right. It's a political problem. It depends on whether you have dictatorial powers or not. If they also have the dictatorial powers, then it will be reorganized. Okay, well, I mean, if we also throw democracy out, then we're in more trouble. But David, so, let me talk about it... you know, you made some really good points about the historical precursors, the industrial
revolution and globalization. I just want to make a little bit of a point about human nature. When new technologies come along that are truly transformative, I'm thinking of splitting the atom, right? So you have brilliant people working on splitting the atom. If you split it one way, you can use it to power the world, and if you split it another way, you can blow the world up. Which one did we try first? So when we talk about AI, and we're talking about the technology, it doesn't necessarily have to be transformative in the way that we're talking about theoretically. We can talk about how powerful it is as one of the general tools that humans use to rule over other humans. And I'll give you an example. Palantir comes along with these incredibly powerful AI-generated systems. And what do they do? They suck information out of the system, and then they funnel information about people who are undocumented, and the government then uses that information. It's not just about what it might do. It's about how governments or individuals will use these new powers to game the system and gain advantage over their competitors. Isn't that a
more realistic conversation? Oh, you nailed it. You nailed it. You nailed it exactly, John. So I think, for the next version of our paper... You want to write the paper together? Exactly. I was just going to say, you have to become a co-author on that. Well, what's my cut?
Yeah, the direction of technology is highly malleable. There's the direction you hope for, and then the one you fear. And sometimes we find that the more dictatorial, authoritarian, less democratic we are, the more likely we are to find that feared direction. Right. Nuclear weapons are much more likely in times of war or times of authoritarian control, and nuclear energy becomes much more reasonable if it's subject to democratic oversight. Exactly. The centralization of information, the ideology of AGI, and the sort of meeting of the minds between the surveillance state and the technology are very worrying, precisely because they open those bad doors for us. And anyway, many of the people in the industry
would have no problem walking through those doors head first.
And David, I want to ask you about that, because, you know, you're making really good points about sort of the ways that these new technologies can be used to uplift.
But in my mind, I'm thinking atomic. It's splitting the atom. And are you concerned? Because I think you're... oh yeah, you're more optimistic about where this thing is going than what I'm raising here. Oh, absolutely, I'm very concerned. And I think AGI is, you know, God's gift to authoritarians, right? Right. It's great for centralizing control. It's great for monitoring. Right. Again, I think, you know, if we want to see mass surveillance and censorship at scale, go to China, and they're exporting that model. And we, we've privatized a lot, but we're still doing it. I'm very concerned about that. So I'm trying to emphasize that there's opportunity, not that we're destined to get there. I think we're destined to have a range of outcomes, some of them quite terrible, some of them quite good, and very unevenly shared. And the balance may be toward the bad, maybe toward the good. But if we don't bear in mind that we have an opportunity, we'll certainly squander it. Understood. Absolutely. But I think, I think we also need to,
and this is the first, most important observation that David made, but we also need to have the public conversation that those opportunities exist, and we're not currently targeting them, right? Right. We're currently targeting something very different: mass automation, the surveillance state, a new sort of merger between the security apparatus and tech companies. Those are the things we are contemplating, or practicing, right now. And there's another conversation... While we're having this, I just want to loop back to a point you made, John, a little while ago about sort of, you know, all this stuff on the internet now kind of being monetized. There's a really fascinating book by Maximilian Kasy, who's an economist at Oxford, called The Means of Prediction, right? A play on the Marxian phrase "the means of production." And he makes what I think is a brilliant analogy. He says, look, you know, the enclosure movement in, like, medieval Europe, right? It was when all the common land, all of a sudden, the
lords said, hey, we own that, and we're just going to farm that ourselves. And it may have been actually a more efficient way of farming, but the commoners were just wiped out by this, right?
Well, you could say that AI is, in some sense, enclosing the internet, right? It's taking all this common property and monetizing it, right? All of the stuff we put out there, all our photos, and all of our writing, and all of our movies. And, you know, they say, oh, we're not enclosing it. I mean, it's still there, just where you left it. But of course, you never thought, we're just going to compete with you, right? You never thought the story you wrote would be regurgitated and sold, and you couldn't sell your work anymore. So I do think this unilateral transfer of property rights is a huge thing that is under-recognized, under-discussed. Man, yeah. Oh, yeah, that's so important. But can I, can I add one thing? I agree with David, but it has an additional really bad effect. He always wants to be the black swan. Oh, here we go. Really dark,
so black swan. Yeah, exactly. Yeah, exactly. Go for it. But the kind of useful things that David and I are mentioning, the pro-worker AI that you can do, that really requires very high-quality data. If you're going to build a tool for electricians that makes novice electricians able to perform the expert tasks that senior electricians and the best seasoned ones can do, you require the data from those electricians dealing with the hardest problems.
That data will not be produced unless there are property rights over data, and there are data markets in which people can get the returns for the data that they create. But this enclosure thing that David describes is a data-extraction economy. So it's creating the opposite. Guys, this is blowing my mind. It's something that I had not thought of at all, but what you're bringing up is so interesting. So AI strip-mines the totality of human expertise
and experience, right? So let's look at it in terms of music. You get royalties. If you write a song and somebody uses that song, they pay you a royalty. Or if they find a way to take your melody and put it into their song, you're going to be paid for that. AI is a human-expertise laundering machine. It's basically taking everything that we've got, training itself, in some ways replacing us, but without that royalty payment. And where the royalty payment goes is to OpenAI, or to a Palantir, or to any of these other places. And if you ask them what they're doing with it, they'll say, that's proprietary. Yeah, we're in the Napster era of AI, right?
Remember Napster, where you'd take everybody's music and just, you know, burn it, rip it, and share it, right? That was not viable. We wouldn't have a music industry if we hadn't gotten control of that, right? We wouldn't have Spotify, where we pay royalties when we listen to the songs, small royalties, but we do pay them. But the difference is, in the Napster era, it was the consumers who were doing that replication. Now it's the most powerful corporations humanity has ever seen who are doing it. But this is a failure of property rights, a failure of legislation. People say,
oh no, fair use allows that. Well, fair use never envisioned this, right? And so, you know, who cares what the law said; it's not applicable. We should have been, we should be, changing it, right? You know, people should be compensated, and not just once. They should be compensated as their information is reused. And that's actually a manageable problem. Talk to people at Google who worked on this; they said, yeah, we know how to do that, right? We just don't, you know, we don't have an incentive to do it, but we know how to do it, and if the law required it, we would support it, right? So I think that, by not recognizing that this enclosure is going on, that these property rights are being reallocated... Yes, economics doesn't deal with that. It's reverse socialism. They're taking from the workers, and they're funneling it up to these five individuals. And it comes back to, you know, to torture this atomic analogy: you got the sense that people like Oppenheimer or Einstein were aware of the gravity of what was happening, and through the crucible of war, maybe made
some decisions they might not have made otherwise. In this environment, I don't think Altman... Thiel, Thiel was asked, you know, should the human race flourish and continue to exist? And he took, like, a five-second pause. Let me think about that first. That's a tough one there. So the nuance of what you're both bringing to the discussion seems utterly absent. And you know, you nailed it again: the war conditions. You know, Einstein, who was a pacifist, because he was worried about Germany, the Third Reich, supported atomic weapons, as did several others. And you know what? Silicon Valley is also creating war conditions. The framing of AGI is: either China gets there first and we become their vassal state, or we have to go first. And that's creating this warlike condition. You know, you have to allow us to do anything we want, even the worst things, because otherwise China is going to do them. So that's creating the equivalent of 21st-century war conditions. And Oppenheimer, by the way, spent the rest of his career opposing the H-bomb, and eventually was stripped of his security clearance and died a broken man, effectively, because he was persecuted for trying to control the invention that he was so instrumental in creating. I mean, maybe it makes sense to talk a little bit about what are some policies that we could have.
Yeah, please do. Okay. So I would put them in three buckets. But let me start with one that people call wage insurance. Wage insurance is an idea that actually was experimented with during the presidential administration that reigned from 2008 to 2016. I'm not going to say who the president was, but you can guess. I don't recall, but I think I remember him in a tan suit. Handsome guy. Very handsome guy.
Handsome guy. Anyway, that's all I remember. But, you know, the idea was: look, you lose a job in manufacturing. Say you're making $50,000; you're at $25 an hour. And you can find another job, but it's going to be, like, 15 bucks an hour, right? And not only is that a low wage, but you're like, hey, that's beneath my dignity, right? Like, I'm not going to take that job. So wage insurance says, hey, look, we get that.
We're going to make up half the difference, up to, like, say, $8,000 bucks, for up to two years. Just take the $15-an-hour job; you'll make $20, right? And then you can look for something better. And it gets people back into the workforce more quickly. It's like an earned income tax credit for returning workers. This program was so effective in terms of saving unemployment-insurance money and generating additional payroll revenue that it paid for itself. How, how is that different,
David, than unemployment insurance? Unemployment insurance you get while you're not working. This you get if you return to work. I see. Yeah, I see. And now this needs to be scaled. And it makes up... So I get what you're saying. Yeah. It makes up, in some ways, the difference from the job that was paying a little bit more. That's right. That's what's...
Okay, that makes sense.
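The wage-insurance arithmetic described above can be sketched in a few lines. The 50% match and the $25-to-$15 example come from the conversation; treating the cap as a simple dollar limit on total payout is an assumption for illustration, not the actual program's rules.

```python
# A hypothetical sketch of the wage-insurance mechanism described above.
# Parameters (match rate, cap) are illustrative, not official program rules.

def hourly_topup(old_wage: float, new_wage: float, match_rate: float = 0.5) -> float:
    """Per-hour subsidy: a share of the wage drop (zero if the new job pays more)."""
    return max(0.0, (old_wage - new_wage) * match_rate)

def total_payout(old_wage: float, new_wage: float, hours_worked: float,
                 cap: float = 8_000.0, match_rate: float = 0.5) -> float:
    """Total subsidy over the hours worked, capped at `cap` dollars."""
    return min(cap, hourly_topup(old_wage, new_wage, match_rate) * hours_worked)

# The example from the conversation: lose a $25/hr job, take a $15/hr one.
topup = hourly_topup(25.0, 15.0)      # $5/hr subsidy
effective_wage = 15.0 + topup          # $20/hr, as described
```

Under these illustrative numbers, a $5/hr top-up would reach an $8,000 cap after 1,600 hours, a bit less than one full-time year.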
We're not very friendly, in this country, to people who aren't working. If you're working, that's okay with us, right?
And so it's an incentive to work, something that's subsidizing work rather than subsidizing leisure, something many people can get behind, especially if it's pretty cost-effective. Now, we need a bigger demonstration, right, than what was done, and people like Brian Kovak at Carnegie Mellon University are trying to stand up a multi-state demonstration of this.
I've been speaking with funders, trying to get it going. So that's, like, one really actionable policy. And let me say, this is a no-regrets policy. I mean, I like it. If the automation we fear doesn't come to pass, we won't go, oh, damn, why did we do wage insurance after all? This is just a good idea. It was a good idea 10 years ago; it's a good idea now. So let me pause there and turn it over to Daron for the next idea. Yeah. Well, you know, that's a great policy. I am fully behind it.
But let me say, before I talk about the next policies, I think the most important step, even before the policies, is actually this conversation. This conversation needs to take place much more widely: there are many different things we can do with AI, and it's a choice what we do with AI. That's what's lost in the current media environment. For about 10 years, the entire mainstream media was so excited about the tech barons that they could do no wrong. Now they're talking about, you know, killer robots and doom. Okay, that's a useful corrective. But we're actually missing the most important conversation. The most important conversation: AI is not one thing. AI is a whole spectrum. And at one end of the spectrum, as we've been emphasizing, there are some terrible things. And at the other end of the spectrum, there are feasible things that we can do that are much better. Who's going to decide that? Who's empowered to make those civilization-changing decisions? Dario Amodei? Sam Altman? Peter Thiel? No. I think the democratic process should have a hand in it, and people should become more informed about it. I think that conversation comes first, and then all the policies have to come on top of that. Folks, I don't know if you can hear it in my voice: I'm tired.
Honestly... well, last night. I need a good night's sleep. I always need a good night's sleep. And you know what I could do? I could buy a new mattress, maybe a little princess bed, maybe get a little four-poster thing, throw some mosquito netting on there, spend a ton of money. Or I could do the only thing that matters: get some nice sheets. Some nice, clean, freshly done, comfortable sheets. That's what you need. The Boll & Branch way. The best way to get a better night's sleep is the bedding. Get the nice bedding. You don't want chafing bedding. You don't want... I sleep in corduroy. Who would do that? Makes no sense. You can upgrade your sleep with Boll & Branch. Get 15% off your first order plus free shipping at bollandbranch.com/tws with code TWS. Boll & Branch: B-O-L-L and branch dot com slash tws, code TWS, for 15% off. Exclusions apply. And then there are many policies that we can worry about. Like, for example, in the United States,
we tax labor heavily; we subsidize capital. It's been that way for 50 years. How does that change the incentives? It's become much worse over the last 25 years, and much, much worse with the Trump administration. And how do you think that changes firms' and technologists' decisions? It makes them lean more toward automation, because automation is being subsidized. That's right. So let's change that tax code, and we can raise more revenue, too, because we're just giving a pass to
all capital income. But it's kind of a perpetual-motion machine, because what happens is, when these new technologies come along, capital flows toward them in such massive ways. These trillions and trillions of dollars flow in, building data centers and sucking up water and electricity and money. And then what they do with the profits is they reinvest, not just in their technologies, but in their political power. Oh, 100%. So they take their money and they bring it to bear on Washington. You know, it was a shocking moment for me, at the inauguration of an American president, to see in the front row of the swearing-in, not the people, but the tech companies, with the closest proximity and access to the president. And you know what's worse? We don't even know who owns whom. We don't know which is which. David, you were going to say something, though. Oh, I just want to talk
about another policy. Oh, no, okay, great. I liked Daron's point about changing the tax incentives to even out how we value capital over labor; I think the pendulum needs to swing back. So I think that was a really important point. But let me suggest another policy. Yes, please. Which is what people call universal basic capital, right? So not universal basic income, which is, like, writing people a check every month, but the notion that when people are born, we give them an endowment of capital with voting rights, right? Like shares. And what does this do? Well, one, it diversifies. Most people, you know, their entire income is bound up in their human capital, right? Your income comes from your ability to produce valuable labor. Well, that's a pretty risky bet for anyone, right? Because, you know, the value of labor changes over time. Specialized skills that were once valuable can become worthless. So we distribute... and by the way, you can call them Trump accounts if you want, right? They're already being done. They're calling everything Trump. That's what I'm saying. This is actually The Weekly Show Trump Podcast. That's right. We just add the word Trump. And Daron has the Trump Prize in Economics. That's right. Yeah, just to return to our main theme. But, uh, so what does this do, right? Well, it gives people a more diversified portfolio. It's something they can invest in, right? They can't spend it until they're 18. Second, it gives them ownership rights. Basically, it's like getting a bond when you're born. Okay, like the Alaska fund, for everybody. That's right. That's right. It gives people a somewhat diversified income portfolio, and it also redistributes voting rights. They have voting rights over capital, right? And you could even set it up so that, even if you sell your stocks, you maintain the voting rights. But what
is the voting right? Is it... So the way that I would think about it is, it's reverse... it's Benjamin Button Social Security. So rather than... it's a large fund, and then when you're born... See, that's why you're the comedian; that's good. I just watch a lot of movies. So when you're born, you are invested into this larger fund. That's right. And then the questions come up: well, what is that fund invested in, and how does it grow? You know, it owns shares of the tech firms, for example, right? It owns a piece of the economy, right? And so they get... there are voting rights attached. And that's really important, because there's certainly a risk that labor will become less valuable and capital more so. And if so, we want more people to have ownership stakes. Part of the brilliance of the labor market is that, in a country without slavery and without labor coercion, everyone owns at most one worker, themselves, right? So it's intrinsically relatively equal. But capital is not like that. So the reason why I'm slightly dubious about that is, and I'll tell you why: companies won't even do that for their own employees. No, the government has to do it. It has to be done publicly. Publicly? But the government is going to give away shares of privately owned companies? Or buy them. That's fine. Or buy them. Okay. All right. Yeah. All right. Now I'm feeling a little better. But here's the problem. Here is the problem. I can completely agree with David that that would be a nice addition to a functioning labor market. Yes. But here is what I want to
put a pin in, which is the tech solution to these problems: universal basic income. I didn't say UBI. Exactly. So, but, yeah, I just want to underscore that UBI, or other schemes where people are somehow given a handout so that they can just not work... I think there are many problems with that. First of all, I think we don't know what to do with millions of people who don't work; that would be highly bad for their mental health, for social peace. But even worse, I think if you create any system like that, based on dividends, based on income, based on other things, then as long as society knows, oh, these are the creators, Peter Thiel, Elon Musk, etc., and the rest are living off the income that they've created, that would create a horrible two-tier society, where there are those with very, very high status, and then all the rest. We have a horrible...
We have a horrible two-tiered system. But it will get even worse. Now, I mean, look at Norway, right? They have a sovereign wealth fund that's larger than their GDP, and it's coming from oil, but people are public owners of that, right, and they're doing okay. And they're working. But they're working in Norway. I'm in favor of work, but I want to push back on just a couple of things within that. The system that's already been designed is a two-tiered system. And there's already that sort of Randian philosophy that there are makers and takers. Exactly. But when you have an economic system that requires labor at its cheapest level, and you have the outside pressure of globalization that continues to drive those wages down and conditions down, well, we've created the conditions for that permanent underclass, and then we treat them as though their poverty is a function of vice, a function of a lack of virtue. And that's what I want to push back on. I don't view money that goes into those communities as handouts; I view it as investments. And we have to find a way, within this... I love the idea of giving people
some ownership over the industries that drive the country. I think for too long we have allowed these companies the providence of the stability of this country, the subsidies of this country, the investments of this country, and asked for no vig. And I do think the house should always win, and the house should be the American people, and there should be a rake.
Right. 100 percent. Yeah, 100 percent, John. Now you're definitely a co-author. Give me my prize. But you also put your finger, in passing, on something that's very important. And you might want to have Michael Sandel on the show to talk about this, this ideology of meritocracy: that somehow all of those who are so successful are well-deserving and virtuous, and all of those who have lost out from globalization, or technological change, or social change, are losers that deserve their fate. I think that's been very, very pernicious. I think you cannot understand the rise of Trump, the rise of anger in this country, without that meritocracy ideology, and he's been its most eloquent describer. It's a very, very important thing that you put your finger on. Not Trump, Michael Sandel, that is. Who the fuck...? There's so much fun to be had at MIT. Who would have thought that? Please don't have Trump on your show, John.
Folks, I'm going to be honest with you. You know, a lot of times I'll be pitching new products, and, I don't know, maybe I muster some enthusiasm. But then every now and again a product comes along and I'm like, oh, I actually use... I actually wear those, they're super comfortable. And that's what we've got now. Folks, Bombas is in the house. Bombas, baby! That's the alliteration I'm looking for. Bombas. I can't even tell you how excited I am that we got Bombas. I don't know about you, but I'm a sock man. Like, I like a nice, comfortable sock. If you give me a sock, every other part of my body is immune to discomfort but my feet. You throw on a nice pair of socks, man, and you can have yourself a fine day. And Bombas makes the most comfortable socks in the world. Man, just get rid of all your old socks. You know what happened to me recently? I had some socks in my drawer, and I put them on, and it was as though the fabric had expired. Like, when you pulled it on, it made a noise like the universe was coming apart. Like, almost a crackling. Neither of us had a good day that day. Here's even the best part about Bombas: for every item you purchase, an essential clothing item is donated to someone facing housing insecurity, a one-for-one model. With over 200 million donations and counting, head over to bombas.com/weekly and use code weekly for 20% off your first purchase. That's b-o-m-b-a-s dot com slash weekly, code weekly at checkout. These are really interesting, and I really do like them. And what I love about them most is that these are actionable, specific ideas. What so frustrates me about
our political process in this moment: you know, we have this incredibly powerful technology that sits just on the horizon, but we have a political system that is unable to articulate mostly anything but platitudes. We have to start talking about kitchen-table issues. Our working families must get the thing. So you think, you think, like, creating American AI dominance and cryptocurrency are not actionable issues? Well, let me tell you, as a proud owner of Melania Coin, I can tell you that my future is set. But, you know, we are in this position. What's so, I don't want to say ironic, about it is that we could probably plug these questions into AI and come up with more specific, actionable, and interesting solutions than what's offered by our political system. Right. And that's the part I can't wrap my head around.
What... where do you guys see... why is that the case? Well, I actually think that... So the idea of wage insurance is in currency. It's being discussed. I've discussed it with people in the Trump administration. I've discussed it with people in the Democratic leadership. I think there's enthusiasm for that. Or, you know, there are also, I should say, new efforts around modernizing training in a way where we can measure it and monetize it and return the revenues. You know, Raj Chetty and the group at Opportunity Insights at Harvard, they're working on this in a really innovative way. Harvard? Safety school. Spoken like MIT. So I do think there are a set of policies that, again, I'll call no-regrets policies. We won't be sorry we did them even if the worst doesn't come to pass. And we know how to do them well. They're not totally out of reach. So I absolutely agree with Daron: we need to shape the conversation, we need to deploy the technology constructively. But we also have to recognize we are in for a rough ride, or at least a ride, because the transition is going to be so fast. So we should have policies that support people, support their income, support job transitions, right, and give them also an ownership stake, so that they're on some of the upside of this, not just the downside. And distributing capital more broadly would have that effect. David, I can't tell you how much I love that, and how
“much I think that in some ways, over the last few years, I think that's what's gone wrong with”
the economic condition in this country is that labor has never been offered an ownership stake
in the value of their productivity. And Dorona, I want to ask you about that. And then, and then, and I've so appreciated this conversation. But great. You know, when we talk about productivity gains, because that's always how it's framed. It always outstrips wage, always. And maybe that's just the way the decision is. No, it's not how it was until the 1970s. Exactly. But I'm saying, I'm saying, since the 1950s. Yeah, yeah, for 50 years. Since the Reagan Revolution. That's right.
But, you know, people say that about, like, the whole capitalist system. Well, it was a capitalist system in Europe and the United States, from the 1940s to the mid-1970s, where, that's right, wages grew faster than productivity. Workers with less than a college degree had faster wage gains than managers. That was feasible. There's nothing in the laws of economics or in the laws of democracy against that. We just chose a different path since 1980. And do you think at this point,
those powerful corporations have, there's almost a, that they kind of have us at an extortion point,
where they say, you know, oh, if you try and do anything to regulate us, or you try and do anything to tax us, we'll leave. Well, look, this is such an important point. First of all, these corporations are absolutely enormous. I mean, it's not a fair comparison, but I just did the calculation last week. Each one of the largest seven tech companies has annual revenues in current dollars twice as large as the British Empire's GDP in the middle of the 19th century.
These are enormous, enormous corporations. Right. They need to be regulated. But the rhetoric that they cannot be regulated, that AI cannot be regulated, that's false. China proves it. Okay, I don't approve of what China does. I don't approve of what they intend to do, but they show very clearly AI can be regulated. Tech companies, Alibaba, is now completely subservient to the interests of the Communist Party in China. We could also make Google and OpenAI and Anthropic
be much more in line with democratic priorities in the United States. There's nothing in the laws of economics or in the laws of physics that says these companies cannot be regulated. They're not delicate flowers. When Sam Altman says, oh, if you charge us for intellectual property, you know, we'll be put out of business, that's not only not true, it's kind of pathetic, because they're saying, we don't produce anything of value. If we actually had to
pay for our inputs, no one would buy it. Right. That's crazy. Right. So, I mean, look, I think, yeah,
there are constructive ways to steer it. We don't need to shut it down. We don't need to regulate it to death so it can't move. Right. The U.S. is innovative, and that's great. We have a lot to be proud of in that we have led this technology. We're building it out quickly. You know, it's valuable. But it's an opportunity, and we could squander it. We need to steer it. If it's just left to its own devices, it's not going to be pro-worker. What you're hearing
from me and David is that AI is a very promising technology. But that's precisely the reason why we've got to put in the care to make sure that we use it for the right things. Gentlemen, you have done the impossible. You have done the impossible, which is you have somehow not allayed my fears, but you've given me hope that the future has actually not yet been written, and that what it holds is opportunity. And we have the opportunity to write it in the proper way. But I think what you've done really well today is you've given specifics. None of this is platitudes. This is all the
specificity of here's what it could do. Here's the damage it's going to do. Here's a way to mitigate it
and here's some ways to give us shared prosperity from it. And I think that's, that's truly,
I think that's the conversation. Have the two of you thought about having a podcast? We were hoping we would join you after this. What? Oh, yes. Unfortunately, what I've done is, I had my data scientists strip-mining this conversation. I don't need you.
We're done. I've created AI avatars of the two of you and now we're done. Fantastic.
But guys, that frees up some time. Man, thank you so much for this conversation. I truly appreciated it. Daron Acemoglu, Nobel Laureate in Economics and Institute Professor at MIT, and David Autor, Ford Professor of Economics at MIT. Guys, fantastic, and really appreciate it. And I hope to continue the conversation with both of you. Thank you so much for having us. This was superb. We love what you're doing. It's great to have this conversation. This was fantastic. It was a lot of fun. Thanks, Jon.
Holy smokes. I'm feeling something. At home, are you listening to this? Are you feeling something?
I'm feeling the possibility of futures unwritten, the opportunity that it gives us to correct our path, to put us on a righteous path towards a more positive, productive, equal future. My God. And I apologize, we don't have our normal staff chat today because, as you can see, I'm on the road, so we weren't able to accomplish that. But man, I so appreciated what those gentlemen were saying, and the specificity of it. And I hope you did too. And it's put
me in something that I've needed for a little bit, which is a better mood. I am now, and, by the way, maybe I'm drinking the Kool-Aid too, but I am in a slightly better mood than I was at the beginning of this whole schmegegge. Man, I enjoyed that conversation tremendously.
And thanks, as always, to our fantastic team: lead producer Lauren Walker, producer Brittany Mehmedovic,
producer Gillian Spear, video editor and engineer Rob Vitolo, and audio editor and engineer Nicole Boyce. They had to work today. Today was a day when I couldn't figure out how to log into Riverside, so they had to do extra work today. And, as always, our executive producers, Chris McShane and Katie Gray. Very nice. And we shall see you next week. The Weekly Show with Jon Stewart is a Comedy Central podcast produced by Paramount Audio and Busboy Productions.
Paramount Podcasts.


