(upbeat music)
- Welcome to EconTalk: Conversations for the Curious, part of the Library of Economics and Liberty. I'm your host, Russ Roberts of Shalem College in Jerusalem and Stanford University's Hoover Institution. Go to econtalk.org, where you can subscribe, comment on this episode, and find links and other information related to today's conversation.
You'll also find archives with every episode we've done going back to 2006. Our e-mail address is [email protected]. We'd love to hear from you. (upbeat music)
- Today is February 24th, 2026 and before introducing today's guest, I want to give you the results from our survey of your favorite episodes of 2025. Here are your top 10.
Number 10, a tie between the economics of tariffs and trade with Doug Irwin, and why Christianity needs to help save democracy with Jonathan Rauch. Nine, how to walk the world with Chris Arnade. Eight, the music of John and Paul with Ian Leslie. Seven, Will Guidara on unreasonable hospitality.
Six, EconTalk episode number 1,000, a solo episode with me. Five, the magic of Tokyo with John McRennel. Four, the perfect tuba with Sam Kenonus. Three, shampoo, property rights, and civilization with Anthony Gill. Your second most favorite episode was a mind-blowing way of looking at math with David Bessis. And your favorite episode, listed by 33% of listeners in the survey: what is capitalism, with Mike Munger. Thanks to everyone for voting and for your comments, which I love receiving. And now for today's guest: Tyler Cowen of George Mason University, Marginal Revolution, and Conversations with Tyler. This is Tyler's 20th appearance on the program. He was last here in November of 2024, talking about the great Vasily Grossman novel Life and Fate, which many of you read profitably and enjoyed; I appreciated hearing from you.
Tyler, welcome back to Econ Talk.
- Always happy to be here, Russ.
- Our topic for today was inspired by a recent talk you gave at the University of Austin, which we will link to; we'll also link to the top 10 episodes so you can go back and revisit them if you'd like. Your talk was about how AI will, or should, or could change higher education, with some other things along the way. Before we get onto that topic and some of your thoughts, I want to start with your current thoughts on the disruptiveness of AI to the job market. A lot of people have been saying recently that we're in for a very tough time, that we might lose all our jobs, that AI could do everything better than a person, except maybe comforting someone with a warm look. Is that going to be the only occupation left for us poor humans? A lot of jobs are probably going to go away and not come back, but how many? A lot? All of them? Are most of us going to be unemployed and very poor? There's an immense amount of doom and gloom this week and in the last couple of weeks on social media, and I want to get your thoughts, Tyler. - There will be plenty of new jobs under AI. Just look at the energy sector: to the extent AI takes off, we'll need much, much more energy,
and those jobs require people. It will change where jobs are and what individuals do. Or look at biomedical trials: again, to the extent AI does well, it will produce all kinds of new and interesting ideas for drugs and medical devices. These will need to be tested; they will need to work their way through the regulatory process. I also think, somewhat counterintuitively, AI will lead to more lawyers. I'm not sure that's a good thing, but we'll need to write a lot of new laws for the AIs. Now, a big part of me believes the AIs would write those laws better than humans could, but rightly or wrongly, I don't think we'll let them do it. So humans will use AI assistance in drafting those laws.
I think lawyers who work in government will be a growth sector for the foreseeable future. So those are just a few areas, but as you well know, it can be very hard to predict where future jobs will come from. If you go back to the early days of the industrial revolution and you told people all these agricultural jobs are going away, would you have had two people sitting around the campfire saying, oh yes, a lot more of us will become podcasters? Well, no, right? They had no idea. So we're in that same position. I do think there'll be more leisure time, and if that's what one means by less work, it's mostly a good thing. It may not be a good thing for everyone, but I think that will be one effect of this. There's already more leisure time, because the AI can help you finish tasks at work more quickly. It's just not reported to the boss that this is going on. - You suggested that the legislation that AI would write might be better, or would likely be better, than what humans would write.
I don't know if this was an actual quote, but I saw something in quotation marks from Sam Altman suggesting that governance would need to be improved in a world where AI is much more important in the job market. I don't think AI is going to be good at that kind of thing. Tradeoffs are the kind of things we're going to care about as human beings; it's not about optimized governance. Governance is almost inherently about tradeoffs. Do you agree with that? Or do you think there's a role for AI in figuring out how we ought to restructure, say, regulation in a world with much bigger AI? - Well, I would vote for Claude or GPT over most of our current leaders, or even people in the regulatory apparatus,
but that's not how it's going to work. It will be used as an aid. The real problem is whether humans will listen to it.
I think it gives, on average, better governance answers. It's not exactly my point of view, and it differs across the leading models. But again, better than what we typically have in office. I think in the short run, some governance will be worse. Just imagine the process for regulatory comments being overloaded by high-quality but pointless AI-generated comments. I think we're already seeing this. So there'll be a lot more spam. Any kind of open process that receives input will become overloaded, I think.
- Are you pessimistic at all about the economic and financial implications of AI, of a world where it's much more integrated into the workforce? The doom-and-gloomers suggest there might be a collapse of aggregate demand, because the people who don't have jobs won't enjoy any of the benefits of the low prices. What's your take on that?
- There's a lot of different issues wrapped up into that.
You said, "Am I pessimistic at all"; the words "at all" carry a lot of weight there. I can tell you my biggest worry, and that is that AI will change governance in ways that are hard to predict. We have worse political models than economic models, in general, no matter what your point of view. And it's possible governance becomes worse, and if governance becomes worse, that's bad for the economy. I don't rule that out at all.
So that's a significant worry, but in terms of normal economic mechanisms, I expect we will have more wealth, we will not have fewer jobs. Many people at the very bottom
will get all kinds of services for free or near free. I do think we will have more billionaires and more mega billionaires, because you will have small numbers of people building these companies that are quite large
in revenue terms. That will be easier than it is today, but those new companies will mean new projects, and that will create many new jobs. And I don't think we're headed for anything like mass
unemployment, absolutely not.
- So, that said, it doesn't mean I have no worries, right? I've plenty of worries. - When you said your biggest worry was governance that might affect the economy, is that your biggest worry in economics, or your biggest worry overall? - It's my biggest worry with AI, right? That if politics gets worse, economies become worse also. There's plenty of negative mechanisms operating today. Most of them do not have to do with AI,
but if you add AI into that mix and just see it as a big change where the people in charge may not regulate it well, may not regulate it properly, may not do whatever.
There are so many scenarios where things politically get worse. And again, with economics we have models, like the price system, Hayek, comparative advantage, Say's law, you know, often true, different ways of thinking through how things will go. Politics, I don't think we have very good models. There's the median voter theorem; that's worth something,
but we don't even know what the median voter wants when it comes to AI. - I guess I'm a little worried about a lot of leisure, if we get some kind of nirvana of not having to work very much. Keynes, in his essay "Economic Possibilities for our Grandchildren," imagined that if we got a lot wealthier, we'd tend not to work much. He's about half right there. He said we were going to get a lot wealthier; he was right. He thought we'd take a lot more leisure; he was wrong. We don't, at least we don't take it at a point in time. We might take it over our whole lifetime. But people still work very hard, obviously. I guess the question would be, if AI did displace lots of skills, that could be troubling. And also, I guess, the speed. You know, I worry a little bit about driverless cars, which I think eventually will come, and what that does to the millions of folks who drive cabs and trucks. And if that happened quickly, that transition might be politically very unpleasant. Thoughts? - If you take something like trucking, a trucking job has a lot more to it than just driving.
There's all sorts of ways in which you,
you know, load, unload cargo,
deal with points of contact.
I think those changes will come relatively slowly.
When will Tesla be ready to displace Waymo as a truly cheaper alternative with driverless vehicles? Again, I expect that within 10 years, but I don't think it will all happen in two or three months. So a lot of humans still will want a taxi driver; a Tesla and a Waymo are not free, and taxi drivers do not earn that much. It's not completely obvious to me what the equilibrium there looks like.
I know there's a difference between fixed costs and marginal costs, but a lot of systems end up having higher marginal costs than you think at first, once you make them universal, and they have to handle all possible problems.
So we'll see, but jobs have changed over the long sweep of human history. I think they will change somewhat faster at this time. I'm not that worried about additional leisure time. It's bad for some people.
We saw that during COVID. But if you want to work, your chance to create and manage projects will be far, far higher than it ever has been in the past, and that will keep us busy, whether it's for earning monetary income or not. - Have you been in a driverless car? - Yes, it's fun. - I think it's fabulous.
I could never, I would never choose a human driver
over a driverless car in the current situation. - Well, you're not paying the full cost of a Waymo, right? - Sure, no. - So it's subsidized for you. And people claim the Tesla network will over time prove better because it's accumulating data, and the marginal cost of that will be very low. But as things change on the roads, rules change. I don't know; what people expect changes.
Maybe we'll want people driving vehicles to be performing other services. It will be connected to package delivery. I'm not sure, but we shouldn't over-predict the future. And if that one job truly just totally does go away,
I think that will be fine. - Yeah. - Again, things will get cheaper in all kinds of ways, if that's what ultimately happens.
- I guess the thing that seems to me, having thought about it at some length, and I think you're sympathetic to this: I think there's an assumption in the current world that AI will mean there'll be 11 people able to make an enormously extraordinary living, and everyone else is going to have cheap products and otherwise very little to do with their time. That's not the world I envision. As long as there's growth and there's a chance for people to improve themselves, there's economic opportunity; I think the world will be in a much better place in an AI world. And to me, that's the only question: will there be opportunity for self-growth, career paths that are interesting? There will be things for humans to do, I think, for sure. Will a vision of improvement be possible?
- Once there are more goods and services, which is what it means to say AI is working, it's relatively hard to get to a conclusion where most people are worse off. The goods and services are sold; if need be, their prices fall. The production, marketing, and distribution of the goods and services generates income of its own. Keynes had this one scenario in mind where you produce more but it's all hoarded in the form of currency: the liquidity trap. That's not plausible for an AI-enriched world, where there are all these new and fascinating things people want to buy. So some version of Say's law is likely to hold: the production of these goods and services generates incomes, and incomes for people to buy those same things. So that is by far the most likely scenario. - And my only disagreement is that when you use the phrase "people will be better off," it sounds like what you have in mind is that they'll have more goods and services. That's not the only thing we care about. We do care about those things. - Your jobs will be less routine also, right? - I hope so, that would be great, right? - Yeah, I think work as a source of meaning is not unimportant in the modern world.
So it'll be interesting to see how that plays out. I don't know. - Many people may prefer routine jobs. That's one worry I have. Another worry I have is it's possible
the people who are most displaced will be the upper-upper-middle-class white-collar workers, and they will have to move to Houston and work for energy companies, which is not the end of the world, but politically they will hate it. And rather than being, say, a consulting partner for $1.4 million a year,
they'll be sent to Houston and they'll earn $300,000 a year. And politically they're a very influential group. So that, I don't know how we survive that politically. What are they gonna vote for, right?
- Yeah. - Those individuals, you could say,
in a sense, run the Democratic Party, and they're not going to be happy under a lot of scenarios. So that gets back to my main worry being the politics. - I guess my thought was that a lot of people were saying, well, consulting firms are all going to die because AI can already do a really pretty good job giving companies advice at a tiny fraction of the cost. But I think a lot of what people pay for when they hire a consulting partner isn't the solutions, because those are often, I don't think, particularly good, but the chance to talk to real human beings about their organization and to react to the observations of an outsider and sometimes make a difference. I don't think they're paying for the report per se. - I agree with that, but I think they could do that
with say a third of their current employees.
- Fair enough. - And I think in the short run there'll be a boom in consulting, because everyone needs consultants to tell them how to integrate AI into their workflows.
Though the consultants themselves may not know how to do that, but in the medium term, I do think the demand for consulting services will be down. - So let's turn to higher education, an area that many people think is gonna be changed by AI,
but as you pointed out in that talk
and I think, as is the case in many industries, there's a lot of inertia. Higher education is not the most nimble institution in America. - "Inertia" is a kind word, yes. - So it's not obvious to me that it's going to be revolutionized anything like overnight,
and it's not clear that it can be revolutionized at all. People are paying for a variety of things when they go off to college, but let's think about just the education part. You start off with a piece of advice
that's kind of startling. You suggest that a third of college courses should be devoted to using AI well. - A third of total course content, yeah. - So explain what you meant by that. - Well, almost every job in the future will involve knowing how to use AI well. And at most schools that isn't taught at all in any formal sense; particular professors might teach it, as indeed I do. So I think what we should do is devote a significant part of the curriculum to a skill everyone will need that right now is quite scarce.
And keep in mind when you're teaching people how to use AI well, it's not at the expense of teaching them other things. So you can teach them here's how to use AI to better read and understand Homer's Odyssey and you're teaching them Homer's Odyssey at the same time,
but you're teaching them the combination of Homer's Odyssey and AI. So if you take a third of curriculum time and devote it to AI, you're not pushing out other things very much; you may in fact be enhancing them. Everyone will be learning better. The main problem is our own faculty don't know how to do this, and our administrators probably even less so. So who's going to do the teaching? The students? You could have the students maybe teach the professors, because the students probably have been using it to cheat. - Well, we'll come to cheating in a minute, 'cause I think you have a really nice insight on that. But could you say more, maybe drawing on your own experience? When you say teach people how to use AI, do you mean how to write a better prompt? What do you have in mind there?
- It depends what the area is. So right now you might be teaching people something like Codex, how to use AI to program better and to do your programming tasks. But for some of the humanities, it is just how to write a better prompt. So if you're asking it questions about Homer's Odyssey, how do you ensure you get the best possible answers, the smartest answers? Which advanced model should you ask? Whatever it is you need to know. But over time, less and less of it will be about prompting; good prompting will occur automatically. People will learn that pretty quickly. - I think, you know, at this stage, the bigger challenge is people don't think to use it. - Right. - Sure, just telling people they need to use it. But if biology is the class, how to integrate AI systems into a lab would be the thing to be taught. Again, I fully recognize there's no one there to teach it yet, but that's what you'll need to know. So we should be teaching people what they need to know and have that as our goal. - Yeah, I want to come back to what "need to know" means, but sticking with the question of using it, I just want to point out that you can use AI to understand Homer's Odyssey when you're struggling to remember what happened back in book two, et cetera. For me, the biggest value of it in reading the Odyssey is saying: give me a list of the characters, tell me what page they first appear on in the Fagles translation, and tell me what their main characteristics are so I can keep them straight, right?
I assume there are people all over doing this; I think young people are probably pretty good at it, but older people would go, oh, you can use AI for that? But that'll all be gone in the next year or two. I think people will have figured all that stuff out, how to prompt in thoughtful ways, right? That's not going to be a very big time sink. - But to think through the questions one might have about Homer's Odyssey, to teach people how to ask better questions, that is an unending task.
So for instance, you probably know much more ancient history than the students who are reading Homer's Odyssey; you live near the Mediterranean. But just what are the questions about the historical era of Homer that one should ask? Was it composed orally? How was it passed down? What was the role of oxen in these economies? What did they use for money? What do we know about whether these events really happened? Maybe it's trivial to you to know to ask those questions. You've been podcasting how many episodes? - A lot, over a thousand. - A lot. But other people need to learn how to ask questions better. That's why we're not all podcasters.
- Yeah, that's a great observation. And I think the next set of questions would be about its impact on the rest of the world, what life was like there, and so on. - Or if someone said, let's say it's an archeology class, not a classics or literature class, and you want to have good questions to ask about Homer's Odyssey, I'm not even sure that you and I would know the best questions to ask.
- And we need to learn how to do that. In fact, you can learn it using the AI. - Yeah, I'd ask it: what are the good questions? That's kind of easy. - But you need to put more structure on it. Like, what are the good questions for which purposes? How do I follow up? And so on. - So I want to talk about writing, because I think about that a lot. You had some suggestions on how we ought to deal with writing in a modern curriculum that has AI in it, and how to think about catching people who are overusing AI, perhaps to their own detriment, in pursuit of a credential or a better grade.
- The cheating problem with AI is much overrated. We're simply unwilling to do something about it. Now, it's not that you can detect AI style necessarily. Maybe you can now, maybe you can't; over time, you won't be able to. But just take students, and for, say, two or three percent of their output over the course of their college career, lock them in a room and test them. And if what they're handing in and how they do on the test diverge dramatically, just call them in for a chat. I'm not saying send them to jail, but look into the matter. And it requires a certain harshness, that you're actually willing to pursue a strong divergence in performance.
I don't think you have to kick them out,
but I think as an incentive against cheating, it will work much better than anything we're doing right now. - So you're saying that you put them in a room where they can't have access to AI or the internet. You make them write an essay,
and then you compare that to an essay they've written with the freedom to not be in the room. And if the essay is much better when they have the freedom to not be in the room, it suggests they've used AI.
- That's the claim. - Exactly. It's just a sampling problem. And you could just make them write more of their essays locked in a room. If you think something fishy is going on, you don't have to expel them. You don't have to write this up in a manner where they can't get a job in the future, 'cause some people just get nervous when they're locked in a room, right? Nonetheless, I think there is much more cheating today than there would be under my recommended scheme. - But I want to define what cheating is and push back on that a little bit.
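The sampling-and-divergence scheme described above can be sketched in code. This is purely illustrative: the function names, the 0-100 score scale, the 3% sampling fraction, and the flagging threshold are assumptions made for the sketch, not anything specified in the conversation.

```python
# Illustrative sketch of the sampling scheme: proctor a small random sample
# of each student's work, then flag students whose supervised performance
# diverges sharply from their submitted-work performance.
import random

def sample_for_proctoring(assignment_ids, fraction=0.03, seed=None):
    """Pick roughly 2-3% of a student's assignments to redo under supervision."""
    rng = random.Random(seed)
    k = max(1, round(len(assignment_ids) * fraction))
    return rng.sample(assignment_ids, k)

def flag_divergence(submitted_avg, proctored_avg, threshold=25):
    """Flag for a follow-up chat (not expulsion) if submitted work scores
    far above what the student produces in a supervised setting."""
    return (submitted_avg - proctored_avg) > threshold

ids = list(range(40))           # 40 assignments over a college career
sampled = sample_for_proctoring(ids, seed=1)
print(sampled)                  # a small handful of assignment ids
print(flag_divergence(92, 58))  # True: large gap, worth a conversation
print(flag_divergence(88, 80))  # False: normal test-day variation
```

The point of the sketch is that only the divergence needs policing, so the proctored sample can stay small and cheap.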
I think in the same talk we're discussing, you point out that you don't use AI for your writing, for the columns that you write, which I get. But most people use it for their writing, and they use it anywhere from zero to 100. Zero might be your style: I don't want my style tampered with at all by AI; I'm not even going to have it look at it. The other end: you say, I have to write a piece on how AI is going to affect employment, please write that, Claude, and Claude spits out a perfect 500-word or 750-word op-ed piece. But in between, there'd be all kinds of things. I'd say, Tyler, I don't use it either, but that's 'cause I'm old and set in my ways. But I could say: this feels a little disorganized, could you reorganize it for me? Is that cheating? If I say, is there a sentence here that you find awkward or confusing, could you fix it for me? Or is the definition going to be that you can't use it for anything in those writing classes that you're talking about?
- Well, I think you need to split up the tasks.
So in a big portion of those writing classes, you force the students to write with AI. This is what I'm doing with my current history of economic thought class. And you just say, well, the standard for a good paper is higher. You have to use AI. Try to teach them how to do it, and you grade the joint product. And you should also teach them how to write on their own. Teaching them how to write on their own is mostly a way of teaching them how to think. Most people may not need to know how to write on their own for its own sake, but they will need to know how to think. And writing is a great path to thinking.
- As you and I both know. Yeah, that's a strange thing, by the way. It's not a separate skill, but I think people tend to think of it as a separate skill. But obviously my ability to think comes greatly from my ability to write, and they get all tangled up. I can't think otherwise. I hear these stories of people: the idea came to me in the shower. Me, I'm just wet. I'm showering. That's it. Nothing else comes. The soap comes. So I need to write to think, or I need to talk to people. We did an episode with Lauren Bachman on that topic; we'll link to it.
I have to agree; I think it probably depends on the person. So then the question would be the following. As I think you know, here at Shalem College we have a core curriculum where people sit in most of their first-year classes studying the same thing, in groups of 25 or fewer. They might be reading a great book. They might be reading Plato's dialogues. They might be reading Aristotle's Nicomachean Ethics. They might be reading the Iliad or the Odyssey. And they're struggling alongside their classmates to grapple with the meaning of the text, the import of the text, the lessons to be gleaned from the text, the questions that could be raised that cannot be answered with a yes or no about the text. And that's an extraordinary experience that most of us, and I can't speak for you, Tyler, but there was very little of that in my undergraduate education, in my personal experience.
In fact, the only thing that probably parallels that in my lifetime as a student was in graduate school, when we would sit in our study group of four people and struggle to answer problem sets that had no clear answers. And a huge portion of my education as a graduate student came from those sessions. No instructor, just four of us arguing, struggling. That had a huge impact, and seminars also had an impact. AI can't do that, I don't think. And when I say that, I mean help you internalize deep lessons and understandings, what we might call wisdom and common sense, through the process itself. Or do you disagree? - Well, you and I are both fans of Adam Smith, right? And we know Adam Smith's proposal was that different classes and different professors
compete with each other. So I gave my talk at U Austin, which is not UT Austin; it's the University of Austin, a small school. In a semester, they told me, they offer 30 classes. Thirty is not a lot; it does not cover the entire sweep of human knowledge. I made them a simple proposal: each year, let a student take one class with AI. No more than one, just one, or even one every two years, or even one every four years, just once. And see what they think. If you want, you can have the students take the class in a small group.
You need to recruit three people to do it with you.
And say you choose the topic of Tudor England, which they do not currently offer a class in. It's an important topic. I asked the student body at my talk: how many of you want a class on Tudor England?
Seven or eight hands went up. So let those people try with AI. And just see what they think. See what works. Let different groups of students design different kinds
of AI-driven classes. If they don't like it, they'll just stop doing it. This is Adam Smith's point. So let people in your institution, and I will pose the same challenge to you: just try it once and see what they think. - To be clear, we're very proud of the fact that we are not selling a credential per se. Obviously, we provide a credential; our students graduate from an accredited institution. But that's not what we're selling. We're selling transformation here, right? I like to say that people come to Shalem not to study something, but to become something. So it's a very different environment here in terms of competitiveness and grade consciousness; we're not so big on all that. So that's great for your experiment, because I want to put that to the side; it clouds the conversation. But why don't we elaborate for listeners who maybe didn't hear your talk at the University of Austin? You say, take a class with AI. Let's get into the weeds a little bit. In your talk, you generated a syllabus.
Talk about that, and then how assignments would be done, and so on. - You would, at least at first, work with a coach.
Let's say the class is in Tudor England. The coach does not have to be an expert in Tudor England, but they have to know something about how a class should be structured. So you prompt the AI, it generates a reading list.
You go off and you do those readings. You prompt the AI to generate quizzes. I did all this for the audience during my talk. There's a link where one can do this. The AI can grade the quizzes for you.
And again, students would decide: should this class have a paper, only quizzes, three short papers, one long paper? But the AI would grade the papers, the quizzes, whatever you have. And at the end of it all, if you want,
you can reintroduce a human to grade the whole thing.
I don't think you need to, but I understand people will feel a lot better if we have the coach come along and just certify that we're not insane lunatics here. And then you have a grade and you have a course of study. And there you go.
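The course pipeline described above (the AI generates a reading list, quizzes the student week by week, grades the work, and a human coach optionally reviews the record at the end) can be sketched as a small loop. This is a hypothetical sketch: `ask_ai` is a stubbed placeholder standing in for any real chat-model call, and the canned Tudor England responses are invented for illustration only.

```python
# Minimal sketch of an AI-run course. `ask_ai` is a stand-in for a call to
# whatever chat model you use; here it is stubbed so the pipeline is runnable.
def ask_ai(prompt):
    """Placeholder for a real model call (Claude, GPT, etc.)."""
    canned = {
        "grade": "B+",
        "quiz": ["Why did Henry VIII break with Rome?"],
        "reading list": ["Elton, England Under the Tudors",
                         "MacCulloch, Thomas Cromwell"],
    }
    for key, value in canned.items():
        if key in prompt:
            return value
    return ""

def run_course(topic, weeks=15):
    """Generate readings and weekly quizzes, collect AI grades, and return
    a record a human coach can optionally review and certify."""
    readings = ask_ai(f"Generate a reading list for a course on {topic}")
    record = {"topic": topic, "readings": readings, "weekly_grades": []}
    for week in range(1, weeks + 1):
        quiz = ask_ai(f"Write a quiz on week {week} of {topic}")
        # The student's answers would go here; the AI then grades them.
        record["weekly_grades"].append(ask_ai(f"grade this quiz: {quiz}"))
    record["final_grade"] = ask_ai("grade the whole course record")
    return record

course = run_course("Tudor England", weeks=2)
print(course["final_grade"])  # "B+" from the stub
```

The design point is that every step a coach would do (syllabus, quizzes, grading) becomes a prompt, with the human reintroduced only at the certification step if desired.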
Now, I have more radical ideas that I think are actually better, but let's just start by having AI try to copy a human class. My more radical idea is you just chat with the AI for, say, three months, 15 weeks, whatever. And at the end of it all, you have a different AI grade your chat with the first AI. What did the person learn from this chat? A lot, a little? B-minus, A-plus? I think that is eventually how it will work, but I know that's too radical. Let's start just by copying how a human would teach a class, but put in an AI instead, at zero marginal cost to you. And again, if it's missing human warmth or insight or depth or in-person discussion, and that really matters, students won't take it. But I think you'll have a lot of students who want to learn about, say, Tudor England. And I suspect your college also does not teach a class on Tudor England. - We do not. - And they'll do that instead for one of their classes.
And then just see, over time, where are the students flocking? Do they want more AI or less? I think, where we as academics can say we're not sure, let's have a market discover that, as Adam Smith himself would have indicated.
- So I think that's a fascinating example, having the conversation,
which, by the way, when you first do it, is really extraordinary, right? When you first, I'm sure you've done this, I've done it, there's a topic you wish you knew more about.
So you approach the AI and you say, treat me like a high school student, or treat me like a freshman in college, or treat me like a novice, and you start going back and forth, and then you say, give me three examples, so I can just see whether I really understand it.
And then you say, I didn't really get it, I don't think; can you make it clearer for me?
And it never gets tired, never gets bored, right?
It just relentlessly waits for you to talk to it. It's kind of an amazing thing. Now, whether you could sustain that over the 13, 14 weeks you're talking about, I think it's a little harder. I'm not sure, maybe we'll get used to it,
but that strikes me as difficult. And it would be hard because you wouldn't know exactly what you should be talking about. So part of the challenge would be setting it up, so you'd tell the AI what you wanted to talk about
to help you learn something from that, 'cause you know you've got the exam at the end, from the other AI. But I think the creativity is gonna come from educational entrepreneurs,
in doing more than that. As you point out, that's a great, interesting first step. And it's a particularly important first step where you're trying to transfer information, right? So if you're trying to understand, say, how the cell works, right?
You need lectures. You're not gonna figure that out sitting around a room with a coach, in the absence of anything else, and acquire the kind of information
you need to have an understanding of the subject.
But if you're reading the Odyssey, or, just to take a pointed example, my students who've, you know, come back from war, literally, and are reading the Iliad, which is about wrath and vengeance and bloodshed,
and the challenges and trauma of war, doing that on your own in a 15-week conversation with a machine is not the same as doing it alongside people who've gone through that as well. So the question is--
- Keep in mind my initial proposal. If the topic requires it, you can mandate groups of two, three, five, 10, whatever. If that's important, and it may be for the Iliad, especially in Israel.
So that's fine, you can do that. - Well, that must be-- - The person doesn't have to be alone. Well, we're going to see the whole 15-week thing as highly artificial, right?
- You think. - We're gonna move away from that over time. - You think. - I do. - It's a weird thing, isn't it? It's such a weird thing.
And you've got to fill it up somehow, even if it doesn't deserve being filled up,
given the topic of the class.
You've got to teach the whole time.
- Imagine a class on the Iliad,
and you have everyone read it in six weeks,
and then they move on to another text on warfare. So there's so much more flexibility in the AI model. But just to pose this to you as a challenge: you're president of a college, will you allow this experiment, that a student can take one class with AI,
and just see how they like it? As Adam Smith more or less recommended. - But what I was gonna-- you beat me to what I was trying to get to, 'cause what we agree is that if you think
that doing it in a group is important, that can be part of the experience. And of course, the extraordinary part of this, which, as a president of a college, I'm very aware of, is that the coach might be cheaper
than somebody with a PhD in classics, right? - Much cheaper, right? - And they won't insist on all kinds of other treatment. - And they won't insist on passing on their own theory of the Iliad that they learned from whomever.
It's a very appealing vision, but I'm just trying to think out loud about how this group experience could be captured.
So if the four of us, me, you, and two others say,
we're gonna read the Odyssey together, right? - Right. - Some of the time we're gonna be alone, we'll be reading the text alone, usually.
And not always, we might read aloud together in parts of it,
harder parts, challenging parts, provocative parts, but a lot of it we'd read alone. A lot of it we'd talk to the AI back and forth on, around where we couldn't understand something, where we're trying to clarify.
What would we do when we come back together, and what could the coach do that would make that more akin to what is the current experience of a great teacher and a great class? - We could help each other with our papers.
- Yeah. - Sure. And help directly, but also help each other use the AI to learn about the topic your paper's on. Just discuss with each other,
and you could have the AI record a group discussion, and then just ask it, well, what do you think?
And then say, well, people made these points,
were there any factual errors in the points people made,
or would you add something to this? And it can speak out loud if you want. It could have your voice, right? We can do this. - Sure.
- And if we want to take a class with the president, we could ask the AI, what do you think Russ Roberts would say here? Then we get to have some weird version of a class with you on Homer.
There's so much material from Russ Roberts. The AI is an excellent model of you. - Really? - The possibilities are endless. - Well, let's talk about your history of economics
class. What do you do in there? Do you talk? - I do. I lecture. I also, I think, convey something
about the vividness of human face-to-face communication. But I gave them an assignment last week. I said, use AI to teach yourself the Ricardian model. And they've all been doing this. And then I said, this week, which is later today,
I'm going to go in and I'm going to teach you the Ricardian model. And I said, you don't have to report back, but I just want you to mentally compare how it did and how I did.
You don't ever have to say anything, but that's a big part of the lesson. - That's fine. - And that's what we're doing. - Your jokes will be better.
But that may be your only advantage. - Tyler, I worry-- - We'll see, but clearly over time, I will lose some number of what might be my current advantages. And if I end up doing different things
than what I do now, I'm fine with that. I'm ready to adapt. I do much more podcasting because of competition from AI, which competes with my writing more than my podcasting. And I do more personal appearances,
which the AI can't do at all. So I'd say I've adapted at least half of my time usage already because of what you might call AI competition. So I'm very ready for this. - You mentioned an application on a computer,
I know one is called LearnLM, which is trying to improve the quality of AI tutoring. Do you know what it's actually going to do? Do you know anything about that in terms of nuts and bolts,
what they're trying to achieve? - Very little. I've seen quite a few projects of people who take an AI, there's a base model, and they modify the base model. So it won't tell you the answer right away,
or it talks you through the steps of learning, or 30 different other things. There's a lot of ed tech startups. My intuition is none of those, or only a few of those, will succeed. People are just going to use the basic foundation model.
I'm not even saying that's better, but it's what they're used to, and I don't think the bells and whistles on top will be the equilibrium. So when I teach using AI, I just stress, not, here's some company
with a neat little thing that will walk you through,
talk you through; just, here's the base model,
here's how to use it. That's what I think we'll be doing. So let's give it a go. - People want one model to work with, I think. - Yeah, that's true.
- Let's just be a little more radical, even, than the last version you gave. So as you said, I'm president, so let's pretend I can do whatever I want, which we know is not true,
even in a corporation, let alone a college. But let me say it differently. Tyler, let's say you start a college. - Okay, yes.
- The college is, you have to design your own major,
your own curriculum, it's all AI, everything. With some coaches, let's have some human coaches. - Right. - And let's have the potential for interaction with the other students as well,
both socially and educationally, occasionally. But, you know, 15 weeks is artificial. Four years is artificial, eight semesters is artificial. If I walked into your college, I'm 18 years old and I'm bright and curious,
which are the two things I usually care the most about when I think about education. And I say, I want to be transformed, I want to become something.
I don't want to just acquire the base of knowledge of, say, economists. And as you and I both know, most economic education is telling people what economists think about how the world works.
It isn't teaching people how to think about how the world works, which should be the same thing, but they're not. So, let's say I'm that person. Let's just say, let's do economics.
I come to you and I say, I want to know what you know, more or less, about how the economy and economics works and what I can learn from it. I'm an idiot, right?
I'm a tabula rasa. I might need your advice, but would you let me? I would be enamored of a world where I then get to create not just my own class
on Tudor England, but my own class on economics. Maybe, you know, like our former colleague, the late Walter Williams. One of my favorite things:
he would give out, on the first day of his graduate class,
I think a hundred questions, maybe a little more than a hundred; over time, it grew. And you'd say, the final exam will be 25 or 10 of these questions. So you get the questions in advance.
These aren't questions like, what's the capital of England. They're really hard questions. And you can find these online; we'll put a link to them.
It's a fabulous educational resource, because it says, to answer these questions,
you have to know a lot about how to think
like an economist. And you'll learn a lot about how the world works. So could you imagine a world where I give a degree in economics based on something creative? What would it be?
Now that I have this incredible tutoring tool,
how would I certify mastery? - You just test people, grade their papers. I mean, the British have a tutoring system to this day in many parts of the country. - Yeah.
It works acceptably well. It could be 10 or 20 times better with AI assistance. So we know some version of that works, right? We can just do it now much better. Now it may be possible to improve on it further yet.
I would say, get a few years of data, feed it into the AIs that have been doing this, and ask them how to improve it. You don't really quite have that same possibility without the AIs.
So they'll be figuring out what works and what doesn't. That's another reason to do this. You're feeding them the actual data. - When education was somewhat elite and not expected to be universal, and I'm talking about higher education,
Higher education was for a small part of the population, so a lot of these issues weren't relevant. People came to, quote, be educated, to get mastery of a set of subjects.
It's so many different things in America right now, in most places. The acquisition of wisdom is not the focus of most education. Is there room for a college, a startup, that would certify that experience?
You think that would sell like hotcakes? If not, why not? Does your college do that? You know much more about that than I do. Does the University of Austin do that? I don't know.
- I know what we do, but I don't think so.
I think the phrase I used before
is, we're thinking a lot about becoming something rather than studying something. A lot of what I think we do here at Shalem is to help people figure out what they want to become, not just help them become that thing.
It's both.
It's happening at the same time.
People come here with, if I can use a fancy word, and I think it's the right word, inchoate ambition to make their country better. They're not sure how to get there from here. We don't give them a path, but we try to give them
the education that will equip them to make a difference in their country and to make it better. That's such a crazy goal. It's not anything related to what we normally think of, I think, as education in America,
at least when I was going through that experience as a faculty member.
But it's an amazing goal.
- It's an amazing goal. It's a fabulous goal. It's what everybody would want if they believed it would work, and if they believed they could still get a job. And our students do, and they do very well,
but there's an anxiety about that, naturally, for many people. - A lot of people do that off campus, of course. That's how you and I mainly learn. It's called life. - Yeah.
It's called life.
So there's life, which includes the internet and AI, right?
And we don't learn in 15-week batches, you and I. We pick up things as we wish. We learn, we stop, we go forward, we stop, we pick up another thing. So we're the supposed experts,
and that's what we're doing. And we insist that everyone else has to do it some quite different way. That, to me, is what's weird.
- But isn't that because we've added this second business
of certification, right? We want to stamp on their forehead that they've acquired some minimal level of competence, either in knowledge or in mastery, not complete mastery, obviously, but some minimal level of competence.
And once you contaminate the educational experience, and I'll use that word, contaminate, with that side project of telling, say, employers that this person is either smart or knows this set of stuff, it changes everything, right? - The AI can outcompete us in certification easily.
We're not doing that yet, but it's the future equilibrium. Just have a person spend a day with the AI. And in this case, you have the AI pre-arranged to be testing the person across a number of areas. You'll get great certification,
strengths, weaknesses, temperament, what they know, what they don't know, way better than these A's and B's, or, I guess at Harvard and Stanford, it's only A's you get.
So, again, it's only an issue of will. We can solve that problem whenever we want to. I get that we don't want to do it 'cause we don't want to unravel the bundle.
But sooner or later, that's what will happen.
- I asked a high-ranking former member of the Israeli military establishment how they would, if they were in my job, change the admissions process to help select for leadership. So, I care about two things here, right?
I care about intellectual aptitude, which is a combination of brain power and curiosity. And then I care about ambition to make the country better, and the capability of actually achieving that. So, I was asking, how do you do that second thing?
How would you interview people? What would you do differently? He said, well, I'd take them for three days, I'd put them in the woods. I don't think that's gonna be an effective marketing strategy for my college.
It's interesting, it might appeal to a certain group of people, but probably not, not gonna be what I can do. But I'm thinking about you. We had a great conversation about talent. And you have to seek out talent for your philanthropy project,
Emergent Ventures, which is an incredible project.
And we talked about how you interview people.
So, have you thought about, and maybe you already do, using an AI for that? I mean, do you say to people, go off for a day and send me the transcript? - Look, the AI gets to know you. - Yeah.
- Yeah, of course. And they learn from me. They ask the AI, well, what's Tyler gonna ask me, right? - Yeah. - So, I need new questions all the time.
I think AI soon will be better than most human interviewers. It may well be already. I'm not sure it will soon be better than the best human interviewers, but again, if it beats most, we've gotten somewhere. - It seems like a lot of the challenge there would be the fact
that it is awfully obsequious. - Well, you can change that very easily. - Right, you can, but I'm just saying, if you told me to go off and talk to an AI, I guess you'd have to ensure that I told it,
don't suck up to me too much, 'cause I need this to be somewhat objective, right? - And this is part of what we'll teach people, in the third of the curriculum devoted to the AI: how to get different moods from it, right?
But it's not hard. - And eventually, there'll be a greater diversity of models available, so it'll be easier yet. - So, I don't know if you saw this post from a past EconTalk guest, Noah Smith.
He said, you know,
I get the kind of pleasure from using AI
that I used to get when I first started using social media,
and then I found out that social media is ruining the country and corrupting our institutions. And I don't remember the exact wording he used, so apologies if I'm getting this wrong, but what he meant was, the social consequences
of social media weren't as attractive as they were for me, sitting by myself, scrolling. Do you think about that with AI at all? I mean, it's obvious, I think, to me and probably to listeners that you really enjoy this world,
this door we've walked through, and there's a part of me that just finds it so extraordinary, right? I love using it; a lot of it is just that it can do what you ask it to do at all, which is so fun.
And it's gonna get better. And, as you point out, and this is an important thing I want you to talk about, most people don't really know what its capabilities are,
'cause they're using the free model, you know?
- That's right. - That's really important. There are a lot of users of it, but most of them are using the free model, and there are very, very few people using the higher-end models,
and they're very, very different. - Are you confident this world we're walking through is gonna be a world we're gonna be happy to live in? - I don't know what the word confident means here. I think people on the whole do not love change,
and these are big changes. - Yeah. - Did people love the industrial revolution at the time? No, some did.
Is it arguably the best thing that ever happened to humans,
basically, yes? So I think it will be like that. I said once, in some other interview, the more people are upset, the better we'll know that things are going well;
that was tongue-in-cheek, but there's some truth to that. And it will just change expectations about what jobs will be like, or what future your kids will have,
in a way that the people who are clued in will find quite unsettling. I wouldn't deny that at all; it worries me. It gets back to this point: we don't know how the politics of this will evolve,
including in China. We're only talking about America, but China faces its own version of this, the EU does, the rest of the world does. We're gonna have a lot of different decisions made, but I think, for the most part,
it will prove too difficult or too costly to stop. - I think that's true. You wrote a book a while back that we talked about, called "Stubborn Attachments," which is one of my favorite books of yours,
maybe my favorite. It's a defense of growth. And I hear the echoes of this in your assessment of where we are, that we're gonna have more stuff.
I have no doubt about it, maybe an enormously larger amount of stuff. Is that, when you say the Industrial Revolution was maybe one of the greatest things that ever happened to humanity,
is that what you have in mind?
- Well, I'll take out the maybe. But it's not just stuff. It's creativity, it's opportunity, it's liberation of women, it's human rights, it's much more than just stuff.
That's part of the case for growth. - For growth. - Exactly. You need resources to pay for making people's lives better in all kinds of humanitarian ways.
Very poor societies typically do not have a lot of tolerance, do not grant rights to women very readily. They're worse places to live, not just because they don't have, you know, the flat-screen televisions; they're worse on human rights and dignity,
and most of the other things we care about. So GDP per capita and what you might call non GDP gains, they seem to correlate by about 0.95, which to me is quite striking. So you want economic growth,
and for Israel in particular, there's a national security angle. - Yeah. - If you don't have AI, I mean, you're toast. - Yeah, or if you're Brazil, you might be safe anyway, but you are not Brazil.
- Yeah, I remember very vividly, you told me,
and I think probably our first conversation about AI,
that Israel should have its own AI initiative, and I thought that's interesting, and obviously over the last two years, I've thought a lot about that comment. There's an immense amount of AI happening, research happening here.
So there's nothing to worry about, at least in terms of effort. I'm pretty confident we're on the cutting edge, or very close to it. It's kind of an amazing technology
and innovation society here.
I mean, we may not always make the right choices, but a lot of small countries don't have that option. - Yeah, you're not toast, not yet. - Yeah, that's understandable. - It's very straightforward. Geopolitics will change radically.
- Yeah. Let's close with advice. At some point... my youngest child is,
I think, 26 or 27 right now.
So eight years ago, when he was thinking, I'm going to college,
there was a part of me that said, maybe he shouldn't go. Does he really need to go in today's world? Would he not be better off taking four years to do something extraordinary,
doing something he couldn't do because he was sitting in those 15-week-long classes in that four-year rigid experience? But I didn't give him that advice. He went, he got a lot out of it,
I think, both educationally and life-wise. But it's an interesting question whether a person should go to college these days. What is clear is that some of the advice we were giving 18-year-olds five years ago
was not good advice: you've got to learn how to code.
- Well, that turned out not to be good.
By the way, I was told that about Shalem. You know, coding should be a required class at Shalem, because, you know, in the modern world, that's where so much is happening
and you have to understand it.
So that probably wasn't good advice. But what do you tell a young person, an 18-year-old, today about this brave new world that's about to hit them? What are your thoughts?
- Tell them to learn AI. Tell them to look for what Luis Garicano called messy jobs, in a very good online essay. He said in the AI world the premium will be on messy jobs, where you do many different things that cannot be routinized or turned into formulas,
and that involve a lot of face-to-face contact in solving difficult problems with, and/or caused by, other human beings. So that would be my advice; that is my advice. I get this question literally every day. - Yeah, I think the face-to-face thing is obviously important.
People will, I think, value face-to-face even more than they have in the past. The human skills of empathy, communication, listening are all going to be important. I guess the question is, you know, what I referred to earlier
about the ability to grow in your career, right? For many people, it's not enough to just have a pleasant job that pays a decent amount. They like to aspire. They like to be creative.
They like to imagine what could be around the corner that would be even more interesting.
And that's a harder question, I think, to think about
in terms of giving advice to youth. Any thoughts on that? - Well, we're talking about advice for them, but this applies to us also. We're not done.
We can reallocate our energies at any moment. If anyone has the ability to do that, it's the two of us. So start with yourself. That's another thing I say. And I've reallocated my time and energy quite a bit.
Emergent Ventures is part of that, traveling more, doing more face-to-face presentations is part of that, doing more meetings is part of that. So try living your own advice.
And then maybe, give some more. - Do you write as much as you used to? - Somewhat less, for this reason. Though I've become more efficient generally, so my writing hasn't declined as much
as my other outputs have increased. But I do write less, and it's for this reason. And I write for the AIs.
I think, what do the AIs need to learn?
And what do they need to learn about me? They're my best readers. They're very patient.
They always understand the background to what I'm saying.
And yeah, full steam ahead. They're reading. They're listening right now. - When you say you write for the AIs, do you mean it's being used,
your writing, as training data? - Of course. But I also want to build a model of myself, so they know what I want and how I think. I use that, and people in the future can use it also.
- How are you dealing with privacy issues, if you're dealing with them at all? Do you give the AI access to all your emails, all your work? - No, not at all. Now, if you use Gmail, there's a complicated question,
like, what is Gemini reading? What can it read? Gmail is not important for me. But my dialogues with the AIs, they're very formal, very scientific.
There's nothing embarrassing in there. I don't need to restrain myself; that's what I would want to do anyway. But I would not put detailed information about your personal life into the AIs.
Probably not. I have a pretty high degree of trust in those systems,
I don't know, things can change.
Company goes bankrupt, company gets hacked by China, whatever.
I don't know, I wouldn't do it.
It's not that I'm against those companies.
I'm not, but for the time being, I just wait. Same with a lot of confidential job information, national security questions. There are all these reports that the US military uses it for actual planning.
I don't know, maybe it's an expected value positive,
but I don't have to do things like that.
I'm asking it how to read Homer and so on.
And that's fine. The world could see my logs. I think it would actually be very flattering for me. - Well, that should be your next book. You should publish the logs.
- That's right. - People would love to read those, Tyler. - I should. I don't even need to write a book out of it.
You should just have a parallel Marginal Revolution site
where you just publish your daily back-and-forth with Claude. - I send people those logs all the time. Or they'll ask me a question, and I'll say, no, wait, GPT has a better answer than I do. And I send them that.
But they still get to hear back from me. I hope they're not insulted, but I feel I'm being suitably modest. - It's funny. My guest today has been Tyler Cowen.
Tyler, thanks for being part of EconTalk.
- An honor as always, Russ. Take care.
(upbeat music) - This is EconTalk, part of the Library of Economics and Liberty. For more, go to econtalk.org, where you can also comment on today's podcast and find links and readings related to today's conversation.
The sound engineer for EconTalk is Rich Goyette. I'm your host, Russ Roberts. Thanks for listening. Talk to you on Monday.


