This is the TED Radio Hour.
Each week, groundbreaking TED Talks.
Our job now is to dream big. Delivered at TED conferences to bring about the future we want to see. Around the world, to understand who we are. From those talks, we bring you speakers and ideas that will surprise you. You just don't know what you're gonna find.
Challenge you.
“We should have to ask ourselves, like, why is it noteworthy?”
And even change you. I literally feel like I'm in a different world. Yes. Do you feel that way? Ideas.
Worth spreading. From TED and NPR. I'm Manoush Zomorodi.
On the show today, building AI that puts humans first.
Fifteen years ago, millions of people around the world might have had their first interaction with AI when they talked to... My name? It's Siri.
Yes, Siri. Apple's voice-controlled virtual assistant was added to new iPhones in 2011.
“That was a year after Apple CEO Steve Jobs had set his sights”
on the technology and the people who'd been building it. He kind of surprised us. He literally called us on our iPhones at work. And it's like, "Hey, it's Steve." "Yeah, sure you're Steve." Right.
Tom Gruber was Siri's chief technology officer and head of design.
Steve Jobs invited Tom and his two other co-founders to his house. I mean, imagine being in Steve Jobs' house in Palo Alto. There's an Ansel Adams on the wall. There's a beautiful Carver amp. Like, the taste this guy had. And it's all private and quiet.
And for three hours we talked about what it would be like to build products together. And the reality distortion field completely worked. That's why we were so seduced. They were made an offer they couldn't refuse and sold their company to Apple.
Who, we should mention, is a financial supporter of NPR. Siri was folded into Apple products and debuted on October 4th, 2011. Celebrities were ready to demonstrate how seamlessly this little voice could fit into our everyday lives. "I'm going to clean up tomorrow."
"Okay, I'll remind you." "Find me a store that sells organic mushrooms for my risotto." "This organic market looks pretty good." "Timer for 14 minutes." "Okay, 14 minutes and counting."
It was one of Jobs' last projects ever. The day after Siri launched was the day Steve Jobs died. And apparently, we're told, he did get to see a demo. So that was 15 years ago. At the time, were you worried about the ethical considerations,
since people did attribute human thinking to Siri? I mean, did you start to think, "Hmm, this is at the forefront of getting this kind of technology into regular people's hands. We really need to think about what's okay and what's not okay"? "Well, yeah, absolutely."
If people took Siri too seriously, it means they're out there on the part of the curve where people are inclined to see agency in inanimate things, more than other people. In general, I saw that, yeah, of course AI was moving ahead. While I was at Apple, the deep learning networks came to power. We were then saying, "Look, we've got to lay down the ethical foundation, because the
stuff is coming fast, and we need to make sure that we don't use AI to exploit people, that we use it to augment and work with people." In the fall of 2016, big tech companies, including Amazon, Facebook, Google, IBM, and Microsoft, competitors with each other, came together to form a non-profit called the Partnership on AI to Benefit People and Society.
Apple soon joined as well. Tom Gruber was their representative. In 2017, he shared his thoughts on what their goals should be from the TED stage. "I'm here to offer you a new way to think about my field, artificial intelligence.
“I think the purpose of AI is to empower humans with machine intelligence.”
As machines get smarter, we get smarter. I call this humanistic AI: artificial intelligence designed to meet human needs by collaborating with and augmenting people." That was around the time you gave your TED Talk, in which you introduced the idea of humanistic AI. "Can you define what that was in your mind?"
The idea was really like, "Where do we stand in the world of the future of AI?" There were really two paths I saw. One path was what you could call machine intelligence, or machine-centric: delighting in and celebrating how smart machines were getting relative to humans. There were businesses being formed to use that to automate human work.
Then the other way of looking at it, which I call humanistic AI, which wasn't...
A lot of people had this idea, was that the purpose of AI should be to actually help people do things they're trying to do
by either augmenting their intelligence or collaborating with them as an intelligence. It turns out that, down the road now, that was a watershed, because now we see companies that are raising money by the billions with the explicit goal of automating white-collar work. And then we have other companies that are raising billions of dollars saying that our job is to help solve some of humanity's big problems and make people smarter.
“So that's why I wanted to give it a label, and that's why I call it humanistic AI.”
"Yeah, I mean, I have to admit, I was in the audience. I didn't get it at the time. I was like, this guy lives in the future. But the future came pretty fast. 2022, 2023: LLMs like ChatGPT and Claude and Perplexity. Regular people started using them. And do you get the sense that people get it now? Oh, I think so, now. It is 2026, and in the USA, AI anxiety is at a fever pitch.
From fears about the destruction of white-collar work. "What is happening? Why is this happening so fast? I value my brain. I value my ability to think. I don't want it outsourced." To protests in towns where massive data centers are being built. To debates over the government using AI as part of its military operations.
And Anthropic refusing to allow its powerful AI models to be used for autonomous lethal weapons.
For those who have been calling for ethical guidelines all along, this moment represents an opportunity. It's time they say for these conversations about how AI is built to move beyond Silicon Valley to the public. Because AI is no longer a futuristic possibility, it's here now, and decisions need to be made.
Recently, a declaration of human rights for the AI age was signed by a cross-section of leaders from across the political spectrum, including Ralph Nader and Steve Bannon. Ensuring AI allows humans to flourish could just be the one idea that everyone, no matter their politics, can agree on. Making sure tech is built to serve humanity has been Tom Gruber's goal for decades.
It all started when he was studying psychology and computer science in college in the 1970s. I was a little frustrated that the psychology of the day didn't have very good experimental methods to truly understand the mind. There wasn't really cognitive science yet. Then computers showed up at my university in the late 70s,
so I was able to start programming, and just by reading some papers and so on, discovered artificial intelligence. And so I said, wow, this is the way to start studying intelligence: by making it. A few years later, Tom began a graduate degree focusing on cognitive science and AI, working to build machines that could mimic the mind, at least a little bit. The models that we could build then were nothing like human intelligence.
They were just crude approximations.
“So then we decided, okay, well, how would you augment human intelligence?”
And one of the ways is to shore up where there's some kind of impairment in human capability.
And so the first project I did was to build what I call the communications assistant for people
who can't speak, who have, like, cerebral palsy in this case, but also ALS and folks like that who have neurological conditions that make it hard to speak like we're speaking here. And so we built an AI program that had an LLM, a little language model. Maybe a T-LM, a tiny language model. It actually predicted the next word and the next phrase, based on a single motor action, like a switch on their muscle.
They could actually communicate in sentences. So was that voice then? Or no, not voice? Yeah, it was voice, even then. But it also sounded like a strangled Swedish person back then. It came from something called a Votrax voice synthesizer. Very primitive, and no prosody.
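The word-by-word prediction Tom describes can be sketched in miniature. This is a toy illustration, not the actual 1980s system: a bigram frequency model ranks candidate next words, and a single switch action picks one, so a user can build a sentence with one motor action per word. The corpus and helper names here are invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for whatever text the real system was trained on.
corpus = "i want to go home i want to eat now i want water now".split()

# Build bigram counts: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def candidates(prev_word, k=3):
    """Return the k most likely next words after prev_word."""
    return [w for w, _ in bigrams[prev_word].most_common(k)]

def compose(switch_presses, start="i"):
    """Each entry selects the nth-ranked candidate for the next word,
    modeling a single-switch interface that cycles through suggestions."""
    sentence = [start]
    for presses in switch_presses:
        opts = candidates(sentence[-1])
        if not opts:
            break
        sentence.append(opts[min(presses, len(opts) - 1)])
    return " ".join(sentence)

print(candidates("i"))     # → ['want']
print(compose([0, 0, 0]))  # accept the top suggestion each time
```

With a real vocabulary the ranking would come from a proper language model, but the interaction loop, predict, present, select with one action, is the same idea.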
But it was voice. I mean, this is pretty common, right, Tom? A lot of extremely hard technical problems are often solved first for people who need them the absolute most. But then, once the tech gets going, often developers find a way to get it into the hands of all of us. Is that what happened with Siri?
Siri wasn't motivated so much by disability, but it was definitely about a kind of handicap. Imagine you're in a car and you're trying to text somebody, or you're trying to get directions. And you are cognitively loaded. Meaning you're concentrating, big time.
“Yeah, you should be concentrating on the road.”
And if you're distracted, it's dangerous. You are not supposed to touch it. So it's as if you can't use your hands. Yeah.
You're not supposed to look at it.
And so how would you use your mobile phone?
“Well, you have to have a voice interface.”
That's a dialogue. And so that's the kind of thing that Siri was built for. Where are we now with AI in our lives? It's not just a voice in the room anymore. We're sort of at a tipping point in terms of understanding how it might change the way we live.
No, absolutely. We are at a tipping point.
I mean, everybody now has access to a real amazing intelligent partner to help them do things in their lives.
It's almost free. It's unbelievable. And it's the same conversational user interface on the front end, so that's really kind of held steady.
“But the back end has gotten extremely smart.”
Like, you know, I mean, I shouldn't get this excited. I've been at AI for 40 years, right? And I'd say it's just, like, a million times different than it has been in the past. As you've already said, you know, anything that's made of words, AI will own.
I mean, who else has read everything in the world?
And can talk about it. So anyway, I am simultaneously freaked out and excited, and scared and unbelievably optimistic. However, we have to act now on the basis that AI has a superhuman ability to talk a good game. Which means persuading people to do things against their will, convincing them and so on, but also helping them in ways that they couldn't do otherwise.
We're not seeing any regulation around AI here in the United States. Where does this go, back to your idea of humanistic AI? What sort of guardrails do you think we need to be putting around it right now? Yeah, the humanistic AI framework would say that the objective function of the AI, that is, the thing that the AI is optimizing for, should be human benefit, not, say, profit or something else.
And human benefit is hard to measure. So there's no easy prescription. So we have to decide if we're going to put guardrails on a thing. We have to solve the engineering and scientific problems of detecting that something the AI is doing is harmful to humans. We have to build theories of human harm and human benefit into our objective functions.
“So that's what I think we should be working on.”
I think it's not just a matter of regulation, it's a matter of the scientific agenda for AI research.
We have a choice in how we use this powerful technology. We can choose to use AI to automate and compete with us.
Or we can use AI to augment and collaborate with us, to overcome our cognitive limitations and to help us do what we want to do, only better. In a minute: how ethics could be built into AI going forward. Today on the show, AI that puts humans first. I'm Manoush Zomorodi and you're listening to the TED Radio Hour from NPR. We'll be right back.
It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. We're spending the hour talking to Tom Gruber, one of the inventors of Siri, about how to build AI that puts humans first. Because it can feel like right now AI is on a collision course with humanity. Recently, for example, the U.S. Department of War struck a deal with OpenAI, while its competitor Anthropic pushed back over military use of its models. And people responded: some deleted OpenAI's ChatGPT in protest, or downloaded Anthropic's Claude to show support.
Tom believes that if regulation falls short, it will be on consumers to push the market towards AI that feels safer. Meanwhile, he says there are technologists who are trying right now to shape AI behavior that could prevent future doomsday scenarios. For instance, imagine a scenario where the AI escapes and takes over, maybe takes your money, maybe starts running cyber bots and attacks people and so on. And so there are things that are prerequisites to those scenarios. Like, can it lie to you effectively?
Masquerade itself as something else. Prevent itself from being turned off. These behaviors you can put guardrails around, and people are studying that. Yeah, because I feel like every so often there's some report that comes out that's like, oh, Claude manipulated the developers in some way; they've got to fix that. Oh yeah, we're just seeing the beginning of this.
So we have to come to grips with this. This is one area of safety where there's no easy answer. But I would hope that we have competition among the big AI model companies on which ones are going to be safest. So, for example, back in the old days, there was a brand allegiance to Volvo, because it really went to great lengths to be safe: the best airbags and the best seatbelts and all that sort of thing.
I think we should be able to have AI compete on how safe it is, so that people can choose the safest one.
Do you see that happening anytime soon?
“Globally, yes. We see a lot of distorted thinking at the national level here, and ideologically driven policy and so on. I think, hopefully, we'll get past that.”
But the key thing is that we still have a free market system and we still have the freedom to choose among a set of AI products.
I mean, you probably know a lot of these, mostly guys, running these companies. What do they say? Do you bring this up with them? I don't really hear much about safety from Sam Altman. You know, you don't hear it from him, or from Elon, but you hear it from Demis, from Satya at Microsoft. Definitely from Dario at Anthropic. So I think of the studios that make the foundation models.
Demis and Dario, at Google DeepMind and Anthropic, have always been safety conscious and have serious, well-funded teams working on that part of the problem. And that's the sort of healthy, normal thing you would expect from a company that's worth billions. I mean, when I first started really digging into tech and reporting on all of this 15 years ago, I have to say, I was a lot more optimistic and excited about the possibility of building these sorts of altruistic systems. We've been down this road, Tom, with social media, with the surveillance economy, the attention economy.
It seems like any chance for a tech company to take advantage of its users it will.
So what makes you think this could be different?
One thing that's different is the foundation infrastructure of AI, the models like Claude or Gemini. They're super expensive to build, in money, time, and especially talent, which is very rare. For now, anyway, they're hard to build, and there's only a few of them, maybe 10. The good news is that they're fairly omni-purpose at this point. And once you have the models, then the rest of the world builds applications on top of them.
So the rest of the world can compete on humanistic applications. Look, you can go out there into the dark web and get the most nasty evil software in the world. Or you can go out there and find lovely games or educational things or whatever. You can find everything in the application world. I think we're going to see that with AI too.
“Is that kind of how Apple created the app ecosystem, where you have to adhere to certain privacy standards?”
Yeah, that's right. That's a very good analogy. It would be great if there was something like an app store for AI that would do the work of at least minimally establishing whether an application is to the benefit of humans or not. You know, X is using AI to do evil things right now.
It's already happening. So we can't stop them. I can't stop them. You can't stop them. Only a government could stop them, and they're not stopping them this year.
Right? So, okay. But you don't have to use X. You don't have to use Grok. There's a talk that I know you're giving that says that instead of talking about giving up our privacy and self-determination,
instead of Big Brother, which is what tech has sort of become, surveilling us all, we need to think about Big Mother. Yes. Exactly. I'm trying to figure out what would be the right symbol.
“And I found, I think, the African elephant is kind of a cool Big Mother.”
It's smart, protective. The matriarchs, the ones that run the herds, are the wisest and smartest and so on. So imagine a female African elephant with her calf, and think about her value alignment. She will do anything to protect that calf. Obviously humans, too.
Human mothers, they nurture their children. They teach them right from wrong and truth from falsity. And then they show them skills to survive in the environment they're in. That's what AIs can and should do. Machines can now have access to everything in your life that's digital.
And with that, you can build amazingly good recommendations about how to use your attention. Maybe give you insights on things you could discover or learn from.
At the same time, that same data is extremely powerful if it's used in a surveillance economy against your interests, to addict you.
To make you buy things. This is why it's a real ethical choice. How you optimize the AI is actually a massively important societal choice today. It's not a technical thing that we leave to the engineers or to the boards of directors of companies. That's to be done at the societal level.
And so I think Big Mother AIs would have a lot of data, like mothers know everything about their kids. But mothers are aligned with their kids' interests. That was Siri inventor and AI pioneer Tom Gruber. You can watch his TED Talk at TED.com.
As Tom said, there are people around the world who are trying to build AI with our best interests in mind.
Sure, knowing more about our minds than mere humans can, but using those superpowers to make us better without trying to take advantage of us.
Priya Lakhani says she is one of those people.
“I think AI could be the single most positive technology to impact everybody's lives in an educational context.”
But it has to be developed responsibly. Fifteen years ago, Priya was an entrepreneur building schools in India. But back home in the UK, she found that schools were struggling, too. Twenty percent of children cannot read or write well enough in the United Kingdom. There were some statistics that were just as poor to do with mathematics.
And it was just a real shock to my system. I thought, there's a fundamental problem here. And if we can't fix this in the UK, then we're definitely not going to be able to help the sorts of places where I'm trying to make a difference. And so, just out of pure curiosity, I went to schools. You saw two big problems.
You may have richer resources, more teachers, and different ratios. But once you close the door of the classroom, you will often find there is a teacher stood at the front, delivering a sort of one-size-fits-all education.
And then the second problem that was prevalent across all schools was teacher
workload; it was a massive problem. There were teachers who were stressed, they were anxious, they had far too much to do, to the point where they were considering leaving their roles. We were tens of thousands of teachers short in the United Kingdom. You know, how can we have a professional who has trained for this position enjoying their role, inspiring children, passing on knowledge of a subject that they're really passionate about,
and being able to do so confidently, and in a way where they're not exhausted? She realized that better tech in classrooms could be the solution, if it could curate a tailored education experience for each child and give teachers instant data about where exactly each student was struggling.
So she looked around at what was on the market.
There was one very large company in the US that was talking about it already. It was touted as this massive AI system for education. So, similar sounding: personalizing learning, reducing workload for teachers. But they were using a type of machine learning that is very common in retail systems. So, recommender systems that track user behavior and then recommend what you like.
Right? So if a student is on the system and they're learning biology, you get a lot more biology. Kind of like TikTok. Right. The problem with that is that that is not going to work in education.
“That's because in education, sadly, sometimes you need to give someone what they don't like.”
It's often you'll find, particularly with younger people, that they don't like the subject that they're struggling in. Right. You can't ignore that. What you need to do is have a system that's a little bit more complicated: give them something they like to keep them engaged, but you do have to hand them the foundational prerequisite knowledge that they're missing, in order to increase their knowledge and skill set in areas where they're struggling.
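The contrast Priya draws can be sketched roughly as follows. The topic names and the prerequisite graph below are invented for illustration; this is not Century's actual model. A retail-style recommender echoes back what the student already engages with, while a prerequisite-aware one first routes the student through missing foundational topics.

```python
# Invented example prerequisite graph: each topic lists what must be
# mastered before it can be productively studied.
PREREQS = {
    "pythagoras": ["roots_and_powers"],
    "roots_and_powers": ["multiplication"],
    "multiplication": [],
}

def engagement_recommender(liked_topics):
    """TikTok-style: just serve more of whatever the student likes."""
    return list(liked_topics)

def prerequisite_recommender(target, mastered):
    """Surface unmastered prerequisites (depth-first) before the target."""
    plan = []
    def visit(topic):
        if topic in mastered or topic in plan:
            return
        for pre in PREREQS.get(topic, []):
            visit(pre)
        plan.append(topic)
    visit(target)
    return plan

# A student who likes biology just gets more biology...
print(engagement_recommender(["biology"]))  # → ['biology']
# ...whereas a student assigned Pythagoras, who has mastered
# multiplication but not roots and powers, is routed through the gap first.
print(prerequisite_recommender("pythagoras", mastered={"multiplication"}))
# → ['roots_and_powers', 'pythagoras']
```

A real system would also weigh engagement, as Priya notes, but the key design difference is that the learning plan is driven by the knowledge graph, not by clicks alone.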
And so it was going to be a more complex system from the outset. And it didn't exist. And that is where the journey to create Century started. Century Tech is the education technology company that Priya founded in 2013. She says the platform's goal is to support overworked teachers, not take their place. There is, of course, a big debate in education right now about the role of AI in teaching.
Some believe funding should go straight into schools, not into technology that's trying to make money off of learning.
“Others think the only way to make education accessible to every child will be with the help of AI.”
That's the camp that Priya is in. But she says that doesn't mean replacing the hard, even grueling, work that happens in the classroom between teachers and their students. Here's Priya Lakhani on the TED stage. We need to combine artificial intelligence with neuroscientific theory and the learning sciences to learn how every single brain in this room learns. Because if we can fix learning, we can improve outcomes.
We can personalize education for every single one of us, and provide intelligent insights to teachers to reduce their workload. So 12 years ago, I built a team. They built the technology. It exists; students use it in over 140 countries. I thought it would be really important to share with you some student feedback that I have on our platform, because it tells us what children's expectations are when they use an AI education partner. So I get feedback like, "I think Century will help me achieve things that I thought were impossible."
It's a golden child, right? "My life's purpose has been fulfilled." And then these sweet, lovely, innocent children send me messages like this: "I don't like this website. It makes me able to do my homework."
Wait, am I reading that right?
I'm not joking. "You just need to give me no work. Give me a button to do the work for me."
“If I was to go into a classroom as a student to use Century Tech, what would my experience be like?”
You know, the teacher would walk into the classroom and say, right, everyone, log on to the machines. And they could either set them learning material, or, usually, they set them a set of questions. The teacher has their dashboard open, and they can see how the students are performing. So information would light up about who is struggling. The teacher would then walk around the classroom and be able to make those interventions in real time.
Okay, so this student: why did you answer that question in that way? And the teacher is then using their expertise. So that's a blended learning environment. The teacher can then utilize that information to not just stand in front of the class, assuming everyone's taking in the knowledge, asking people to put their hands up and spot-checking the knowledge. You can walk around and have that very targeted intervention with students.
“Okay, so that's in the classroom, but what about at home?”
So half of them will use it as the teacher sets homework. So imagine you've been given this assignment by your teacher; it's on Pythagoras's theorem. You may be a student where, before that, it has automatically given you some math work on roots and powers, because it knows that you don't understand that, and there's no point in you doing the Pythagoras work if you don't understand roots and powers. In the same way, if you've mastered that work, it can then do what's called a smart recommendation and stretch you as well.
So some people will use it for assignments in that way. And then others will use it in a flipped learning way. So teachers will be planning on teaching a lesson, and they'll say, right, we would like you to learn the lesson on Century the week before. Students do that in their own time. Teachers then receive the information about how the students have done.
And then that lesson the following week is a far more targeted lesson as to what did they understand, what didn't they, and so on. So 50% of the platform is about the teacher, because we fundamentally believe that empowering teachers is one of the best ways to improve education. They should not be doing data analysis off spreadsheets in the evenings, which is what many of them sadly have to do. They should be receiving insights from our platform instantly, so that they can then go into the class the following week,
focus on that particular concept, but not have to just teach it one-size-fits-all. They should be able to say, right, I can see that here is the misconception a third of you have.
Or here, they can pair off the class into different peer groups. We have various dashboards that show you which kids are excelling in this particular topic, and which ones are really struggling. And all of that data is turned into actionable insights. So it's a highly flexible system, and the reason that has to be the case is because teaching is as personalized as learning is. I don't think it's right to build a system and say, here's the system and here is how one must use it.
Teachers are different, they're professionals, and some of them will prefer to be standing at the front and inspiring you. Some of them much prefer to be having those sort of one-to-one interactions, walking around the classroom. So you're not replacing the teacher, you're making it possible for the teacher to shine in the way that they shine. That's the idea? A hundred percent. And this is a really controversial topic for discussion in education, this replacement of the teacher, because a lot of big tech, for example, they've said this is the future of education.
I don't think they understand what education means. Education is not the transfer of knowledge from textbook into brain. Education is so much bigger than that. These schools, they're like the old village. They're providing an enormous amount to students, beyond the formula for Pythagoras. This is about augmenting the teacher, and using AI to push us all ahead. And if you can improve the baseline standard of education, we can focus on the skills that really matter in an age of AI.
The problem that a lot of neuroscientists and cognitive scientists have discovered is that we can over-rely on tools, and we can completely bypass the cognitive processes.
“Okay, I think I have an example, actually. Tell me if I'm right. I was going to spend time in Italy recently.”
And so, six months ahead of going, I downloaded Duolingo and I did Italian every single day. And I got to Italy, and literally all that would come out of my mouth was: ciao, buongiorno, cappuccino, per favore. That is it. That is highly useful Italian for someone like me. Well, yes, but it did not get me very far. And so what I can only imagine is that I was able to figure out very quickly what the game wanted me to say, or how to please whatever the owl wanted me to do.
But when it actually came to real-life application of the alleged knowledge I had put in my brain, I had none. I really was distraught. I'd spent a lot of time, and I didn't have anything to show for it. And I guess my fear is that in these classrooms, it looks like the kids are killing it on their test scores, on all the work that they're doing.
But when it comes to actually being competent adults in the, quote, real world, they may not be.
Well, this is what I like to call automation complacency.
And it's very much digital transactive memory. Yes. And the problem is that you've done it very quickly, but you haven't thought deeply about that particular answer.
“It's this sort of reliance on technology which actually kind of weakens the productive learning behaviors, right?”
It creates this unrealistic expectation about the ease of learning. So basically, people are poor judges of their own learning. So when information is presented fluently or quickly, when you're on these apps, right, you're doing it quite quickly, you're feeling quite good about yourself, you tend to believe that you understand it, even though your actual retention is low. When we come back, Priya Lakhani explains why kids need what's called productive struggle in the classroom, and how she thinks AI can help them get it.
On the show today, how to build AI that puts humans first. I'm Manoush Zomorodi, and you're listening to the TED Radio Hour from NPR. Don't go away. It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. Today on the show, AI that puts humans first. We were just hearing from Priya Lakhani, the founder of Century Tech, an education technology company.
Priya says they use AI to create friction, to make kids work to learn, unlike chatbots that try to replace human thinking or simply transfer knowledge. Here she is on the TED stage.
Think about how we felt when we first used ChatGPT. I think we all thought, "Wow, I never need to do any work ever again. This is amazing."
“And then it hallucinated, and I think we've ended up with this sort of sinking realization of acceptance, right?”
That the shortcuts don't really replace the work. They're very helpful, but we still need to learn, and we need to think. Now, when we read those long answers that an LLM chatbot gives us, it feels very fluent. The problem is that that fluency we often mistake for learning. What we actually know about learning is that learning requires what researchers call productive struggle. It's this sort of mental effort that builds understanding. This sustained mental effort strengthens the parts of the brain, and it's positively correlated with growth in the brain.
Durable learning does not come from shortcuts; it comes from certain types of effort, and this is why AI is amazing for education. Because AI can spot patterns in how we all learn, it can force you to generate an answer rather than just reveal the answer, and it can provide amazing structured feedback against expertly designed rubrics from teachers. So, AI that's effective in education doesn't spit out the answers or nudge the students towards the right answer, like in gamified apps or chatbots. Like, that might be fun, but it's not terribly educational, you're saying.
Exactly. So these sorts of AI-generated explanations, the very quick learning on quick apps, amplify that effect, because they make you think that it's clear, it's confident. Particularly if they're anthropomorphizing the technology, so it feels like it's very human. You're kind of conversing with it, you're talking to it. Think about that working memory, right?
Now, you can retrieve that for a short period of time. The problem is, can you do that later?
So, we can be cognitive misers, trying to conserve mental effort, and in a learning context, if the technology replaces the very thinking that a student needs to develop, then learning can suffer. We've actually had teachers come to us and say, "Can you put in more coins and badges and characters? The kids love it." But the point is, if you learn, "Oh, when I put in effort, I get a badge," you then start to build up an extrinsic value for learning. And that's actually really unhealthy. What we need people to have is intrinsic value.
I'm learning for learning's sake because learning is good, and learning agility, just the ability to learn how to learn. I believe fundamentally that those overly gamified applications are bad for young people.
“That's why we don't do it. And it's a business, right? Companies are very focused on engagement.”
You're measured by your daily actives and your monthly actives: how often are they coming in, when do they engage, why do they log off? Because the more engaged people are, the more money you're going to make. There's a perverse incentive to overly engage them.
So what are you measuring instead?
My key success metric is how quickly can I get this kid off the screen?
And so we give guidance that's completely the opposite of a typical company. We say to schools, "You really should not be on our system for more than about an hour and a half a week." If you're sitting there for nine hours a week, the system is actually not performing at the level we would hope it to be. It's really challenging, because the generative AI market is a race by the big tech companies. So when it comes to gen AI, how they build those tools is going to mean everything for the future of human flourishing.
And it really sits with a handful of people in this world. Look, I run a tech company. I have shareholders and investors, but we're a social enterprise.
We have a very different set of metrics as to how we're measured.
There are so many instances where, for example, we have said, "No, we're not going to build it in that way." Our neuroscientists will turn around and say, "No, we can't do that. That's generally known to be bad for kids."
“This is why educating the public about AI is really important, which model was it built on?”
What was it trained on? Which data? What is it trying to do for you? Is it beneficial for you? Is it providing you with something where you are exercising your brain, rather than just transferring those skills over to the AI? If you can answer these questions, you are going to be very, very well equipped to decide whether it is good for you or bad for you. That is powerful.
That was the founder of Century Tech, Priya Lakhani. You can watch her full talk at TED.com. Priya says her technology won't make teachers obsolete, but there are fears that educators, and many, many more careers, will be replaced by AI that can do their jobs. But others say that's not how innovation works: the workforce won't disappear, though it will change. That's the view of Vlad Tenev, co-founder of Robinhood, a financial services app. He's exactly the kind of Silicon Valley billionaire you'd expect to tell you everything with AI is going to be fine.
But his argument isn't "don't worry"; it's "look at history," because what we think of as a job today won't be the same tomorrow. Let's take a moment and reflect back upon our lives when we were 20 years old. Think about the opportunities for work and career that lay in front of you.
“How many of you had a pretty good idea of what you wanted to do for your career?”
Not too many. How many were overwhelmed by all the options? I know I felt the same. Vlad recently gave a talk called "AI Is Coming for Your Job. Now What?" He told his own story of being 20 years old and graduating from Stanford University with a degree in mathematics.
Nobody had sat me down to tell me that my pure math major wasn't going to be the most desirable qualification, and I probably wouldn't have listened if they did.
Now, my first month in graduate school, Lehman Brothers went under the start of the global financial crisis.
Most of my friends, particularly the ones that felt the most secure, found themselves packing up their cubicles. Some of us wondered whether the economy would recover at all, or whether we were in store for another decade-long great depression. But amidst the uncertainty, some of us found a source of optimism. The iPhone App Store came out that very same year, 2008.
“And the idea that anyone could build a digital game or service that could be delivered to millions of people, on a device that lived in their pockets?”
Well, it was the digital equivalent of when the first pioneers found gold in California in the 1840s. Vlad was all in. I still remember when the instruction manual for how to build iPhone apps was released. I was up all night reading it, learning, trying to understand. I saw an opportunity for a new level playing field.
Pretty much everything I've done since then was a product of both the economic malaise and the technological optimism of the time. But times have changed. The average 20-year-old today also has quite a bit of fear. But this time, emerging technology is not the antidote to that fear.
It's the source. And they're asking themselves, "Will that career I'm looking at even be around in 10 years?"
One reason why it feels different this time is because AI, unlike the iPhone,...
A few years ago, I founded another company with the mission to build mathematical superintelligence. Artificial intelligence that can reason and solve problems better than any mathematician.
I always thought of mathematics as the pinnacle of human intellectual activity.
So a superhuman AI at mathematics could potentially be superhuman at everything. Combine that with my day job, which is running a global financial services platform, and it's led to me spending a lot of time pondering one very important question.
“What do we do in a world where the vast majority of today's jobs are gone?”
And I want to analyze this question rationally without fear and hyperbole.
One way to do it is to look back through history and see if there's been a time where we face this type of job disruption before at anything near these levels, and how we as humans have navigated it.
Now, I'm a technologist, not a historian, so with that caveat, let's go back in time to a world a 20-year-old would have known tens of thousands of years ago. In approximately 50,000 BCE, most people were hunters, gatherers, or toolmakers; very few of us today know anyone with those job descriptions. The main occupations of the Paleolithic era are largely gone, but they didn't disappear overnight. Instead, they were subdivided into lots of other, more specialized jobs.
“The next era, the Neolithic era, saw all kinds of different vocations popping up, thanks to advances in how people stored their food and built shelter, domesticated plants and animals.”
Humans had mastered a few new things: farming, keeping livestock. The invention of these things allowed us to spend more time doing what we consider creative work and less time on pure survival and subsistence. And this opened up a lot of new jobs. You had artisans, like weavers and potters; you had farmers. These jobs, too, are largely all gone. In the US today, we should say, farmers make up less than 2% of the workforce. Let's move ahead through the changing jobs of the Bronze Age, the Iron Age, the Dark Ages, the Renaissance, and the Age of Exploration.
In each age, the same thing happens: jobs are lost, sure, but more, different jobs take their place. Too many jobs to count, a lot of them are gone. Any blacksmiths or explorers in this room? I didn't think so. That one might come back with space exploration. If you think about it, most of our last names are from jobs that our families no longer do.
“Potter, butler, butcher, smith, any fletchers in the audience?”
Anyone know what a fletcher is? I was going over my talk this weekend, and my son said, "I know what a fletcher is, Dad. He plays Minecraft." A fletcher is someone that makes and sells arrows, so if you know someone with that last name, their relatives were arms dealers.
Now, my point in all of this is that job disruption is an essential quality of human evolution.
We want work to disappear, because it means that we're doing our jobs as humans, making our lives better and easier. So with AI, maybe it's not the job disruption itself that makes us so nervous, but the speed with which it's happening. So why don't we accelerate: we're going to go right through the Industrial Revolution into the modern era. In the 20th century, a young person, in the wake of companies expanding and automating, would have found an entirely new menu of jobs that their parents never had access to.
So instead of working in a factory, they would have had the selection of a wide assortment of new office jobs. And some of the parents were probably thinking, "You sit in a chair all day. That's not real work." Now the internet era, we see all around us jobs that didn't exist before. We have people getting paid to play video games, eat at restaurants, travel, talk to their friends on video. Those last people we call podcast bros.
We take our jobs very seriously, but if you took someone from the 20th century and they could peek into our world today, they would think that all of the predictions around technological unemployment came true.
“So where does all this leave our 20 year old at the dawn of the AI era? One feature that we found is recurrent throughout generations is this feeling of exceptionalism.”
We'd like to think that somehow we're at a discontinuity, where history ends and we're in a new world with no precedent. And maybe it's true this time; we really don't know if we're building a super assistant or an apex predator. Certainly, all change and disruption brings with it a painful transition.
Jobs will disappear, perhaps they'll disappear at an accelerating rate.
But at the same time, we see one undeniable trend.
“There's going to be new jobs and lots and lots of them across every imaginable field.”
Where the internet gave people worldwide reach, AI gives them a world-class staff. The jobs will not look like real work, much like our current jobs would have looked like leisure to our predecessors. And I bet that we would feel the same about our descendants in the future. And I can tell you with near certainty that a humanity that's capable of building a superintelligent AI also has the creativity to navigate through this potential job doom-and-gloom scenario.
“Although we'll never stop worrying about it, being hyper-vigilant about threats to our survival is a key part of evolution, what makes us human.”
I can tell you that you shouldn't let predictions about future job disruption keep you from doing something you feel very passionately about. You know, when I was a kid in the '90s, teachers discouraged me from becoming a computer programmer. Back then, it was a common thought that all those jobs would be shipped off to China. So, even where it seems obvious, sometimes our predictions of the future end up being completely off.
Humanity has always excelled at providing itself with meaning and purpose, even in the darkest and most uncertain of times.
I feel very, very confident that the 20-year-olds of the future, perhaps in collaboration with AI, will continue to build new things, which we're simultaneously going to be scared of, but also excited by. That was Vlad Tenev, the CEO of Robinhood, a mobile financial trading app, and one of the places where he thinks people will go when they no longer need to perform labor to earn an income, but instead do things like trade on various markets to earn money. Critics have called the platform risky, especially considering that billions have been invested in AI companies that aren't profitable yet.
You can see Vlad's full talk at TED.com. Thank you so much for listening. Make sure you watch some of the videos that we've been making with our guests. If you're on Instagram, you can find them at Manoush Z, that's M-A-N-O-U-S-H-Z. This episode was produced by Phoebe Lett and edited by Sanaz Meshkinpour and me. Our production staff at NPR also includes Matthew Cloutier, James Delahoussaye, Fiona Geiran, Harsha Nahata, Rachel Faulkner White, and Katie Monteleone.
Our executive producer is Irene Noguchi. Our audio engineers were Damian Herring and Simon Jensen. Our theme music was written by Ramtin Arablouei. Our partners at TED are Chris Anderson, Roxanne Hai Lash, and Daniella Balarezo. I'm Manoush Zomorodi, and you've been listening to the TED Radio Hour from NPR.


