The DSR Network

Siliconsciousness: The King Kong v. Godzilla of A-holes and Other Big Stories of AI


Sam Altman and Elon Musk are facing off in court, but beyond the high-profile legal drama, a wave of industry-shifting stories is unfolding. Mat Honan, Editor-in-Chief of MIT Technology Review, joins...

Transcript



To stay up to date on all the news that you need to know, there is no better place than right here on the DSR Network. And there is no better way to enjoy the DSR Network than by becoming a member. Members enjoy an ad-free listening experience, access to our Discord community, exclusive content,

early episode access, and more. Use code DSR26 for a 25% discount on sign-up at TheDSRNetwork.com. That's code DSR26 at TheDSRNetwork.com. Thank you and enjoy the show.

Welcome to Siliconsciousness, the DSR Network podcast focusing on the artificial intelligence revolution, politics, and policy. Hello and welcome to this week's Siliconsciousness. I'm David Rothkopf, your host, and this week,

as every week, we're going to take a look at big stories in AI and in related technologies.

And as we do every so often, this week we're going to talk to our friend Mat Honan,

who is the editor-in-chief of MIT Technology Review, and therefore has his finger on the pulse of, well, pretty much everything. How are you doing today, Mat? I'm doing great, David. How are you? Well, you know, I live in Washington.

And, you know, every last vestige of free speech is being pruned away this week. But apparently we're still able to talk about AI, for now. Can we criticize Jimmy Kimmel this week or not?

We can criticize Jimmy Kimmel, but Jimmy Kimmel can't criticize the first lady or make jokes.

Got it. Okay, that works. So if you've got any first lady jokes tucked away, I want to hear them. By the way, she sent one of her spokespeople,

I didn't know that she had spokespeople, but she sent one onto CNN today. And he said, well, you know, the reason they should take Kimmel off is that half of America doesn't agree with what he says. And I thought, wait a second, that logic doesn't really work if only a third of America supports what the president says.

But, you know, the arithmetic may be beyond them, or me, I'm not sure which. Anyway, let me start with something that's kind of newsy, and then go into this big feature that you guys have that seems really up the alley of our listeners.

But you start off with a breaking story, or an analysis, because Elon Musk and Sam Altman are headed to court.

And I think Elon's showing up the first day.

And these are sort of titans of the industry, and it's kind of, you know, the King Kong versus Godzilla of assholes. I just wanted your take on what's going on there. I mean, I think it's fascinating. And we have a reporter covering it, Michelle Kim, who also, I should say,

is an attorney herself. You know, I won't say that it's the King Kong and Godzilla of assholes versus each other, but it's certainly two pretty interesting people who probably are going to have a whole lot of stuff come out in this trial

that they don't necessarily want to come out. That, I'm looking forward to. If you remember, when Musk tried to get out of the Twitter deal and all of his text messages with various industry players were made public during discovery, some of them were just colossally stupid. And, you know, if nothing else, this promises to be a really fun trial to cover. It also, though, has huge implications. You know, if Musk wins, you're talking about

potentially putting the brakes on an IPO,

potentially replacing the leadership of the company, potentially

meaning Sam Altman isn't the CEO of OpenAI anymore. I find that pretty unlikely. As Michelle made clear in her story for us yesterday, there are questions as to whether Elon truly even has the standing to bring this.

But I've got to tell you, it's the hot ticket in the Bay Area right now. I've just seen scenes from outside the courtroom, and there are protesters, counter-protesters,

people who are Elon stans. There are the Quit GPT people, who are trying to get people to move away from ChatGPT, or OpenAI,

I guess, in particular. They're all there.

And, you know, it's classic. You know,

just sometimes, when I think the Bay Area, San Francisco,

has become too homogeneous and has lost all its weirdness, they're all there at the trial, you know. Well, they all seem to be taking sides there. It does bring up something, though. You know, you didn't say that they were

the King Kong and Godzilla of assholes. I did. I will own that. But it does bring up this thing that is kind of a big subtext. You and I have talked about it a little bit before.

AI has been kind of unfortunate, because it's had all these breakthroughs, it's coming up with technologies that are super interesting and helping people, and yes, they'll be disruptive and so forth, as all technologies are. But the faces of AI are not super appealing.

I mean, I think the most shocking article you guys could do right now would be called "The Good Guys of AI," because who are they? Yeah, I totally agree with you. You know, Musk has a very widespread reputation

for being an asshole to work for, right? And certainly Sam Altman did not come out of that Ronan Farrow piece, or any other reporting over the last, you know, 15 years, seeming like someone you'd really

want to put your faith and trust in. But even if you go and look at, you know, Dario Amodei, who's, you know, one of the good guys.

You know, I think there's a real disconnect there

when you think about someone who's saying, we're building this

incredibly powerful technology and it's going to take away half of your

jobs. I mean, he's the one out there saying that louder than anyone. And he and the AI industry, writ large, have made people very nervous about what they're trying to do. I have a little bit of a pet theory. I mean, I'm going to take a minute to self-promote, but I wrote an essay about AI malaise.

We've got it on the site right now. I have a pet theory that a lot of this really began with the Super Bowl. I mean, people certainly already had concerns. They had fears. You already had bipartisan pushback on data centers, and it had already affected an election in Georgia.

But then the Super Bowl comes, and every other ad is for AI. And even the ones that are, like, if you think about that Anthropic ad, which was making fun of OpenAI putting advertisements into its chatbot, an ad making fun of advertisements, that was a deeply weird ad. It was creepy, you know.

I mean, it wasn't creepy because the therapist reads the ad; the notion of a therapist who was supposed to be an AI is just creepy.

And I think that a lot of the sort of self-assuredness

and disconnectedness that a lot of these Bay Area CEOs, especially, have is not doing them any favors in terms of public opinion. And, you know, there's horrible stuff happening that they just seemingly can't or won't get their arms around.

I mean, Florida is now looking at criminally prosecuting an AI over two different cases where, in both cases, the AI had, you know, advised someone on how to commit, or cover up,

a suicide. And, you know, it took us a while, I think, to come to the conclusion that maybe Mark Zuckerberg and Jack Dorsey and the other social media CEOs were weirdos, but

you can really see it pretty quickly with the AI industry. And I don't think that this trial is going to do anything but reinforce that. I think people are going to see a lot of, you know, self-dealing, and just pettiness, and concern

over putting themselves above

all of these other issues. I think, you know,

one of the things that I think is so interesting about this

is that OpenAI was founded as a nonprofit

to promote safe AI, and Elon is basically

saying, I want to force them to uphold that. Meanwhile, he's got Grok out there making images of teenage girls in bikinis, you know, I mean, and promoting

Nazi content. Absolutely. Yeah. You know, he's hardly the guy in the white hat that he's trying to present himself as here. But, you know, it's interesting you bring up the Super Bowl analogy, because those of us who are old enough remember the Apple

Super Bowl ad, which was called "1984," right? Yeah. And the whole idea of the ad was that here was this technology that was going to empower you to stand up to the man, to stand up to the establishment, that was going to help the little guy.

And frankly, there's a case to be made that some AI could help people do that. But there isn't anybody making that case. And furthermore, all of these people who were politically tuned out five years ago, who just didn't want

to have anything to do with politics, are now the biggest single category,

second biggest after pharma, of lobbying in Washington.

They've opened offices here. They've bought homes here. And they've all made a terrible mistake. I've been in Washington 30 years, but I'd like your reaction to it.

And the mistake is what I'm going to call the Bibi Netanyahu mistake. And Bibi Netanyahu, speaking of somebody else who wanted to pal around with Elon. But the Bibi Netanyahu mistake was this:

for years, Israel had a special relationship with the United States because it didn't play politics, in the sense, I mean, it tried to advance its issues, but it did not associate itself with one party. But if you look at what the AI

moguls have done, whether, you know, any of the people we've spoken about, particularly, you know, Palantir and others like that, they are seen as more MAGA than Trump. Yeah.

And so all of a sudden they've become a political issue.

And I think we're coming into an election season in November

where all of a sudden we've got AI on the ballot in some places. And it's not a marginal issue like some technology issues. There are big groups of voters, particularly younger voters,

under-40 voters, who will say, if you support AI, I can't support you. We are against building any data centers. We are against it. And to me, all of a sudden,

they've made the mistake of making AI a MAGA or a Republican thing in a country that, as far as we can tell, is about to have at least one house of Congress that's Democratic, maybe two,

a good shot that a Democrat will be president, and they will be seen as the funders of the other side. And I'm just wondering what your reaction to all that is. Oh, I think you're absolutely correct. You know, Greg Brockman, who's the president of OpenAI,

gave, I think, $19 or $20 million to Trump in a move

to become adjacent to the president. And you can say that Meta is kind of an also-ran in AI, but they're spending money because they're trying not to be. You know, I think the

Brockman donation is driving part of this Quit GPT movement, where people are trying to move away from OpenAI. And Palantir, I mean, I don't even know what to say about Palantir. Palantir has become so closely associated with... Well, say something, because Palantir, in the minds of a huge portion of the population, is

fucking evil, you know. The leadership there is crazy. They're putting out crazy things. They don't want women to vote, they're worried about the anti-Christ, and they're embedding themselves

in the intelligence community, the defense community. They're providing technologies that let people's IRS data be used against them. You know, they are as vilified as any company that I can think of in recent American history. A hundred percent, and I absolutely agree with you on that.

I think Palantir has an image problem.

I think the question is, does it matter to Palantir?

You know, did it matter to Raytheon? I won't say that Raytheon was as vilified as Palantir is, but I think that Palantir views itself as being of that mold.

Raytheon, all those defense contractors, always had Democrats and Republicans ...

gave money to Democrats and Republicans, played both sides of the fence,

and didn't set themselves up for a series of Democratic House investigations next year that are going to ask, how did they get this no-bid contract? How did they get this? Who are they involved with?

And that's what's going to happen to Palantir, and people are going to start saying,

we have to take the contracts away from them, because that's just a Trump front. You know, I'm not going to pretend to know what's going to happen in D.C. But I talk to plenty of people who work in tech all the time, and I do think that Palantir in particular is becoming a toxic brand

if you're an engineer. I mean, not for everybody; certainly some people are just going to go where the money is.

But it's the kind of thing that I hear about more and more,

where people will make little side jokes, like, at least I don't work for Palantir. And, you know, no matter what happens in Washington, I don't think Palantir is going to be able to walk back the last year of its existence and explain it away, donations or not. I mean, even if they hadn't donated to anyone,

just some of the things that they've said publicly, that manifesto they rolled out a week ago, it was just weird. It was beyond weird, right? Because if they are as influential as they are, it's dangerous.

If this is where they're trying to push the country, people are going to push back.

I think you're right, and it's also even happening internally there,

which is amazing to me. You know, there was a lot of reporting around

what was happening internally, in their chat rooms. I don't know if it's actually Slack or not; I'd be surprised if it was actually Slack. But it was happening in their internal systems, where people were basically saying, you're making it harder for me to do my job and go out and sell Palantir.

A number of years ago, in another lifetime, at a different organization, we did a lot of reporting on Palantir. This was more than 10 years ago now: what they were trying to do, and what they were trying to become. And even if there were already concerns about where they were heading

at that time, it was a much more measured company. The messaging was, you know, we are a company that's trying to responsibly help keep Americans, keep the world, safe. Yes, we do these defense technologies,

but really we're just helping people understand what's happening. And it was a more palatable Palantir. The mask-off Palantir, I don't think, is going over well, either with political constituencies, or the workforce, and certainly probably not the general public.

I think people just think of them as, you know, a merchant of death at this point.

This podcast is underwritten in part by the U.S. Embassy of the United Arab Emirates. Its editorial content is completely independent, and the views expressed are exclusively those of participating experts. It is presented live without editing. For further information about the UAE's efforts in the areas of artificial intelligence and technology, go to the website of the embassy at www.uae-embassy.org

and search for UAE-US tech cooperation. We thank them for their support.

We thank everybody who is supporting this podcast for their support, and we look forward to it developing and growing over time, because the issue is so important. You know, another thing I don't quite understand,

and then I'm going to change the subject, but another thing I don't quite understand is why there haven't been a whole host of lawsuits against the boards of companies that give Elon these kinds of paydays

or let Palantir leaders get away with where they are.

Because, frankly, the board has a fiduciary responsibility

to the shareholders to protect them from things that might undermine shareholder value, morality aside, ethics aside, and they're just not doing it. But these boards, maybe because of the way they're compensated,

maybe because of the way they're selected, give these guys free rein. Yeah, I think some of the boards would argue, we've got to give Elon his payday because Elon is what makes us valuable.

Elon is who returns the value to the shareholders by his continued presence here, and were he to walk... well, he's not going to walk. But yeah, I take your point. Yeah, but also, I mean,

there's giving him a payday and there's giving him a trillion dollars, right? I mean, it's like, you know,

these people say, "Oh, no, you need to have billionaires

because people need the incentives to do the work." Right? Well, how much incentive do you need? I mean, if I said, Mat, you know, if you work really hard,

you can have one billion dollars,

would you consider that adequate to show up every morning? Probably. But you know, I'm in a different situation. You know, yeah, I didn't [inaudible]

at 21 or whatever. Well, I agree with you. This is sort of related, but, you know, I do think that there are some flaws with the proposed legislation,

but, like, the billionaire tax in California, I think, is pretty telling. You know, Sergey Brin, who was out, you know, doing airport protests in 2016, has now moved to Nevada,

has his new girlfriend, formerly of Vanderpump Rules, and is apparently pretty pro-MAGA. And, you know, again, it's an optics issue,

but I think when you see folks like him out there arguing against taxing billionaires, or the billionaires throughout California arguing against taxing billionaires, it mostly makes people want to tax billionaires.

Yeah, well, I think that's partially because

billionaires were an abstract concept at one point,

but now they're in your face, right? And the face is not a great face. So on the homepage at Tech Review, you have a big feature called "10 Things That Matter in AI."

It's been up for a couple of days now, part of your great coverage of AI there. And I thought it would be good to walk through what matters in AI right now. So let me start with the question,

what are your big takeaways from it? Well, you know, if you don't mind, I'd love to zero in on one thing, which is the AI resistance,

which I think is sort of the most interesting thing happening in tech right now. But when I think about what matters in AI right now, what we're trying to get at is this:

everything is changing so quickly. Will Heaven, who's our senior editor for AI, said something really interesting at this EmTech AI conference we had

last week. You know, he's got a PhD in computer science. He's been covering AI for 15 years,

and he said, for most of that time, no matter where he went, whoever he was in conversation with,

he was as or more knowledgeable than the person he was talking to, whether they were from industry or wherever. And he said

all that's just changed in the past few years, because he can hardly keep up with what's happening.

No matter how hard he tries, there's just too much going on for him to keep up with. And so this list was sort of our attempt to help

people identify, okay,

here's what we think matters right now.

Here's what's worthy of your attention right now, because there's just so much going on. And so it gets into things

like, as I mentioned, the resistance,

the resistance movement that you're seeing happening, which is,

I think, related to what we were talking about earlier.

It gets into world models, which are, basically, if you think about

language models as being based on our words, world models are based on sort of understanding the

world we live in, and making presumptions and predictions about that. It talks about the way AI is being used in war. It talks about how

we're anticipating agents are going to be used. It talks about things

like how we're training robots, which is similar to world models: when you think about

training a language model, you use all the world's words; with a robot, you've got to train on real-world movements.

It's got to understand its environment, which is why you're seeing some of these,

I would say, grim videos of workers training robots by doing

repetitive tasks. You're starting to see that especially coming out of, like, India and China.

And so, stepping back, it's our attempt to say, okay, we know there's a lot going on.

Here are 10 things to focus on. It's not, I mean, you can always get

who's up and who's down on the benchmarks from somewhere. We're not trying to do that.

We're not trying to pick company winners. We're trying to

basically look at what we think are big

or important ideas and trends. Well, one of the things, I mean, the resistance comes up,

and I know it's one of the 10,

but three or four of the 10 are kind of problems, right? You know,

weaponized deepfakes, the new war room, where you talk about how AI is used

in warfare. And by the way, it relates to what we were talking about before, because if you were to say to

an average group of 25-year-olds at a coffee shop in San Francisco, who is responsible for the deaths of, you know, 100 schoolgirls in Iran,

half would say Trump, half would say AI. And you also have supercharged scams on the list. So you do have an emerging set of things

to watch and worry about. You know, I will say,

I think that's true with all technologies,

that you should be thinking about

how they're going to proliferate. But it's particularly true right now, because we're starting to see this stuff happen.

To your point, it's still not clear who made the targeting decision for that school in Iran,

but we certainly do know that, at this point, one of our reporters has been able to determine

very conclusively that they're using Claude as a sort of conversational interface over Maven. And whether

you say the AI is making the decision or helping make the decision, you're effectively having targeting decisions influenced by AI.

And I think that is rightfully worrisome, because of where it could potentially go. At what point can we intervene? Whose

judgment are we relying on? You know, you can't bomb many thousands of targets

in the time that we have

and not make some mistakes.

And whose responsibility are those mistakes? I think it's worth paying attention to. With the scams, same story.

You know, every day you hear of some new, incredibly clever, detailed scam,

sometimes using someone's image or voice,

which relates back to the deepfake thing. All these things are sort of interconnected. But I feel like it's our job, as

people who report on the industry, to really think about this stuff and get it in front of people. And we don't want to be

Pollyannaish about this stuff in the way that we certainly were, I certainly was, about social media, you know,

when we were all just talking about how it was going to be a democratizing force across the world and do nothing but good things. And then, you know,

it's not just the Arab Spring, it's also the genocide in Myanmar.

We need to get out ahead of this stuff. And, you know, let's take

Sam Altman, and Dario, and Elon,

and Zuckerberg at their word about how transformative this is going to be.

Well, then we really need to get our arms around some of the things that are obviously big apparent problems, or big apparent concerns, things like the weaponized deepfakes and the

social media or AI-powered scams. These are going to affect a lot of people.

It's important to get the industry talking about them. It's important to get people talking about them. It's important for lawmakers

to start thinking about this stuff. I mean, we still don't have any kind of meaningful legislation around social media,

and we've been living with that for a long time now,

so maybe you can argue that if we really needed some, we would have it.

I don't buy that argument, but, you know,

I think our absence of gun control legislation

disproves that point. Yeah, I agree.

It's early days for this stuff, as these people love to say, but it's happening,

and it's here, and we need to focus on the bad stuff. We need to focus on it as much as we

maybe want to think about drug discovery. You know, those are great use cases for AI,

but there are also these really grim ones that we've got to get our arms around. What was the sort of flavor of the conversation outside the sessions at the EmTech meeting?

I will say, for the most part, because most of the people who are there are working in the industry,

those conversations tend to be

pretty deep in terms of

knowledge and expertise. And a lot of times they're conversations about particular things that are happening, world models, for example,

which was something people talked to me about a few times. And, you know, I wouldn't say that they're necessarily reflective of

what I'm thinking of as sort of the lay discussions around all of these issues. I don't think the lay discussions are really happening yet,

to the point they should be. I did talk to a number of people about the AI resistance, and specifically data centers, which are top of mind for everyone working in the industry,

and, to me, are really just

so interesting. You know, I've got

complicated thoughts on data centers.

But the AI resistance, it's interesting to me in that, okay,

you know, I've been covering tech for 30 years, and there are times when you see people say,

I don't really want this, and they vote with their wallets. They don't buy the Oculus headset, right?

They don't do the thing that they don't want to do. But with AI,

there's this perception that it's coming no matter whether or not I buy it, right? You know,

I can avoid using AI, but I can't avoid my insurance company using AI. You know,

I may not have

a paid Claude account,

but my employer is laying me off in the name of productivity, and it's beyond their control.

And I think that's one of the reasons,

or it feels beyond their control. I mean, it's one of the reasons there's so much furor about this stuff.

Data centers. I may have even said this here before on Siliconsciousness, but

I think of data centers, the data center fight,

in a similar way that I think of the protests against the Google buses and the tech buses that were happening in San Francisco a dozen years ago. It's a physical

something that you can stand in front of and stop. You know, you can't stop Google, but you can stop a Google bus.

And it's why I think you see people putting cones on the Waymos and having pushback on all of those things in various places.

Data centers are this physical thing, right? And so I think the AI resistance has coalesced around them.

I don't think that it's necessarily something where the people who are,

say, behind the Quit GPT movement are astroturfing these campaigns, either. It's

people in Georgia, and Nevada, and everywhere in between, failing to understand what this is going to do for them,

other than potentially raise their rates and, you know, make a lot of noise, and coming together to stop it.

And I think that's

what you're seeing with this AI resistance movement,

sort of writ large, which is why it's so interesting to me: the companies have not made the case to

normal people as to what this is going to do for them. Yeah, and I think, you know, in some cases,

they've gone in and they've used old arguments, like, oh, this is a source of economic revitalization, this is great,

and then the people discovered that, you know, it creates one net job. And they could just as easily have gone in and said,

you know, 10% of the revenue we generate here is going to be used to make power cheaper for you, or build your infrastructure.

They're kind of getting the message now, some of them. But you're absolutely right about that. One of the things that I also think is interesting on your site,

which pertains to data centers, is you have a story, three reasons why DeepSeek's new model matters. And one of the reasons, and this is, of course,

the Chinese AI company, but one of the reasons that it matters is, as before, with the launch of DeepSeek, it is coming up with models that use a lot less compute.

In sort of the extreme case, one of their new versions, which is for not-super-heavy applications, uses 7% or 10% of the compute of some of these other traditional models,

and it suggests, you know, the Chinese are going about this in a different way, and that the models that some of the big companies here are saying are inevitable

may not be inevitable, and it's DeepSeek reminding us of this again. And then on top of all of that, this model was done on Chinese chips, and it's once again a message to Nvidia,

saying, AI is not, you know, owned and invented in America,

and it could go in very, dramatically different directions, and I was wondering if you wanted to comment on that. First of all,

I mean,

the efficiency thing is real,

and, you know, the amount of density that they can get in these open source models is,

it's amazing. And, you know, the frontier models seem like they get so far out ahead that no one is ever going to catch up with them, but then these open source models are able to come along

really quickly behind them, and do so by using, as you mentioned, a lot less compute, and less power, and in many cases

not, you know, these Nvidia chips that cost ten, twenty thousand dollars apiece. And so I,

you know, I think that that's, I think that's super interesting,

β€œbut I think it's even more interesting though,”

or not more interesting, it's, both are interesting.

It's that DeepSeek is

like a real example of soft power,

right? Like, this is a, you know, this is something that

can be adopted all over the world, and that is going to be cheaper to use, it's going to be more efficient, and, you know,

maybe I'm not getting, you know, the most advanced frontier capabilities that I might be getting with a,

with a much more expensive subscription to, you know, one of Anthropic's models, but I,

you know, I also don't have to worry about burning through my budget in tokens in, you know, an hour and a half.

And I think that, you know, just as, like, the American entertainment industry

spread, you know, American values and, you know, kind of a,

you know,

β€œthe sense of America as a nation throughout the world,”

I think, you know, Chinese technology, not just DeepSeek, but certainly DeepSeek,

um, Manus,

some of these other things that they're putting out there,

they speak to a, to a different sensibility. They speak to the sensibility of efficiency; they speak to, you know,

sort of, we understand you, you know, we can support you.

Um, and it's, you know, it not only,

um, I think, calls into question the amount of capital that we're putting into, you know,

developing and training these large models, but also how we want to think about our place in the world, and what we're delivering to the world, and delivering internationally.

It's super interesting to me. That's a really, um, great point.

Uh, in 1997, I wrote an article for Foreign Policy magazine, which, incidentally, I later became the editor of, but it was entitled

In Praise of Cultural Imperialism. And of course, it was designed to be a provocative title, but the point was that at the beginning of the information age, everybody thought that, you know,

the information age was branded by the United States of America, and so were the tools and the manuals, and you had to speak English to be part of the information age. You had to use these products, and we really felt we owned it.

And clearly, that's not the case here. And clearly, the Chinese are not going into this accidentally. You talk about in the article,

um, you know, the fact that they're using an open source model to enable people to adapt what they're doing

more broadly. And of course, in terms of applied AI, lots of places are doing well, but China's doing great.

And one of the places we see applied AI brought to life is in, for example, the next generation of cars. Absolutely.

And they're so far ahead. They unveiled a car this week that has unbelievable range, and it looks like a Porsche, it performs better than a Ferrari,

and it's $50,000. Yeah. Yeah. Yeah.

Yeah. And it's, you know, there have been, you know,

β€œdevelopments like that every couple of days, right?”

I mean, the charging of these cars is now down to six minutes for 80% of a range of 1,000 or 1,500 kilometers.

Anyway, the point is well taken. The soft power implications of the choices we're making right now

are really profound. And I'm not sure we're having the debate about that that we ought to have. So I'm glad you brought it up.

Anyway, we've taken a lot of your time here. I am exceedingly grateful, as I always am. I hope to have

you back again soon. And, uh, for now, thank you, Matt.

Thanks, everybody, for listening, and join us again next week. Bye-bye. Thanks for having me, David.

Thanks for listening.

