Prof G Markets

Violent Backlash: What the Sam Altman Attacks Signal for AI

34:29 · 6,394 words

Following the violent attacks on Sam Altman, Bradley Tusk and Brian Merchant join Ed Elson to break down why AI is facing growing resistance. They explore the future of AI regulation, how politicians...

Transcript


Support for the show comes from Virgin Atlantic.

I've been on some bad flights, and I've been on some truly miserable flights. But it's a whole different story

when an airline shows up for you and the crew treats you like a VIP. Virgin Atlantic offers warm, one-on-one service from the moment you step on board. Its Upper Class cabin features four-course meals, fully lay-flat seats, and drinks delivered on demand. Make the journey as exceptional as the destination when you fly Virgin Atlantic. Go to VirginAtlantic.com to learn more. Support for the show comes from Odoo. Running a business is hard enough, so why make it harder

with a dozen different apps that don't talk to each other? Introducing Odoo. It's the only business software you'll ever need. It's an all-in-one, fully integrated platform that makes your

work easier. CRM, accounting, inventory, e-commerce, and more. And the best part? Odoo replaces multiple

expensive platforms for a fraction of the cost. That's why thousands of businesses have made

the switch. So why not you? Try Odoo for free at Odoo.com. That's Odoo.com. Recommendations can be great. Maybe someone recommended this podcast, and here you are. But home projects are a little different. If the podcast isn't your thing, you might lose a few minutes from your day, but if you hire your cousin's neighbor to mount your TV, you might end up with a lopsided screen and wall damage. "I know a guy" isn't a good strategy for your home.

That's why Thumbtack works so well. It matches you with top-rated local pros, with photos, reviews, and credentials, all in one convenient place. For your next home project, try Thumbtack. Hire the right pro today.

Today's number: 12,000. That's how many comments Trump received on an image he posted of

himself depicted as Jesus before it was taken down. According to the president, he meant to be portrayed as a doctor. That was right after he called the Pope weak and terrible. In other news, panic on Wall Street, as traders prepared for the Rapture. Welcome to Prof G Markets. I'm Ed Elson. It is April 15th. Let's check in on yesterday's market vitals. The major indices rose as President Trump signaled he was open to talks with

Iran, which pushed the Nasdaq to its tenth straight gain, while the S&P 500 came close to a record high. Oil prices fell below $100 a barrel. Bank stocks were mixed after earnings: Wells Fargo fell 5%, while Citigroup rose nearly 3%. We'll be breaking down all those bank earnings

on tomorrow's episode. And finally, Amazon shares rose nearly 4% after the company acquired

Starlink's biggest competitor, Globalstar. Okay, what else is happening? AI has a popularity problem, and it is now getting violent. Last week in Indiana, a local councilman's home was shot at 13 times after he voiced support for a data center project in his town. A sign reading, quote, "no data centers," was left at his door. Then Sam Altman, the OpenAI CEO, was targeted twice in the same weekend. A man threw a Molotov cocktail at his home on Friday and threatened to

burn down OpenAI's San Francisco headquarters. Police recovered a document from the suspect warning of humanity's quote, "impending extinction from AI," as well as a list of names and addresses of CEOs and investors of AI companies. The 20-year-old has been charged with attempted murder, and faces a second count of attempted murder for the security guard who was at Altman's house. Separately, two people were arrested for firing shots at Altman's house on Sunday. These attacks

are an extreme manifestation of the rising anti-AI sentiment in the US. Among 31 countries surveyed, Americans reported the lowest level of trust in their own government to regulate AI at just 31%.

And people are now acting on that distrust. In just two years, $64 billion of data center

projects have been blocked or delayed due to local opposition. Here to discuss these disturbing headlines, and AI's general popularity problem in this country, we are having another panel discussion with two experts. We've got Bradley Tusk, founder and CEO of Tusk Ventures, and Brian Merchant, tech journalist and author of the Blood in the Machine Substack. Bradley and Brian, thank you very much for joining me on the show. Bradley, I'll start with you. I mean, this news of Sam Altman,

two attacks in the span of just a few days. It really is just a striking example of this growing

feeling in America that I've talked about: people

just don't like AI at this point. What do you make of this news, and what does it say about this moment?

Yeah, I mean, I think people don't like a lot of things. And to be clear, regardless of what you

think of either AI or Sam Altman, no one should be throwing Molotov cocktails at his home or anyone's home. But I think you've got a combination of things. One, just general distrust and unhappiness in this country, right? Whether it's the fact that we are 23rd in the World Happiness Report, or 62nd for people under the age of 25; whether it's the fact that our government seems to be hijacked by extremists on both sides of the aisle; whether it's the fact that

we haven't regulated Internet 2.0 yet, so even things like social media have never been dealt

with by Washington, let alone AI. And then combine that with the fact that AI is really unpopular. I saw a YouGov poll that showed that, by a 47-to-27 margin, people distrust AI. People think that AI will replace far more jobs than it creates. Almost every survey mechanism out there shows that people are fearful. And then anecdotally, when you just talk to people, they feel the same way. Then, you mentioned in the intro local opposition blocking the

construction of data centers; I think that's often the fault of the hyperscalers, who seem to

think that it would be okay to pass along all their energy costs to regular consumers. And if you are living near a data center, the idea that your electricity bills should go up 30, 40% to subsidize Sam Altman or Jensen Huang, or whoever it is, so that they can become trillionaires, is unacceptable. And in this case, I think it's actually elected officials on both sides of the aisle acting to protect their constituents. And so, yeah, when you have a government that

consistently fails to regulate technology, when you have a government that feels run by the extremes, when you have a society that's generally unhappy, these are unfortunately the kinds of things that come from it. Brian, you've written about this before, and your book is actually about the

Luddite movement, which is sort of the first iteration of a technology coming along, people getting

very worried about it, and revolting, essentially. What do you make of the attacks on Sam Altman? What does it say to you? Well, I mean, what it says to me is that this discontent, these grievances that people have, are real, they are pronounced. And we have to look at them, even if some of these people are obviously on the extreme, whether it's a political spectrum or an ideology. At least one of these shooters was one of these x-risk AI safety advocates

who's really worried that AI is going to rise up, become sentient, and end humanity. And so, if you believe that, then doing all you can may look like a rational response, as abhorrent

as it looks to everybody else. And to step back a second, we do have a long history here, right?

When there is, number one, a disruptive technology, and number two, it is being developed and sort of unleashed by a particular group of interests, right? In the Luddites' time, that was the factory owners who were spearheading the factory system and automation. And they were doing it without community input, without asking workers and communities what they wanted. So we have a dynamic that looks an awful lot like what's happening here today. Back then you had

a few industrialists who had the backing of the state; they had all the resources, they had all the capital, they had all the power. And they were saying, this is the way it's going to be. We're going to automate jobs this way, and you're either going to work in our factory or you're going to get out of the way. And the Luddites actually registered this. This is one of the things that people get wrong about the Luddites today: they weren't dummies, they weren't backward-

looking. They understood quite well what was happening. They were technologists; they used this stuff every day, they used these automating technologies in smaller iterations in their workshops and at home. And so they understood what the industrialists were trying to do, and that's what motivated their response. They didn't want to see their way of life subsumed by the factory system, given over to a relative handful of interests. So it was really about power, it was about democracy,

and it was about losing agency. And so today, a lot of the backlash we see against AI is motivated by these very same fears and concerns, in no small part because the AI CEOs and tech titans themselves have come out and used this language right from the beginning. They've said, oh, this technology is

so powerful. It could be big trouble for humanity. It could be the greatest threat humanity's

ever faced. If we're not careful with it, it's going to eliminate 20 to 30 to 50% of jobs,

you know, depending on how Dario Amodei at Anthropic is feeling. It's going to be this hugely

disruptive event. And that's how they're forecasting, how they're describing their own project,

their own business. And so, again, why would anybody, you know, not take that seriously, right?

We take it seriously at different levels. Some people will attach themselves to the x-risk element and say, well, we don't want to exterminate humanity. And most people will say, hey, I'm out here listening, and you're saying you want to automate all the jobs with AI tools. Why would I be okay with that? Why would I allow a data center in my backyard to help you in that project? So to me, with all of this

backlash, you know, I'm honestly a little surprised it hasn't arrived a little bit sooner, given just how aggressive the industry and its leadership have often been. Yeah. Yeah, this gets to the sort of

the PR and comms point. And broadly, I mean, you've worked in exactly this sector; you've worked in

politics, you've worked in tech, and in how they come together. And there is this interesting question, which is, like, well, all of the big AI CEOs are telling us that this technology is in a lot of ways quite scary, and in some cases bad, like it's going to destroy things, it's going to destroy white-collar work, it's going to completely disrupt the economic model as we know it. And they've done it in a way that is legitimately quite scary. And I guess it does raise the

question of, I mean, why say that? If you're the CEO of a technology company, why would you come out and say this technology is going to be really bad, and it's going to really negatively impact a lot of people's lives? I mean, what do you make of the comms strategy there? Yeah,

I think, keep in mind, from their perspective comms is a couple different things at the same

time. It's the way we're talking about it right now, which is how the public might perceive something, how regulators and lawmakers might perceive it. But it's also fundraising, right? So OpenAI and Anthropic are still both privately held companies with giant valuations, OpenAI at nearly a trillion dollars at this point. And as they raise money, a lot of what you just said, interpreted slightly differently, is very appealing potentially to investors, right? So when you're

talking about, hey, this is going to wipe out lots of jobs, what investors hear is: this will be the tool that businesses are going to use to replace workers, and instead they're going to pay money to OpenAI, to Anthropic, to all of these different companies. And then there's the language that you use potentially to recruit employees. The New Yorker has a great piece this week on Sam Altman, and a lot of the recruiting that he did at OpenAI was around the idea that he was the

responsible person trying to protect humanity from the potential perils of AI. That clearly does not seem to be the case, but he used that language to incentivize people who did care about

this issue genuinely to come work for him. There's language they use with investors. And I think

what they're finding right now, and I think sometimes this is sort of both the naivete and arrogance that you see in the tech world, is a lack of understanding of how their words then land with real people, or with people in politics and government. And a lot of what they're saying is now coming back to haunt them. But the real question to me is: we know that the public is concerned, and we have seen, at least at the local level, elected officials protect consumers from things like

paying for the costs of the energy needs of data centers. But when it comes to the larger issue of catastrophic risk, states like New York and California have done some regulation around frontier models, but some of this really needs to be done at a federal level. And right now we're seeing the opposite from the White House. We saw this White House issue an executive order in December telling states, "You're not allowed to regulate AI," and luckily governors from both

parties have largely ignored that. But there are areas where you're going to see Washington need to step up, and I think whether or not they do so may dictate how this whole thing plays out. Stay tuned for more of this panel right after the break. And if you're enjoying the show, please follow the new Prof G Markets YouTube channel. The link is in the description. This is advertiser content brought to you by Virgin Atlantic. Ed, a couple weeks back,

I got you a birthday gift. Not to pat myself on the back, but it was a pretty good one. It was indeed. You surprised me with Virgin Atlantic Upper Class tickets to London.

So tell us all about it. It was pretty incredible. From the moment I entered that Upper Class

cabin, I have to tell you, I felt like a VIP. Anything I needed: a drink, a snack, assistance with the seat. Flat seats, flat seats, exactly. Had the four-course meal, got my champagne, very delicious, and enjoyed the food. And the journey home?

The journey home was great.

The Heathrow Clubhouse was awesome. Got myself a coffee, headed over to the meditation pod that they

call the Soma Dome; it kind of felt like a sort of spaceship where you relax and think nice thoughts. So I did that for a little bit. Then we went over to The Wing, which has these acoustically sealed booths where you can do some work. You could even record a podcast. I didn't do that, but maybe I should have. It was a very enjoyable experience. So, Ed, the real question here is: what do you plan to get me for my birthday? See the world differently with Virgin Atlantic.

Flying should be more than just transport; it's part of the adventure. Visit VirginAtlantic.com to learn more. Tickets and lounge access provided by Virgin Atlantic.

Support for the show comes from Odoo. Running a business is hard enough. So why make it harder

with a dozen different apps that don't talk to each other? Introducing Odoo. It's the only business software you'll ever need. It's an all-in-one, fully integrated platform that makes your work easier. CRM, accounting, inventory, e-commerce, and more. And the best part? Odoo

replaces multiple expensive platforms for a fraction of the cost. That's why thousands of

businesses have made the switch. So why not you? Try Odoo for free at Odoo.com. That's Odoo.com. You hear a lot of talk about AI replacing humans. Curiosity invites a better question: how will humans shape AI? That's something SAS has been working on for decades.

They're celebrating 50 years in data and AI, and long before responsible AI was trendy,

they were building systems around transparency, governance, and trust. If you're curious about what responsible AI actually looks like, visit SAS.com to learn more. That's SAS.com. We're back with Prof G Markets. It's a very difficult time in a lot of ways to be an AI executive because, you know, on the one hand, as you say, there is an economic incentive, or maybe a fundraising incentive would be the right way to put it, to say that this stuff is

going to be very damaging, and it's going to just structurally, completely upend the entire economy as we know it. But at the same time, I also wonder if they actually believe that. And that seems to be something that you also have to kind of reckon with, especially in the context of a government which seems pretty unwilling in general to promote any form of policy, any form of regulation. And if you're building in the AI space in that environment, and you seem to recognize

this administration doesn't really want to do anything in terms of regulation, then maybe

you do feel you need to sound the alarm and say, hey, this is actually a

big deal, this is actually going to be a problem. And then on our end, it becomes very difficult to understand what's true, what's marketing, and what's hype. So I guess, I mean, Brian, just turning to you: which parts of the story do you think are real? I mean, when Sam Altman goes out and says, yes, this is going to be massively destructive in a lot of ways, and when Dario Amodei says that, I mean, to what extent should we take that seriously versus write it off as, you know,

marketing? Yeah, I mean, I think you're absolutely right that both of those tendencies are kind of bound up in this same trajectory. And part of this is necessity, right? Like, the tech landscape is such that if somebody wants to, you know, release a product that can compete with one of the giants, like Meta or Amazon or Google, then you need just a truly immense amount of

capital, if you want to compete rather than angle to get bought up or something. So you need a story

that can command the kind of capital that can compete with one of, you know, the three or four tech oligopolies that are out there, right? The tech monopolies that have sort of, over the last 20 years, concentrated their power. And so that story then becomes not just, hey, here's a cool product. That's not going to get you there. You need a story that is on the magnitude of: we are creating the software that can automate every meaningful job. And, you know, that

language is right there in OpenAI's charter still to this day. You can look at that as intrinsic to the pitch to investors. And so I think there are a number of different factors there. I think if you look at the last 10 years of the history of sort of this latest AI boom, then you really see

it beginning in earnest around, at least, expressed fears about x-risk,

as sort of presented by Nick Bostrom and others, that AI could become superintelligent, become

this danger. But I think one of Sam Altman's key intuitions was that, you know, early on, when he was

just, quote unquote, heading up Y Combinator, he sensed that there was a lot of energy here in this space that he could tap into one way or the other. And so he reached out to Elon Musk and kind of mimicked this language, and was able to sort of use that concern as a lightning rod to get some interest and power and momentum into AI in general. And then from there, it's hard to walk away from that narrative. You see that the more you talk

about it, the more it does affect investors. It does sort of compel people to pay attention. It does get headlines. And so I think it does sort of balloon on and on. So some of these guys, like Dario Amodei, I am sure he's legitimately concerned about all of this stuff. Is his marketing department aware that he can win a round of headlines by expressing that concern around a model release? Of course they are. So they present every sort of white paper,

every released or unreleased model, you know, with the same sort of level of gravity, as though it were a new set of promotional materials. And so it becomes difficult to distinguish between the two. But I would say it is yes, and both. And now we're in this pickle where the AI industry can't really walk away from the promise that has attracted so much investment in the first place. They can't say, "You know what? We're not going to automate all the jobs." And SoftBank might say, "Well,

then what was that $30 billion for?" Right? You know? So we really are sort of up

on the brink and the precipice here. And I think Bradley was absolutely right. You know, it's not just

the politicians. It's also the AI industry. You know, Meta and OpenAI and all these guys are bankrolling PACs right now, to the tune of $100 million, to sort of influence elections. They supported the moratorium to ban state-level AI lawmaking. So the very least they could do, if, you know, they want to de-escalate the rhetoric, as Sam Altman says, is, you know, to stop interfering in the democratic process, right? To let voters feel empowered, to feel some sway over this technology

that is, you know, being integrated into every part of society. Yeah, it's a great point. I mean, if there's one thing that's going to make you dislike AI even more, it's to read a headline that Marc Andreessen is bankrolling millions of dollars into these pro-AI super PACs that we are continually starting to read more and more about. And Bradley, what is the right policy response here? I mean, what we've kind of identified is that we don't seem to have much regulation at all.

Americans are very scared. They're getting increasingly angry about it, to the point where we are seeing literal violence against these tech CEOs. Like, what are we supposed to do about this from a policy perspective? Yeah, I mean, I think you almost have to think about it as a taxonomy

of how to regulate AI, because, you know, I've been working in and around politics for over 30

years, and there's never been anything quite like this. So there are, in my mind, kind of four different categories. The first is consumer protection. And that typically tends to be the province of state and local government. So that's things like regulating chatbots, especially around things like mental health; regulating data centers and the negative externalities that they can impose on others; regulating the use of AI in hiring decisions; things like that. The second would be catastrophic

harm. Like we said, California and New York have tried to pass regulations around frontier models, but that's two of 50 states. And this is the kind of thing that really should be done by the US government. The EU has a framework that covers, you know, 27 countries. We have two states. So that's number two. Number three would be jobs. And I don't think there is any plan whatsoever for how to deal with the fact that we could be seeing 10, 20% unemployment at some point because of AI. And look,

I do believe that at some point in 20 years whenever it is, all kinds of new industries that we can't conceive of today will be created that will have a lot of jobs, thanks to AI. But a lot of

people are going to fall through the cracks. Look, that's why I think Andrew Yang was right way back,

you know, a decade ago, when he proposed universal basic income, because I think that we are going to be in a world that needs it. And I will say, I just saw a white paper today: Daniel Schreiber, who's the founder and CEO of Lemonade, which is an insurance tech company, funded a study in Israel

that had the idea of basically creating a new type of tax: as corporate profits increase

because companies have reduced headcounts, you tax that increase, sort of as a VAT, and then redistribute it to people. Whatever he calls it, it is effectively a form of universal basic income. So there are ideas out there, but you have to think about them. And right now politicians

just say "job training," but, like, we cannot all become plumbers; that's not going to work.

And then the fourth would be, you know, where AI can do good. So if you think back to

DOGE, it was a total disaster, but where DOGE could have been really great is: how do we bring AI into government to do things like procurement, compliance, licensing, permitting, data management, facilities management? There are a lot of ways that we could make our government a lot more

efficient and a lot more cost-effective. And so the challenge is you have to be able to think about

all of these different categories at the same time. And that really requires thoughtful leadership, because we live in a world where I believe every policy outcome is driven by, you know, a political input. Politicians are thinking about their next election; they're thinking, really, about their next

primary, basically. And they're not thinking about all of the different complications that we just

outlined. And so, you know, this is a time where we really need truly transformational leadership at all levels of government, and by and large, we don't really have it. One final piece: at least a small measure that I'm trying to take is to use AI in a way that cuts against some of that institutional power. At my foundation, we're coding a tool called How to Create Societal Change, which will be an agent where you can put in, okay, I want to ban cell phones

in my kid's school, I want a stop sign on my corner, whatever it might be. And then the agent,

trained on basically, you know, decades of all of our work here, will say to you: okay, great. Here are the current laws that govern cell phone use at your kid's school. Here's who's in charge

of it. Here's what we need to say. And then here's a full campaign plan for how you as an individual

could go about changing it. And it will be totally free. So it's a very small act of defiance, I get that, but we are in the process of coding it right now, and my hope is to release it in the fall. I mean, just to follow up on what policymakers should be doing: how should you be positioning yourself as a politician? I mean, we've seen that Bernie and AOC have been like, stop the data centers, period. Right. And they've said, I mean, people

say they want to end AI outright. That's not quite true. Basically, the idea is: until we have a policy framework, press pause, no more. And I guess the question becomes, like, what is going

to be the popular thing to do? Should you be super against AI? Should you be pro-AI, pro-innovation?

I mean, that seems to be, like, the big question. I guess I'll follow up and ask that question to you, Bradley, as somebody who's worked in exactly this space. What would you be doing? Yeah. I mean, it sort of depends on what you're running for. So if you are a member of Congress, let's say, and your district is gerrymandered, which is true for all but about 25 of them in the House, and turnout in your primary is going to be 10%, 12%, something like that, odds are being radical,

like an AOC or a Bernie might be, or on the far right too, and just opposing AI in all forms, probably is the right political play. Now, if you're running for Senate or governor or president, where there's a larger electorate or potentially a competitive general election, then you can't quite be, you know, so extreme, and you need more nuance. I actually do think, and this might be very naive, and maybe I'm just falsely hoping for this, but I could see a world in 2027 where a

Democratic House, a Republican White House, and probably a Republican Senate, but we'll see, actually do manage to get together and come up with a comprehensive bipartisan deal around AI. Not necessarily because they even care about the problems that the three of us do and that we're talking about here, but simply because, if they fear that 2028 is going to be the AI election and it looks like they haven't done anything about it, none of them want to have to go stand before the voters and say, oh, well,

I couldn't do anything, don't blame me. And so I do have this hope that, simply because there's so much attention focused on it and so much anxiety around it, this might be the one place where everyone actually could get together and come up with some thoughtful ideas. Yeah, it seems to be one of the few issues on which both sides kind of agree, in a general dislike of it, or at least anxiety towards it. I mean, if you look at something narrow like

the dozen or so states that have passed chatbot restrictions and regulations, those are totally bipartisan, both in terms of who's voting for the bills themselves and in the types of states doing it. Yeah, Brian, I mean, just going back to the Luddites, and just for context for me: this is what happened when the factory was introduced, and then you had all these textile workers in England who revolted; they smashed up the machines, etc. I mean, in a sense,

I wonder if this is just what happens, like, when a new technology arrives: you have violence,

you have disruption, you have chaos. But also maybe not, and maybe there's a way we're

supposed to prevent this. I mean, what lessons can we learn from that period of history,

and how should we take it moving forward? Yeah, no, it's absolutely

not a given that we'll see violence and mass disruption at this scale. There are a couple things that tend to signify that you will see it, right? When you have an immense concentration of capital and power, and the development and deployment decisions around a technology are flowing expressly from that and being imposed anti-democratically on a population, you're much more likely to see sort of angry uprising and rebellion. And again, it's another way that this moment sort of

maps relatively, and worryingly, neatly onto the Luddites and the dawn of the Industrial Revolution.

Because at that time, you had this moment where automated machinery was beginning to be produced

en masse, and factory owners, or would-be factory owners, realized that they could amass a bunch of these machines, put them in those early factories, and divide and automate labor in a way that could break

the power of sort of the workers and the guilds. They weren't actual guilds, but the industries

and the cottage industries that had developed and had shared interests. And so when you have all of that sort of power and decision-making capacity and money sort of concentrated in a few hands, it is a recipe for disaster. Because, I mean, the cloth workers, they went to Parliament for years and years, a full decade running up to the actual Luddite rebellion, saying: look, the new factory owners, they're using these machines in ways that violate the laws

on the books; they're hiring workers that haven't been apprenticed, that shouldn't be allowed to work; all these things that we have to regulate the trade, they're ignoring: all the laws, all the standards, all the norms. And then they're just pushing down our wages and pushing down our quality of life; they're destroying our livelihoods, and they won't stop. And so here's a list of things that you could do to fix that. Funny enough, one of the things that they proposed

was very much like an Andrew Yang-style VAT, where it was like: why don't you tax the extra amount of cloth that a machine can produce, and then use that to fund, like, a general fund for workers who need to retrain? But they were laughed out of Parliament, right? Time and time again, Parliament not only said, no, we're not going to listen to you; they tore up those laws and regulations on the books and basically left it completely up to the whims of the

market and these very powerful actors. And so when you have a situation like that, which increasingly

mirrors what's happening today, with an industry that has a ton of power, that, you know, at least right now, in its alliance with the Trump administration, has sort of the ear of, you know, David Sacks and sort of the insiders in the administration, and they're working very closely together to do what they're going to do regardless of sort of popular will. And you have all these efforts to overturn local laws and things like that. Then, yeah, it does start to be this period where

people look at that and say, well, what can I do? Right? What can I do? What are the options for me on the table? I voted. I told my council member, don't vote for this. A hundred people showed up at this event and said, please don't vote for this, and they did it anyways, because the industry convinced them or they thought it was the right thing to do. But suddenly it looks like I don't have a say. I don't have any power. I don't get a vote in how the AI future is going to unfold.

And if I'm in Gen Z, where the negative sentiment towards AI is overwhelming (the NBC poll that just came out was like 44 points underwater for people aged 18 to 34), they hate it, because they're looking at the headlines saying this is the worst job market for entry-level

jobs in 37 years, AI is taking all the jobs. So yeah, what are you going to do?

Are you just going to kind of sit down and say, well, I guess I don't get a job, I guess the data center is going to, you know, get put up in my backyard? So in this sense, I feel like the industry, politicians, everybody should be paying close attention to those very genuine and very rational feelings of grievance over what's happening, and, you know, what's happening to their futures, too. Right. If this isn't the wake-up call that people need, then I really don't

know what is. Bradley Tusk, Brian Merchant, I could talk about this for hours, but we need to wrap it up here. I appreciate both of you; I appreciate your time. Thank you so much for joining us. Yeah, thanks for having me. Yeah, thanks for having me. Okay, that's it for today. We appreciate you joining us for another Prof G Markets panel. If you have a guest you think we should speak to on this topic or any other, please drop us a line in the comments or email our

producer, Claire, at markets@profgmedia.com.

This episode was produced by Claire Miller and Alison Weiss,

edited by Joel Passen, and engineered by Benjamin Spencer. Our video editor is Brad Williams;

our research team is Dan Shallon, Isabella Kinsel, Kristen O'Donoghue, and Mia Silverio.

Our social producer is Jake McPherson. Thank you for listening to Prof G Markets

from Prof G Media. If you like it, give us a follow. I'm Ed Elson; I will see you tomorrow.
