Should the US start bringing troops home from Europe?
I worry a lot that this is where we were headed.
Trump sort of signaled it a lot during his first term.
And in my conversations with European friends, I've been telling them, be ready for this because I do not think we get through four years without the United States reducing its European footprint. I'm Jake Sullivan, and I'm John Finer,
and we're the hosts of The Long Game, a weekly national security podcast. This week, we debate whether the US should draw down its troop presence in Europe, and we break down the latest developments in the Iran war. The episode is out now. Search for and follow The Long Game wherever you get your podcasts.
Does anyone really know what goes on behind closed doors at the Supreme Court? Four years ago, I got a tip about the court, and I was not in the market to cover it whatsoever. But this tip was about a secret influence campaign that had been carried out inside the court
As you know, the very idea of that is outrageous. I'm Preet Bharara, and this week,
New York Times investigative journalist Jodi Kantor
joins me to discuss her exposé on the court's shadow docket. The episode is out now. Search for and follow Stay Tuned with Preet wherever you get your podcasts.
Today's number? 100,000.
That's how many dollars it reportedly costs to get one ticket to the Met Gala. Meanwhile, the price of a table was $350,000, or, as attendees call it, half a facelift. Welcome to ProfG Markets. I'm Ed Elson.
It is May 12th. Let's check in on yesterday's market vitals. The major indices all climbed, led by a rally in chip stocks; the S&P and the Nasdaq both hit new records.
Those gains came despite President Trump's rejection of Iran's proposal to end the war. He also said the ceasefire was on, quote, "massive life support." Brent crude climbed higher as hopes for peace faltered, and the yield on ten-year Treasuries rose. Okay, what else is happening? Since the 1800s, every generation has been smarter than their parents, except for Gen Z.
That is what neuroscientist Dr. Jared Cooney Horvath told Congress last month. Today, 90% of college students and 84% of high schoolers use AI in class or for their homework.
And according to OpenAI's own data, one of the most common use cases for AI is writing. Meanwhile, a recent study found that AI tool usage among business students
was associated with weaker critical thinking skills.
This data raises an important question: what do we lose when we outsource our work and our thinking to AI? After all, 900 million people use ChatGPT every week. In other words, is AI making all of us dumber?
Now, you might remember that we discussed this question
last week. We've been investigating this question a little bit more. But today, we want to bring in two experts who are thinking about this, who understand these issues. So we're going to do something a little bit different.
We are going to move away from the markets for today and focus on this question. So we're joined by Cal Newport, professor of computer science at Georgetown University and New York Times bestselling author of eight books, including Slow Productivity and Deep Work. And we've also got Derek Thompson, host of the Plain English podcast and author of Abundance. Cal and Derek, thank you so much for joining us.
Welcome to the show. Cal, I'm going to start with you, because you have written about this, and you've talked about this idea of cognitive fitness and the potential reality that it's in decline.
What do you make of what's happening on the ground
in terms of AI usage and what it's actually doing to our brains?
Well, I think AI has the real capacity to make us dumber. It's new enough, and usage of it is still growing that we're not seeing the major effects yet, but I fear that we are going to see it. And the way I conceptualize this world of cognitive fitness
is that social media and highly engaging tools on our phones started this trend. It moved us away from more sustained, concentrated activities through which we strengthen our brain. AI is now taking aim at the other main cognitive activity that makes us stronger, which is writing. One of the major emerging uses of this tool is to alleviate the strain you feel when you look at a blank page and have to fill it. So if AI does, in fact, significantly reduce the amount of writing we do, whether it's super important or just a memo, I do think we're going to see a continued diminishment of our intelligence that began with highly distracting phones about 10 years ago.
We'll get into what we do about this.
But Derek, do you agree with Cal?
Yeah, of course. Of course, he's right. You know, maybe we'll explore some disagreements between me and Cal in a few minutes, but I think he's right on the money.
I mean, like, if you doubt what Cal is saying, and you use AI, pay attention to your own life, pay attention to your own use of time. When you ask artificial intelligence to summarize an article, or to summarize a paper, or to summarize an entire book, do you understand that article, that paper, or that book as well as if you'd read it? Of course not. Okay, now maybe you could argue, all right, but I'm saving time, because now, rather than read that one book, which might have taken me 10 hours, I can summarize 15 books, and that'll take me, sort of, 10 hours to process or something.
Well, even there, you're engaging at such a shallow level with each book that I'm not sure you really understand the degree to which they agree and disagree with each other. But there's also what you're depriving yourself of: the ability to read anything for more than five or ten minutes at a time. And that is a skill that leads over time to the ability to make the sort of deep connections that I think are the basis of all truly insightful thinking.
So I absolutely think that the risk here has, I guess as I'd describe it, at least two layers: one, that you're depriving yourself of the experience of truly understanding something that you think you're trying to understand,
and number two that you fall out of a habit that is necessary to think deeply in the future.
And to extend Cal's first point there, because he was going chronologically: we're looking at things like the Flynn effect, and we're looking at things like test scores over time. Well, if we're depleting the ability of fifth graders and sixth graders to think,
and they continue to use AI in seventh grade and eighth grade and through 12th grade and through college, that's not just one year of losing the practice of doing deep reading and deep thinking. Now we're talking about a decade,
a formative decade that you've chosen to essentially not work on, and I do like this fitness metaphor, the kind of muscles that are so necessary in the long run for understanding something deeply, for being smart about it.
So absolutely, I think Cal's onto something.
It seems like there are two main forces that we're kind of identifying here. One is the screen in general and our increasing addiction to those screens. And then the other is AI and our dependency on AI to solve harder, more nuanced, difficult problems. Cal, just going back to you,
which is the more dominant force, or is that even a relevant question right now? Well, the biggest impact so far has come from a decade of hyper-optimized engagement on a portable device that we have with us at all times. That has had a massive impact. Essentially what happened is the machine learning algorithms behind, especially, short-form video platforms built an approximation of the short-term reward centers in our brain, so that they could give exactly the signal that's going to resonate strongly with those particular circuits.
This makes the phones essentially irresistible. When it is with me, I have to take it out. I have to look at it. So that, over the last decade or so, has done substantial damage to multiple generations' ability to not just sustain attention, but, again, to build those circuits you can use to think deeply when the time is required. These circuits are built through the activities of reading and writing. These are privileged activities in the history of modern, post-Paleolithic humanity.
AI is new on the scene, but I really feel like it's going to be a catastrophic cousin to what we were already encountering with hyper-engaging content on a screen. Because that really focused on reading: we no longer sit and concentrate on a book in a way that could build deep understanding.
Writing was its partner. Writing is the pair to reading. Writing is where we take the circuits we etched with the reading and then we apply them in reverse to create original thoughts of our own. We have to practice that muscle as well.
And now for the first time,
we can begin to substantially outsource that activity.
So I really think about reading and writing as foundational activities. This is not nostalgia. This is not talking about the horse and buggy in an era of automobiles. I really do think those are the activities on which the post-Paleolithic modern human brain was built: the brain that gave us politics, that gave us philosophy, that gave us theology. The brain that everything we hold dear was built around is substantially dependent on reading and writing to shape it. So I'm really worried about what we've already lost with reading,
and then we have a new tool that's going to start to take writing off the table as well. So, on the argument you're making: if someone were to push back,
the argument would be, well, every technology in history has made our life easier in some capacity.
Like, you invent the engine, you invent the car,
it makes it easy to get around. And the argument would be,
this makes it easier to do the job of critical thinking.
In the same way that other technologies do make other jobs easier. Cal, it sounds like what you're saying is that this is different.
That critical thinking is on a different level. It's so endemic to what it means to be a human, to the point where this is actually a bad thing, unlike other technologies. Would that be the right characterization? I think that's right. If we use a fitness analogy, reading was a great technology. It makes us better at critical thinking. Writing was a great technology to make us better at critical thinking. But to use something like AI is like bringing a forklift into the gym and being like, you know, we've been in here for years. We've been using weightlifting to try to get stronger. Well, I figured it out: with a forklift, it will be a lot easier. I don't have to lift the weight myself.
You're actually being counterproductive to the actual goal, which is strengthening the cognitive muscle.
So no, I do think this is not a technology that's making us better at critical thinking.
It's allowing us to sidestep the hard activities that previously made our brain stronger. The benefit being sold by this product is convenience in the moment, not a stronger brain or a stronger ability to think. Stay tuned for more of this panel right after the break. And by the way, we are heading out on tour at the end of the month.
So for more info, and to get tickets to a show near you, head to profgmedia.com/tour. Support for the show comes from Hostinger. The biggest barrier to entry for most entrepreneurs isn't a lack of capital. It's the friction of starting.
You can spend months in the strategizing phase, which is precious time that could instead be spent actually making moves. But these days, the rules have changed. AI is redefining who gets to build a business. So, when you're building the next big thing, go live in minutes, not weeks, with Hostinger.
Hostinger is an all-in-one platform that brings everything into one place: your domain, website, email marketing, AI tools, and AI agents. So you can launch online without stitching together five different subscriptions. Start with a prompt and add your personal touch. You can create websites, online stores, and custom apps without coding or design skills.
Then, use AI agents to automate tedious tasks and grow your business.
Hostinger powers over 10 million websites, and there's a reason it earned a CNET editor's choice award.
Turn your one day into day one. Go to hostinger.com/profg to bring your ideas online for under $3 a month. Plus, you can get an extra 20% off with promo code PROFG. That's less than the price of a cup of coffee for a month. That's hostinger.com/profg, promo code PROFG, for an extra 20% off. This week on Networth and Chill, I'm joined by Tank Sinatra, the meme king, with over 15 million followers across Tanks Good News, Influencers in the Wild, and his personal account. Tank is breaking down what the meme economy really is, how much a single sponsored post pays,
why major brands are throwing serious money at jokes, and how meme culture, think starter packs and a perfectly timed screenshot, is actually reshaping how we think about money and value. Get ready for a conversation that'll change the way you scroll, make you rethink what going viral is really worth,
and prove that sometimes the most serious money moves are wrapped in the silliest of jokes. Listen wherever you get your podcasts, or watch on youtube.com/yourrichbf. We're back with ProfG Markets. So, if we all agree that this technology is making us dumber,
and it seems that it is, I mean, I'm not sure who disagrees with that at this point. I think it's pretty clear to us. I mean, Derek, let's, like, model this out, game-theory it out. Where does this go in terms of the economy? I mean, if we are dependent on AI, but none of us can really come up with original ideas, and we can't think critically about issues, do you think that steers the trajectory of our economy in a different direction? Let me try to take this question at a really high level of abstraction,
and then I'm going to zoom in on some specifics. I think that technology is its use: how effective AI is depends exclusively on how we use it. If you look at how artificial intelligence was recently employed by the Mayo Clinic in radiology, to see pancreatic cancer, on average, 2.4 years before a doctor could see it in a scan, you cannot possibly argue that that is AI making people dumber.
Yeah.
at seeing pancreatic cancer.
The use of artificial intelligence there is to supplement the human radiologist's eye to see pancreatic cancer. That is obviously good, so I don't want my opinion here to be misrepresented as being, like, "Oh, all AI is bad." But that's not the way that artificial intelligence is being used in high schools and colleges. It's being used to cheat, and to cheat at a scale that is keeping students from learning how to learn. So I am very optimistic about how this technology is being employed
in some industries, while at the same time I think Cal is absolutely right that, if you look at the use of artificial intelligence in high school and college, I see practically no reason to be optimistic about that generation's ability to learn, to think deeply, to write by the time they graduate. So, technology is its use. There are some wonderful use cases of artificial intelligence. But in the education system today, I think it is basically a tool for mass cheating that is, in fact, cheating students out of the ability to think in the long run. Yeah, you bring up an important point here,
which is we should probably distinguish who is getting dumber
because of AI, and the reality is we're mainly talking about children here.
We're talking about people who are in school or even high schoolers who are using AI to do their homework, to cheat. And we're seeing, as you mentioned earlier, that math scores are going down, science scores are going down, all of these standardized testing scores are going down,
even literacy rates are going down. So I mean, it sounds like maybe the point on which we would all agree is that AI has fundamentally transformed what it means to go to school. And that is the point that perhaps needs further and deeper exploration, deeper discussion, and perhaps some regulation,
Derek, if this has meant that everyone cheats now, what do we do? Yeah, if I were going to write, like, a magazine piece about this, I think the way that I would frame it, and I really like Cal's framing, so I'm borrowing this from him,
is that for the last 10 to 20 years, we've been running this experiment of distraction in our schools. Like, we have very clear correlative, and I think causal, evidence that suggests that phones are an enormous distraction that's responsible for the global, not just US, but global, decline in math scores, in literacy scores, and in other measures of one's capacity to maintain attention. Now, on top of this weapon of mass distraction, you add artificial intelligence, which is this extraordinary tool for synthesizing information, which allows students to cheat at an extraordinary scale that we know is happening in colleges and high schools.
If you want to fix that, if you want to fix this weapon of mass distraction followed by this weapon of mass cheating, you have to solve it directly. Take the phones out of the classrooms, put them in pouches, run that experiment, certainly, to see if it works. And then, when it comes to testing knowledge, you just have to move out of the modes of testing knowledge that can be cheated toward modes of testing knowledge that can't be cheated. And the mode that can't be cheated is something a little bit more like the Oxford model, where most of the grade
is dependent on in-class oral exams. You have this system, or culture, where, you know, you take the history class, you learn about the Habsburg Empire, and rather than write an essay about the Habsburg Empire, which is much more likely to be cheated on or written for you, you get up in front of the class and talk about the Habsburg Empire and talk about the Holy Roman Empire, and people ask you questions, and you defend and prove your intelligence to the classroom, to the teacher.
So it's a little bit like, my wife just finished her PhD in clinical psychology. At the end of a PhD, what's the verb that we use to describe the final step? You defend your dissertation. You get up in front of a group of experts
and you don't just give them the paper and say, read it and then give them my degree. You defend it. They ask you questions. They say, what about this methodology?
What about figure number one? And you say, oh, well, here's why I chose the methodology, and here's why I think figure one looks the way it does.
You prove in real time that you are the author of that paper, that you understand the work that you did. And I just think that more of education, if you really want to get around the cheating epidemic, probably has to shift toward this Oxford model or this dissertation model,
because it's much harder to cheat in an oral exam. It's a really interesting point. Cal, do you agree?
No, that has to be right.
I mean, this is what's happening in academia right now.
It's a combination of the Oxford model and what I've long been advocating for, which is the explicit discussion and promotion of the ability to aim your mind's eye towards complicated topics as the goal of school.
And it's something that we should be talking about starting in grammar school and moving all the way through the university system. We are here not just to get content and reproduce content on tests, but to teach our minds to be comfortable thinking. And that's a frame through which to see almost every activity we do.
I would also throw into this.
I think specificity is a really important point we made earlier. So I'm just going to throw in a sort of specificity constraint here. What are we really talking about? "AI" is the wrong term. That's way too broad.
That includes things like the Cleveland Clinic or Mayo Clinic model
that Derek was talking about. That model, for example, has nothing to do with a large language model like the type you would see produced by the frontier AI companies. This is a prediction model that's custom-trained on labeled datasets of radiology scans. We've been doing this since the '90s, and it's been making slow and steady progress. These sorts of AI models, which are very utilitarian and useful, aren't new and aren't currently experiencing a massive exponential takeoff in capabilities. But often the frontier AI companies will launder the results from these non-LLM models and sort of mix them in with what they're doing. But what we're really talking about here is large language model based tools.
And in particular, using those for the production of written text
or in some sense to sort of aid thinking.
And that's exactly where we get to all the problems in the academic setting that we've been talking about. How big of a problem do you consider this, in terms of, like, a national economic scale? I mean, there's one side of this, which is, like, you know, we want to protect our kids. It's important that our kids have fulfilling and interesting school experiences, that they get a good education, et cetera, which I'm sure we'll all agree on. But then there's also another side to it, which is, like,
we kind of need children to have functioning brains for when they eventually lead the nation. And there might even be, like, a China-versus-USA argument here. Like, if students over on the other side of the planet are being trained properly, their AI chatbots are being regulated properly, and they know how to use their brains, doesn't that mean that, sort of, 50 years down the line, they're going to beat us and outperform us on every metric? I mean, is that an argument that you see as relevant or important, Cal? Is that something that comes up in your conversations when you discuss this topic? Well, I have a relatively radical view on this. Derek is the economics expert here, so I'll be interested in his take on it. But I argue we have already seen the economic impact
of this reduced cognitive fitness. This has already been a major storyline of the last 10 to 20 years. I mean, given the technological advancements we've had and the intersection of the digital and the office, we should be seeing exploding total factor productivity, especially in non-industrial sectors like the knowledge sector. There have been a lot of different forces at play on that. We had the economic crisis and other things going on. But total factor productivity in non-industrial sectors has been more flat or uneven than you would expect.
And I would argue, this is in part a result already of massively increasing the distractions and context switching that happens in our lives and in the workplace.
We're in a world now, and I think one of the most telling statistics of the current office is that the average worker now checks an email inbox or chat channel once every three minutes. That is a disastrous cognitive context in which to use your brain to add value to information, which is the core activity of knowledge work.
So I already think we're seeing a flat line, this is sometimes called the productivity paradox of the 2000s, because of this impact on cognitive fitness. So yes, if we go farther down this road and use LLM-produced writing to take that important strengthening activity off the plate in our educational system, it's not just about kids' brains and some abstract notion of smartness equals good. I think the economic impact that we may already
have been feeling for 10 or 20 years is just going to get way worse. And it is something we do have to really care about from a national perspective. Derek, what are your views on these economic impacts? Yeah, you know, as I was listening to you and Cal talk, these two different statistics sort of popped into my head, and I think they juxtapose interestingly.
One is that there are a lot of indications that Gen Z is the most materialist generation that we've ever seen in American history. If you ask various groups, bucketed by generational cohort, how much money they consider success in America, you tend to have about $150,000 be the norm in most generations, until you get to Gen Z, and they say it's $400,000 to $500,000.
The Institute for Family Studies recently looked at a Monitoring the Future survey that asked various questions about materialism among young boys and girls in high school. And that line of materialism is just going up and up,
and I think, for the first time in the last 30 years, young women are now higher on a certain measure of materialism. And so, on the one hand, you have this extraordinary desire among young people to be successful. They open their phones, they look at influencers, they see rich, successful, beautiful people living their rich, successful, beautiful lives. And so that's one train track that's coming along here. But there's this other parallel train track. And that is students cheating constantly in high school and in college.
In the short run, if you cheat on every test, you're cheating the test. In the long run, if you're cheating on every test, you are cheating yourself. Yeah.
You are removing from yourself the ability to lift the weight.
And if you want to be rich and if you want to be successful, I myself certainly know of absolutely no individual who is rich and successful who doesn't work unbelievably hard, who isn't very good at what I think of as cognitive time under tension. That is, to extend the fitness metaphor, this idea that if you do, sort of, one rep of 150 pounds
on a bench press and it takes one second,
that's a certain amount of resistance. But if you make that a five- or even ten-second up and down, it's much more tension on the muscles. That's time under tension. I think thinking has a similar principle: really
great ideas benefit from the ability to sit with those ideas for a long period of time, to figure them out, to find the simplicity that, I think it was Oliver Wendell Holmes who said, is on the other side of complexity. You learn about something, and then you're able
through your learning about it to make it simple and make it effective. If you are cheating yourself out of all these tests, you're cheating yourself out of the ability to become rich and successful. And so one thing that I'm afraid of is not just that these people who are cheating are going to lose out to the Chinese, or, whatever, the Finnish or the Danes, maybe they are, maybe they aren't. They're going to lose out to people who can think, who are doing the work, who can sit with ideas, who do have and are building cognitive time under tension.
And so I just think that a world in which you have a generation of people with extraordinary expectations of material success, but underdeveloped abilities to actually achieve that success, that just seems like you're setting up a generation for unbelievable disappointment and anxiety and depression.
So this goes, I think, not just to the concept of national greatness, US versus China, although maybe it touches on that. It goes to, like, what do we want from our life? Like, for people who want to be rich and successful, what should they want from their life? They should want the ability to sit, the ability to sit with discomfort, to work hard, to enjoy complicated problems, to love thinking through them, because that's where your money is made. If you lose that, you really lose out on the ability to achieve, like, what is the new American dream.
I guess the reason that I'm so interested in the economics angle is because I feel like the argument against what we're saying is that it's sort of this Luddite argument, that you're anti-technology, anti-progress. And I think the thing that really resonates for me,
to your point, Derek, is, if you have a generation of people who have been trained since their infancy to take shortcuts, to not sit with ideas, to not work hard, to just scroll and scroll, and to kind of live this fleeting, imaginary version of success.
And if you never actually build the tools or the abilities or actually go out there and achieve it, then ultimately we'll have an entire generation and nation of basically lazy, non-thinking losers, who can't really get anything done, who can't really come to a consensus
and make decisions and build things. I just wonder if that is the argument that needs to be made to those who would be pushing against this argument. I mean, there are certainly going to be people out there who would say Cal is just afraid of technology.
Derek thinks AI is bad. They're sort of anti-progress, they're anti-innovation. And I just wonder if they're missing something, they're missing a productivity angle, which is that if you have a generation,
I mean, an entire society of dumb people, then, just economically speaking, GDP is going to go down. I feel like that's the only outcome. Derek, does that resonate, I guess? I don't consider myself a Luddite, and I think I'm probably more positive about large language models as technology than Cal. I want to be very clear about what it is that I think is bad. Yeah. And I think here, Cal and I don't have, like, intersecting Venn diagrams.
I think here it's the same Venn diagram.
What I think is bad is not artificial intelligence.
What I think is bad is using artificial intelligence to do the thinking for you, and then representing your thinking as just the synthetic information that you got from artificial intelligence when you prompted it. That is what cheating is.
Yeah. That is definitely cheating. And my point is that in the short run, when you cheat, you are cheating the task, but in the long run, when you cheat,
you are cheating yourself, because work is one damn task after another. And if you lose the ability to be comfortable with what I'm calling time under tension, cognitive time under tension,
then you're really putting yourself
at an extraordinary disadvantage in what's going to be a very, very competitive labor market. And that's my fear for students today is that they are taking a shortcut that in the long run is going to
atrophy muscles that they're actually going to need in the labor force. As we wrap up here, Cal, what would be your advice to those people? I mean, I don't think that we're going to see real regulation on this stuff. OpenAI even built a tool that detected AI-generated work, and they decided not to release it because they didn't want to hurt usage. I mean, it doesn't seem like anyone else is going to solve this problem for you. So what would be your recommendation to those who don't want to fall behind?
Well, I mean, I think time under tension, that's a good analogy, or metaphor, that Derek is pointing out. That's what you should be thinking about as an individual if you want to be economically viable. Don't listen to the voices that are saying, "Oh, you won't be replaced by AI; you'll be replaced by someone who uses AI better," and instead ask, "Fundamentally, what do I do in my job? Where do I actually create new value in the world?" If I'm pulling in, in a knowledge-work type of employment,
a salary that's non-trivial, it's not because I'm good at answering emails. It's not because I create PowerPoint slides really quickly. There must be some fundamental activity where I'm taking hard-won skills and knowledge
and applying it to information to add new value. The harder I can think, the more I can sustain my focus, the better I am at that core activity that matters. So, and I've been arguing this since my book Deep Work a decade ago, don't lose sight of the fundamental cognitive activity that actually moves the needle on these knowledge-work types of endeavors. If you cannot add original value to information through deep, skilled thought, then what you're doing is eminently replaceable. If you turn yourself into a sort of cybernetic LLM prompter, your unique value to the marketplace is going to plummet. You're putting yourself into a dangerous situation.
So don't mistake busyness for productivity. Don't mistake speed for better.
What matters is: what is the high-value output I produce that I'm uniquely suited to produce? And how do I get better at that activity? There are all sorts of ways technology can help you do it. But you have to be very wary about the ways that technology makes you worse at it,
because it has a way, over the last 20 years, of sneaking in the back door and making you feel more productive, and you look up and you're worse at what you do. So keep first things first.
Cal Newport is a professor of computer science at Georgetown University and New York Times bestselling author of eight books, including Slow Productivity and Deep Work. Derek Thompson is host of the Plain English podcast and author of Abundance. Derek and Cal,
this was fascinating. Thank you so much. Thank you, sir. Thank you. Okay, that is it for today. We appreciate you joining us for another ProfG Markets panel.
If you have a guest that you think we should speak to, please drop us a line in the comments or email our producer Claire at [email protected]. We hope to hear from you. This episode was produced by Claire Miller and Alison Weiss,
edited by Joel Passen, and engineered by Benjamin Spencer. Our video editor is Brad Williams. Our research team is Dan Shallon, Isabella Kinsel, Kristen Donahue, and Mia Silverio, and our social producer is Jake McPherson.
Thank you for listening to ProfG Markets from Prof G Media. If you liked what you heard, give us a follow. I'm Ed Elson. I will see you tomorrow.


