99% Invisible

Constitution Breakdown #9: Alondra Nelson


This is the ninth episode of our ongoing series breaking down the U.S. Constitution. This month, Roman and Elizabeth discuss Articles VI and VII, which include some odds and ends like the Debts Clause,...

Transcript


This is the 99% Invisible breakdown of the Constitution.

I'm Roman Mars. And I'm Elizabeth Joh. Today we're discussing Articles 6 and 7.

Roman, why don't we go through both articles and let's save the most important part for last.

Okay, because there's a lot of unimportant parts? It's a little different. Go for it.

So why don't we start with Article 7? That's the ratification clause. Okay. Maybe the least important to talk about today, but actually, crucially important for the Constitution itself.

Yeah, sure. We needed the states to vote on it, and the ratification clause says that nine states would be enough to ratify, or make the Constitution itself a legitimate document. That, in fact, happened: the Constitution came into effect on June 21st, 1788, when New Hampshire became the ninth of the 13 states to ratify it.

So it stands as its own clause to make sure the document is legit. Got it. That makes sense. Okay. So that's pretty clear.

No real important Supreme Court cases on it, so let's turn back to Article 6. Okay. Article 6 is a mishmash of a bunch of different things. So Article 6, clause 1, talks about debts that the United States is already obligated for and still has to pay.

So, why is that there? Well, you know, when the Constitution was drafted, the country still had debts and engagements

that were left over from the Revolutionary War, and creditors were kind of nervous.

"What if you create a Constitution that wipes out all of the debt?"

That would be pretty convenient. Got it. Got it. So in order to assuage those nervous creditors, our Constitution actually says, "Don't worry, we understand that we have these debts and we are going to pay these debts." So today it's really mostly of historical interest. It doesn't come up, because we did, in fact, pay our debts.

What is interesting is that clause 1, as originally drafted, said that the United States would both be obligated to pay the debts and would have the power to pay the debts. But that second part got taken out of Article 6 and put into Article 1 as part of Congress's spending authority. So that very, very important part today is actually in the larger chunk of the Constitution that we cite all the time, which is why Congress has the ability to pass laws so often. It's because of spending authority. Wow. Well, that's fascinating, actually.

Yeah. So you know, one little switch around. Yeah. That's a huge difference. Okay.

Okay.

So that's the first clause.

Okay. Let's turn to clause 3 of Article 6. Do you want to read it? Okay. The Senators and Representatives before mentioned, and the members of the several state legislatures, and all executive and judicial officers, both of the United States and of the several states, shall be bound by oath or affirmation to support this Constitution; but no religious test shall ever be required as a qualification to any office or public trust under the United States. So this is known as the No Religious Test Clause.

Great. I like this clause. Exactly. It's kind of, you know, no religious test for anybody taking office. And in fact, it's the absence of religious tests that makes us understand that this clause is successful.

Yeah. Yeah. But sort of only formally, right? I mean, there's no formal test. But you can kind of feel it in there: the fact that the representation of other religious faiths is not super common inside of our public institutions.

True. True. But there is a big difference when you are formally required to do it. And in fact, this clause comes from traditions going back to England.

So for instance, in England in the 17th century, all government officials had to take an oath that they would help establish the Church of England, and also disclaim Catholicism and the pope. And so the idea is we have this common tradition; it comes from England.

By the time you have the colonies and the Articles of Confederation, it was pretty common for government officials to be told that they had to take some kind of religious affirmation. Not necessarily, of course, for the Church of England, but some kind of "I believe in God" sort of test.

It's notable that it's absent. And this clause as well has very little in terms of Supreme Court interest or caseload today, and that's for a totally different reason. You'll notice this is about religious freedom, essentially, right?

It shouldn't matter whether you are a practicing Catholic or Muslim or Jew to be able to take a public office. But the reason why this clause doesn't get much attention is that free exercise clause cases today come up under the First Amendment.

First Amendment. Right. So not too much there as well. Yeah. So I noticed in our recap, you had Article 6 clause 1 and Article 6 clause 3.

But we have skipped Article 6 clause 2. So what is that? Well Article 6 clause 2 contains what's called the Supremacy clause.

Why don't you read it?

Okay.

This Constitution, and the laws of the United States which shall be made in pursuance thereof, and all treaties made, or which shall be made, under the authority of the United States, shall be the supreme Law of the Land. And that's referred to as the Supremacy Clause.

So why is the Supremacy Clause so important?

Well, historically, the Supremacy Clause responds to a very particular problem. And that is: before the federal Constitution, the Articles of Confederation, which was the predecessor document, had no similar provision saying that federal law is supreme. And you might wonder, what does that really mean? Well, think of it this way.

If you have state laws on a topic and federal laws on the exact same topic, which one are you supposed to follow? If there's no clear instruction, well, maybe you just follow whichever one you want. And that's kind of what happened before the Constitution: state courts sometimes just didn't think that federal law was binding.

So they didn't apply it. They applied state law. That's kind of a problem, right? So the Supremacy Clause, in one fell swoop, gets rid of that uncertainty or ambiguity.

The Supremacy Clause simply says: look, federal law, whether we mean the Constitution, federal statutes, or federal treaties, is supreme when it comes to any conflicting state law. So the idea here is that you have this very important structural part of the Constitution, that federal law is supreme. So what does that mean, practically speaking?

Well, you can think of the Supremacy Clause as stating the simple fact that federal law is supreme. But arising out of supremacy is the idea that Congress now has the power, when it legislates, to preempt, which really means displace or override, any contrary state or local law. So you can think of preemption as being based in the constitutional power of supremacy.

Congress doesn't have to exercise preemption, but when it does pass laws in this way, it's very clear that any directly conflicting state or local law has to give way. So that's kind of the genius, or the simplicity, of the Supremacy Clause. But that's the most simple part of the Supremacy Clause. And I take it there's lots of constitutional case law based on the Supremacy Clause.

That's right, because things can never be simple, right?

Yeah, yeah. So when you think about federal law, sometimes Congress can simply say: we're going to pass a law, and this law will, in the text of the law itself, displace or preempt any similar state law. That's pretty easy.

And if that were the only issue, we'd never talk about preemption.

But the problem is that Congress very often doesn't say. There may be a federal law on a topic and a state law on the same topic, and the federal law doesn't say anything about preemption. So in response, the Supreme Court has come up with a whole host of cases, doctrines, tests, and ways of thinking about federal preemption to try and answer the question: what happens when there are federal and state laws legislating on the same topic?

So what exactly is supposed to happen when there's a conflict? Well, that also has a complicated answer. It depends on what we're talking about. Sometimes courts will say something like: there are some areas of federal law where the federal interest is so important, so extreme, that we don't want the states to get involved even a little tiny bit, even if Congress hasn't specifically spoken to that area. An interest like this would be foreign policy. We don't want the states getting involved with foreign policy, negotiating their own treaties. That would be a great idea. Exactly.

So those are the easy cases. But in the much more frequent and difficult cases, courts have to answer: well, there's a federal law on a topic and a state law on the topic; is it possible to comply with both the state and the federal law?

If it's possible, maybe there is no preemption; no preemption would mean that state law and federal law are both valid. But if, for instance, the state law is an obstacle to the federal government's law operating, or if it's literally impossible, state law says black and federal law says white, and you can't do both at the same time, then that's a case of federal preemption.

So these are always case-by-case determinations. But preemption is actually really important, because think about all of the different areas in which the federal government regulates: everything from the environment to consumer protection to energy, you name it. The states also often legislate in the same areas, and what you will have are individuals or companies that say: well, I want to comply with one, I don't want to comply with both, or am I supposed to comply with both? And that gives rise to preemption.

So of all of the areas of law that we've talked about with the Constitution, preemption is, in fact, probably the most frequently used constitutional law in practice.

So on the one hand, you can think of constitutional law in the courts as being on a spectrum: maybe we'd put impeachment at one end, which we rarely see in the courts, and preemption all the way at the other; preemption comes up all the time. Because the idea of federal preemption is that it's a possible question anytime the federal government is regulating in a particular area.

Right. Which could be infinite, almost. Almost infinite. That's right.

In every single area of modern life where the states regulate, very often, though not always, of course, the federal government is also regulating. And this situation is exacerbated by the fact that modern life keeps going: there are new laws coming up all the time, because there's new technology all the time, and there are new things to consider all the time. That's right.

So whenever you have a new policy problem, a new change in society, there's a race to regulate it, or at least calls to regulate that new development in modern life. So the question is: are states going to do that job, or should the federal government do that job, or should they both do that job? So the way I want to think about the problem of preemption is for us to pick an emerging area where both the states and the federal government are trying to regulate at the same time.

And I think there's no better topic than artificial intelligence.

Totally. Totally. I mean, that's huge. I don't even know what I think about it. That's right.

So I can't even imagine what the states and the federal government are thinking about at this point. That's right. Artificial intelligence is everywhere. It's at the doctor's, it's at the store, it's at school, it's at work.

It's a huge problem for government. And that's because AI has the potential to produce these really big benefits for society. But we've already seen that it can have all kinds of harmful effects. It can produce all kinds of major risks for society: everyone knows AI sometimes makes up facts that don't exist, which people believe and sometimes act upon, or it can make decisions about people that are really hard for us to explain, and sometimes those decisions are false or misleading.

Yeah. So just like with any other problem in society, the states and the federal government are trying to figure out: how do we regulate AI, or AI systems?

And that means everything from how do you regulate a chatbot that teenagers use, or self-driving taxis, to how do you regulate autonomous weapons when it comes to wartime. And so what level of government should be regulating AI? Should the states get out of the way altogether? Now this seems like a very current topic, and it is, but the larger picture is an old one

and that's a question of federalism. The narrower our view of preemption, the more we're really allowing the states to engage in experimentation, for the states to say: hey, we want to try this approach. And California will always take an approach that probably Texas will not, right?

Right. But a very broad view of preemption really is saying: you know what? We want the states to just get the heck out of the way. We want the federal government to be the primary voice in this area.

So those are choices that courts have to make. There's nothing obvious about going in one direction or another. Yeah. Yeah. Because this is a fast-moving and complex topic, our guest for this episode is Dr.

Alondra Nelson. She's a scholar of technology and social science and a leading expert on artificial intelligence. She currently holds the Harold F. Linder Chair at the Institute for Advanced Study in Princeton. She also served in the Biden administration as the acting director of the White House Office of Science and Technology Policy.

It was in that role that Dr. Nelson spearheaded what's called the Blueprint for an AI Bill of Rights. We invited her to help us navigate why AI is a challenge to regulate, and what to make of the tug of war between the states and the federal government on the topic, especially during the second Trump administration.

But we start with Alondra's definition of what exactly AI is. So I usually use a modified version of the OECD definition, which is a definition that 38 nation-states have agreed upon. And it's basically that these are machine-based systems, like lots of statistics, lots of math, and that they make inferences from different inputs and generate outputs. And so the outputs are things like so-called predictions. They are things like recommendations: your Spotify music recommendations, or your Netflix recommendations.

I like to use those two examples, because people have different feelings about how good or bad they think their Netflix and Spotify recommendations are.

And I think that's kind of a level set for AI, you know, decisions.

So there are machines that are helping with, if we think about the theater of war, decisions about targeting people and locations in the theater of war. And of course, with generative AI, AI tools and systems generate content: text and images and sound. So that's kind of it, you know: inferences made from different sets of inputs, almost all sorts of data, whether those are photographs or numeric data, or, you know, quote-unquote all of the internet that was taken into generative AI, and lots of different outputs. And then you cross-cut that with the fact that AI systems have different levels of autonomy and adaptiveness after they're deployed.

So some can be very static, like a decision-making or predictive algorithm that might be used in the criminal legal system: it takes in data, and it has a sort of hard-wired data set that it's making so-called predictions against. And obviously today, we increasingly are being told about things like OpenClaw and AI agents. And so these are autonomous kinds of AI systems that are making purchasing decisions for people, coding for them, and the like. So that's a broad definition on purpose, because AI is really broad.

And I think we go back and forth between using generative AI as the default for what we mean by AI, but it's this whole suite of things. And if you talk to, you know, a computer scientist or an AI or machine learning engineer, they would say to you that the world of AI is sort of a set of Russian nesting dolls, and generative AI is actually the smallest one, right?

You've got deep learning, you've got machine learning, and all of that. So because generative AI, with things like chatbots, has been made into consumer-facing tools, and that's really how AI came into the public sphere, it's kind of how we think about AI. But there are a lot of other use cases and types, autonomous and more brittle, et cetera, besides.

Yeah, so when you hear this, it's a pretty technical set of definitions and products. But I suppose someone listening to this conversation might say: well, I'm sort of familiar with maybe ChatGPT, that came out in 2022, I used it a couple of times, but I really want to know, why should I care about this?

So what for you are some of the most transformative or really concerning examples of AI that are happening in American society right now? So, the "why should I care": you know, I think people, particularly folks in companies, oversell AI every day, so that's certainly true. So what might be transformational?

Some of the claims, you know, the AI-for-good claims, are true, and I think are either happening or on the horizon. So you can think about, in the medical space, an AI system reading chest X-rays, or being able to flag an early-stage cancer diagnosis, being able to see a tumor in its very early stages.

So that's transformative and indeed, if we get that right, life-saving. It is the case that we still need radiologists, and we don't have enough of them. So transformational, but potentially transformational at the intersection of humans working with the AI, right? Other cases, certainly, are in agriculture: you have farmers, whether in Sub-Saharan Africa or Kansas in the United States, using forms of computer vision on a phone app that can help them identify whether or not a crop is being blighted. We're already using AI to determine traffic flow and try to direct traffic and retime stoplights, so you can cut commutes or redirect traffic.

So, to go back to my definition, these are all systems that take an image or a data pattern or a question, make an inference, and generate an output that hopefully helps to augment what humans are doing, maybe improve what humans are doing, and maybe help humans make better decisions. So, those are, I think, cool things.

I mean, we've just been watching Artemis II. That is full of AI computing systems that helped them track how they were going to do this incredible 10-day journey. Also cool.

Concerning? You know, we're living with a lot of that right now. We've got this kind of great race happening in the world of looking for a job, right? You can now more easily do your resume and your cover letter using AI, but now AI systems are being used to screen your resume out. So people are now sending dozens and dozens of resumes out on a given day, but they're getting screened out right away.

So the downside of this is that it might filter people out of an applicant pool before anybody ever sees your name, or ever actually looks at your credentials, and nobody will tell you why, potentially. There's some research that suggests why: as you talk about input data and making inferences from it, in things like employment a lot of the input data is historical data.

So in fields in which you've had historic racial discrimination or gender discrimination, like if you're looking for the resume of an excellent computer scientist, a lot of algorithms have been shown to kick people out. So people are losing access to opportunities, with real implications for their liberties and their rights.

There are, you know, so-called predictive policing tools, where the algorithm says you should police an area more because it's been policed more historically, not because there's actually new information suggesting that that should be the case.

And then, in the generative AI space: because I live partly in New York City, the Adams administration spent, you know, nearly a million dollars, I think, on this government chatbot, the NYC MyCity chatbot. The idea of it was good. It was supposed to help small businesses navigate all sorts of city regulations, which in a place like New York City are voluminous. But it was telling them to violate the law. It was giving advice like how to skim workers' tips, or how to discriminate against your tenants if you're a landlord. I mean, it was fairly outrageous, and, you know, I think well beyond the kind of whimsical term "hallucination" that we use, which often suggests that it's not a really big deal.

And, you know, we shouldn't be surprised that the Mamdani administration, I think, canceled that contract and got rid of the chatbot. But the concerning aspects, I think, also just give you a sense of all of the places in our lives, all of the sites simultaneously, that are being shaped in some way by some form of algorithmic decision-making or management.

And I guess one of the ways to approach that, right, is to say that these are not just technical problems. Since you're mentioning all of the different ways that individuals might feel powerless or just confused about what's going on, you can kind of use a civil rights approach. And, of course, in the Biden administration, you led the OSTP, and you're credited with directing the White House Blueprint for an AI Bill of Rights, and I would love for you to talk more about that. It's a policy paper, a white paper. So, what was the process? How did you begin creating the blueprint? Who was behind it? Who did you talk to?

Yeah, so, you know, we came into office in the middle of a pandemic, and we came into office as a country having a racial reckoning. We were having an economic crisis. And, you know, I think those of us who work in the science and technology policy space knew, both on the research side and also from kind of seeing it brewing, that all of these societal concerns were coming: what was going to be happening in the algorithmic space? And we already had examples. So, for example, the YouTube videos about the so-called racist soap dispensers and faucets, right? You know, if you have darker skin, you can't get the soap to come out, which is a kind of application of AI.

And, you know, the idea was, in part, I think, borrowed from lots of other examples. I mean, the Obama administration accompanied its Affordable Care Act with something called the Patient's Bill of Rights. I think Ralph Nader had a consumer bill of rights. So the bill of rights has been used variously, both by government and by folks in civil society, as a way to think about a rights expansion in the face of a new technology or a new social dynamic, for example.

So, we got into office, and by October of 2021 we had published an op-ed in Wired. And we used the bill-of-rights framing, and we kind of tried to draw a parallel to the country's founding, noting that there was this time, in the 1780s and '90s, when Americans adopted the original Bill of Rights to guard against the powerful government they had just created, right? We're about to celebrate the 250th anniversary of the Declaration of Independence and then the Constitution. We had created this kind of powerful governing technology, and we needed to place a check on it. So: how do you secure our rights and our liberties and our opportunities in the context of a kind of large and powerful government? We saw a parallel with the powerful technologies, and the powerful companies that were pushing these powerful technologies, and thought that there was a useful analogy, and a way to begin a conversation with the public about what might be equivalent guardrails against these kinds of new powerful domains.

And so, we were trying to frame the Blueprint for an AI Bill of Rights project within a kind of continuous U.S. or American tradition of aspiring to values, recognizing the shortcomings of the systems that we create, and thinking about what we might do to mitigate them.

Can you tell us some of the five principles that are identified in the AI Bill of Rights?

Sure. Yeah. So, the five principles: the white paper that I alluded to was released in October of 2022, so a year later, and what we did over the course of that year was a lot of public engagement. That Wired op-ed ended with an email address that went directly to the White House.

That's always a good plan. Yeah. So, I think we wish more people had taken us up on it, but people certainly did. And we did focus groups. We had what we called office hours. Everybody who worked on the team, which included policy generalists, AI scientists, computer scientists, and folks who work on science and technology policy from academia, with government experience and commercial experience (so it was a pretty broad team), would block time on our calendars just to talk with people. And that included high school students and rabbis, in addition to, always, the technology companies' lobbyists. But we really tried to have a broad conversation. And so the five principles are really distilled from those conversations. We weren't trying to do anything novel. We were trying to take from this year of conversation: what is the best of what we think? What are the aspirations that we should have as we move as a society into a more algorithmically shaped and mediated world?

So, one was that AI systems should be safe and effective. I mean, that's a very basic, almost consumer-rights principle. Second, that people should have protections from algorithmic discrimination. Third, that there should be, you know, some modicum of data privacy; we are still figuring out what that might even look like, but, again, these are aspirations. Fourth, that there should be notice and explanation, so that you have a right to know when an AI system is being used to make consequential decisions, like some of those that I was talking about, Elizabeth, when you asked me what's concerning. Like, do we care if you get a bad Netflix recommendation and you end up watching a movie you don't really like that the algorithm told you you were going to like? No. But when algorithms and more advanced AI systems are being used to make consequential decisions about people's lives, people should know about that, and if they want an explanation, they should be able to get one. And then lastly, the fifth principle is that there should be some sort of human alternative or fallback, so that you should ideally be able to opt out. We build a lot of algorithmic and social media systems as opt-in as opposed to opting people out. So, can you opt out of an automated system? Can you talk to a real person, instead of being dragged down into a circle of phone-tree hell where you keep trying to press zero to get a person? Particularly when it's about something that affects your life: health insurance, jobs, housing. So, these are

really critical things. So, that's what we came up with. And it's been, you know, variously sort of taken

up by different kinds of constituencies. It's become a kind of civic infrastructure, a way, I think, that allows different kinds of communities, particularly non-expert communities, to talk about why AI is important and how they want it to sit, or not sit, in their lives. So, from an ordinary person's perspective, what does it mean to have a safe AI system? Does that mean that it's not going to make mistakes? Or what do you envision as an AI system that would follow this idea of safety?

Yeah, so, you know, my friend Damon, who leads the Lawyers' Committee for Civil Rights, will often say there are more laws around your toaster than around the chatbot that you might have used this morning, which is true. So we just basically don't have any kind of basic consumer protection, certainly at a federal level; there's some action happening at the state level.

So, I think many people are actually shocked when they realize that when an AI company or a tech company ships a new model, or an update of a model, no one has looked at it: there's been no third-party authority that said it's met some threshold or standard of testing, and that we think it should be safe and effective. Now, there are affordances, there are things, particularly about generative AI, and we know increasingly from the research, that mean you're never going to get rid of all of the mistakes, certainly not in a large language model. So safe and effective systems doesn't mean that. But it does mean that one should expect that there is testing on what people think would be the most obvious use cases of these technologies, right? If it's a multi-function or multi-use technology, there are use cases, I think, that we haven't even imagined, and people aren't doing yet. But I think anybody who has studied the history of technology in the United States, even just going back to the '90s, knows there's always going to be a problem with scams and fraud, always, with any kind of new technology. We know historically there's always going to be a problem with forms of pornography and sexual abuse. These things are often the first use cases for new technologies. And so, you know, we have chatbots that are being used to nudify young people in high school, on Grok or whatever. Like, we can't act like

these are harmful use cases that could not have been anticipated. And so it doesn't mean at all that there won't be unanticipated things, or that a chatbot won't hallucinate. But it certainly should mean that a company, before releasing a product, has stepped through even basic historical use cases and actually thought about how they might be mitigated, or has had a conversation with some independent stakeholder, you know, state government, civil society, about how they might be mitigated.

So, you know, with all this, essentially, you're describing this experimentation, right? That's happening. And we'd expect that if that's going on, the government should also be regulating it a lot. And the answer at the federal level has been crickets, mostly. There has been some movement, though. I mean, the blueprint served as a springboard for President

Biden's executive order on AI, so could you say a little bit about what the core of those

β€œconcerns were in the EO? Yeah, so I think the philosophy, both for the AI bill of rights, and”

for the most part for President Biden's executive order on AI, was that just because we have a new technology does not mean that we have to have a new social compact or a new social contract. You don't have to throw out every policy, regulation, and law because we have this new technology,

as powerful as it may be. So, if intentional discrimination or intentional violations of people's

civil rights or liberties are illegal in any other fashion, then if you do that with AI, it's also illegal, right? You might have to figure out the mechanism differently or make the case differently, but the legality of the outcome is the same. One of the things the executive order did was ask the Department of Education to think about this: you've got guidelines for children's privacy and their protection in the use of educational technology. Do those

need to be updated, or do we just need to double down on what we have, as you're introducing different forms of advanced AI potentially to the classroom, right? You know, the President's executive

β€œorder had some directions to things like the Department of Labor, and I think differently from”

what the current administration has been doing. It was not just, what is AI going to do to work? It was, how can government help put in speed bumps or friction, or help direct the direction of travel, so you're not just potentially casting people out of work; you are helping them find other work, you are re-skilling them. Could there be a conversation about tax incentives or other kinds of incentives to keep people in work, or to help people off-ramp or on-ramp

to different work, for example. The executive order, of course, also weighed in on, you know, there was a lot of concern and remains a lot of concern in the national security space. So, you know, should there be export controls, should we be controlling where various forms of technology go, so this is still a very live conversation. Controversially, the executive order proposed that we would use the Defense Production Act from, I think, World War II originally

to require that companies give the government more input and information about

new, more powerful AI systems and tools that had a certain threshold of capability. So, it might have

been, historically, the longest executive order ever. But really, yeah, I think that's right. I think it was a hundred and some pages, a hundred and one or two. You know, as a reformist and a reformer, I don't necessarily think that's a good thing; in some ways it's a bad thing. But in this case, I think it was good in the sense that it tried to be comprehensive, that the philosophy here was that this is a kind of new infrastructure. This is sort of a new operating system for a lot of the

work that we do and how might we think about the ways that government can both help to accelerate

potential good use cases and mitigate potential harms, using

the levers that government agencies and the executive branch already have. We're going to take a break,

but when we come back, we'll turn to how the federal government is and isn't regulating AI, and how the states are filling in the gaps. So, before the break, we talked to Dr. Alondra Nelson about how to think about artificial intelligence, why it poses risks, and why it should be regulated. And so, how did her work lead to a conversation about preemption? Well, as she's already mentioned, during her time in the Biden White House,

she helped create the Blueprint for an AI Bill of Rights, and that blueprint became the impetus for a part of President Biden's 2023 executive order on AI. And as she's already discussed, that order told the federal agencies to address the safe and ethical use of AI. Now, that's the limit of what President Biden could do, and that's because Congress has the power to legislate, not the President. So, while Biden could tell the executive branch what to do about

AI, he lacked the authority to actually preempt state law. Got it. And as soon as he began his

second term, Trump rescinded or did away with Biden's executive order and replaced it with his own.

Now, the Trump administration's approach to AI has been to turn away from a focus on safety and ethics in AI. Right. And instead, to focus on what the federal government can do to accelerate AI development. Okay. Now, Trump's executive order has called upon Congress to use its power of preemption, based in the Supremacy Clause, to override state laws on AI. Congress so far has not responded. Okay. Which has left a lot of room for the states, and so we'll

pick up our conversation there. So, we have a lot of different states regulating AI. California has been in the lead, as it often is in these areas. California just has a lot of different laws on AI. For example, you've got to disclose what kind of data you use if you're an AI developer, what you use to train your models. That's very technical, very big picture. There are also some very specific California laws that were just passed.

You know, if you're a police department, you've got to disclose if your officers use generative AI when they write their police reports. That's a good one. So, you've got a whole range of different things. So, what does that mean? You talk to a lot of people in the industry. If I'm an AI developer and I want to offer my product in California, or I want to offer my product in Colorado, which has an algorithmic discrimination law, what does that even mean? How does that

β€œwork? Well, I mean, I think the first thing to say is that we have other industries,”

you know, where you have different kinds of regulations. Insurance, for example, is regulated mostly by the states. And we've talked a little bit about consumer protection. So, I think the discourse that gets used in DC, which is a language of its own, does a lot of pearl clutching around the fact that you would have different laws in different states, although the very same people in Washington,

because they are the most adept people on what the regulatory space looks like

more broadly, writ large, know that this is true all over. It's basically true.

And I think there's something real in the phrase "laboratories of democracy." I mean, you have a new technology that is fast moving.

β€œI think in some ideal demos, would you want just one rule, you know, one rule to rule”

them all? Sure, right? But we don't live in that ideal demos. And we also know that the states are much closer to the harms. You also have to imagine being a governor, or a state legislator or senator, and you have people writing to you about being worried about the future of their children. You know, we had a scandal (I'm sitting here in Princeton, New Jersey) about nudify apps. There are lots of concerns, I think, about reading

in the news about, and experiencing, young people harming themselves; there's been a case reported about a potential homicide. And I think if you are a state legislator and you're hearing from constituents who've been denied a mortgage or screened out of a job by an algorithm, you can't just sit by and not respond to that. So I think partly

β€œit's just like folks are hearing it. I think that we have a new technology. What are the best ways”

to think about this? I mean, even with the cases you mentioned, California and New York, which have done laws around trying to require some disclosure and transparency from companies around

harms,

but they've said it really has to be intent. It can't be unintentional harms, so, you know, they're letting the companies off the hook there. You know, a place like Colorado

has attempted the first of what we might think of as an omnibus AI bill, one that covers lots of

things, you know, including harms to young people, deepfakes, discrimination. And I've just named three different approaches. And it's not clear which one of

β€œthose is going to win out or which one's going to be most efficacious. And I think it's worth actually”

letting states do this, you know, finish the work of implementing these laws, and actually find out. I just think that the harms are more likely to be on the side of not doing anything at all, rather than on trying a couple of different innovative strategies in different states and seeing. And then, you know, because there's been no federal law, there's obviously just this vacuum in the states and there's a lack of clarity. And, you know,

I think the DC conversation, the Trump administration conversation, the discourse, has really been, well, it's creating confusion. And I think what's actually creating confusion is the lack of any kind of federal guidance. It's actually the states that are trying to bring clarity to chaos. I mean, if the states are the appropriate front line for figuring this stuff out, is the ideal form of that to eventually roll up into some kind of

β€œfederal regulation that makes sense? Sure. I mean, I think what the state patchwork does is”

test things out. Some things will work. Some things will fail horribly. I think the so-called patchwork also creates some upward pressure, because, exactly to your point, Roman, when enough states act, as the patchwork gets woven together, federal policy or norms become kind of implicit. And I think that puts more pressure on the federal government to actually do something explicitly. I would also say, if we

widen the aperture just slightly, from the AI companies that we're talking about now to social media, the social media example gives us another 10 or 15 years to think about. We've seen the utter failure, right, of the federal government to legislate in that space. And to the extent that we've got anything that looks like regulation or law or governance in that space, it's coming out of these lawsuits,

like the lawsuits that we cited a few weeks ago around, you know,

β€œMeta and YouTube. And so I think, if you are a state-level, you know,”

executive, if you're a governor or a state legislator, you're thinking back to that example and thinking, we can't wait and do this again. And, as I said, the states are close to the harms; they're hearing from constituencies. Think about the way we've been governing, the social media model: the young woman who was the plaintiff, I think in the Meta case, she's 20 years old, and this happened eight

years ago or something. She was, you know, a child when this happened. And so using liability and legal cases puts us quite far away from the harms, and I think the states can be much closer. Yeah, just to back up for a moment, by way of explanation, you're referring to the

social media trials that are happening in California, where basically state attorneys

general and private plaintiffs are suing, arguing that social media platforms are harmful products, a theory with a long, storied history of legal liability in the United States. And actually they're using the legal playbook of Big Tobacco: we kind of shut down Big Tobacco because we argued that the companies knew these were harmful products and sold them anyway. And that has proven, so far, to be successful in the social media space. So I guess we could think of, perhaps, you know,

AI, that some of these products are going to be dangerous, and maybe we'll do that. Of course, and I think you're right, I understand that this is a backstop, right? We don't want to wait for the bad use case, for people to be harmed. I mean, the nice thing about regulation is you can be proactive and say, we think this is going to happen, or it is happening, and we want to protect as many people as we can within the state or within the country. My question is really

more about the companies, and not that I feel too bad for them. But if you're a company, it's pretty burdensome, I would think, that you've got to look at every state and see, like,

what is every state doing? So I would imagine that, you know, their first choice is no regulation,

right? But their second choice must be federal regulation, no? Yeah, I mean, I would disagree with that a little bit. I'm sorry; let's have a friendly quibble about this. I think that the compliance burden argument is a bit overstated by companies, right? That's just what companies do in their own interest, and their lobbyists too. And as I said, I already mentioned,

I think companies are already navigating, in other policy spaces, different consumer

protection regimes for different states, different employment laws, different...

I mean, you know, the state of Illinois has this pretty strong biometric policy

regime, and yet, you know, Clearview AI was still selling its facial recognition technology, for example. So I think that the language from companies and lobbyists, that state AI laws are uniquely burdensome or especially burdensome, doesn't really hold up when you think about these other examples, these other policy spaces. The other thing I would say is that I think what your question, which is a common question

and an important one, presumed is that if the states don't have a law, there's no other governance or pressure being applied on the direction of AI governance, which, certainly in the

β€œTrump administration, is not true. So, you know, okay, maybe you don't want to”

comply, you don't want to deal with California or Colorado, but you've got a Trump administration

that's saying we're changing tariffs every day, you know, we've gone from Liberation Day to not Liberation Day, back and forth. So companies are dealing with that, including AI companies. You've got a Trump administration that is saying, well, we don't like immigration, we're uncomfortable with science and tech immigration; if you want to bring in new technology talent, AI company, you're going to have to pay $100,000 per visa, if we allow you to

have one, to bring in a talented engineer from France or Korea or somewhere. And then they're also intervening in business. So, you know, the US taxpayer is a shareholder in Nvidia; we're a shareholder in Intel. So, the compliance burden question,

β€œI think, is much too narrow, given all of the different ways in which companies are being”

asked to respond to a kind of broad spectrum of AI governance. Yeah, and let's not forget, I should say, you know, the federal government and all of the state governments are huge customers. Yes, right. You know, customers can demand changes if they want. Procurement is an excellent vehicle. I mean, Governor Newsom just signed an executive order that I think really leaned into that, including not only safety issues,

but issues around discrimination and civil rights and liberties, which I thought was fantastic. So, we've talked a lot about sort of granular harms that are potentially happening or are happening, but I do want to talk about your thoughts on what's on the horizon, the AI horizon. And there seems to be this race to develop AGI or artificial general intelligence. So, the idea would be like, not like, please find all the cats in this picture or write my

high school essay on Pride and Prejudice. It's an all-purpose, sophisticated AI with autonomy. Now, you've spoken to a lot of people in tech; I've spoken to a few. It seems like some people in the AI policy world are extremely worried about this. We could create something that gets totally out of control, develops, like, a biological weapon, takes over our defense systems. How concerned are you about this,

β€œas a subject and an object of regulation? So, I'm concerned about it. I think some people”

are quite invested in the name and what the name means, and people are quite invested in whether it's superintelligence or AGI. I'm not at all invested in the name, and I don't really care. So, it keeps me out of some fights, but probably also keeps me out of some parties, I don't know. But I do think, you know, I prefer to use the phrase "advanced AI." Like, there are significant concerns about advanced AI. So, for example, if we think about DOGE,

early last year in the Trump administration: part of what the reporting in Wired and elsewhere was suggesting is that DOGE was breaking the Privacy Act of 1974, which said that a lot of agencies and organizations could not share data, in part because you don't want the federal government to take administrative data about you, from Health and Human Services, from Fannie Mae, from wherever, and put it into a kind of large surveillance

panopticon. And I think what powerful AI systems do is allow the interoperability of that data and the discovery of associations, dangerous things that we

could never possibly know about ourselves or about others. So, that's like not even AGI, right?

But that's just sort of a powerful extreme. So, imagine a system having access to data about everyone in the United States, everyone in the world, being able to constantly evaluate that data, run that data, and then make decisions, and, as I mentioned at the beginning about the various forms of autonomy of different AI systems, to do it autonomously. So, imagine not just all of the little

lobster claws of various agents, but like a really big claw, like a really powerful

agent, sort of acting in the world. And so, there's been some reporting, and I've seen people discussing on social media, things like: I used this agent and it wiped out my entire hard drive, or deleted all of my emails. Right? That's happening, and we're not imagining an AI agent that was sentient and all-knowing and decided that it was going to

wipe out all of your email, because you worked too hard, or because it doesn't want you to work,

or whatever. Those are just powerful systems that we're learning to use. So, then you can imagine,

potentially a system having a bit more intentionality, a bit more sort of understanding of its stakes and being more powerful. The question then becomes, and I think this is where we trip ourselves up, well, how do you regulate that? It's just so powerful. What are we going to do? You know,

β€œand before you get there, you need to imagine that companies can actually be told not to build a”

thing. You know, or they can be told that they can't ship a thing. Maybe you can't tell somebody not to create something, but you can certainly say, you can't ship this out into the world without certain controls. Like, someone needs to be able to have a kind of final decision on whether or not it ships, or to be able to turn it off and on, or you can only run it for a few hours, or it can only have access to so much compute, or so much data, you know,

and we're not having those kinds of system-wide conversations. And, to go back to the subject of the broader conversation, that is where you would want a smart, prudent federal government to weigh in, right? At that level of nuance, in both level of abstraction and power, you might want

β€œthere to be some sort of federal, I think, law or legislation or guidance.”

When we come back, Dr. Nelson explains her vision for finding a consensus on AI regulation, and whether she's optimistic the government will figure this out. I mean, you developed this idea of a kind of thick alignment when it comes to AI governance. Can you talk more about what thick alignment is and how that translates to regulation? Yeah, so there's a wonderful writer, Brian Christian, who has a really important book

that I would commend to people, called The Alignment Problem, which is really about

the kind of early years of what some people call AI safety, which is basically: how do we explain these systems? How do we interpret what they're doing? How do we demonstrate that they're safe, to the extent possible? And, you know, it was very much a technical sense of thinking about alignment. So, say the system is supposed to identify people at 98 percent, with a margin of error of two or three percent, in a facial recognition

β€œtechnology system, and for all intents and purposes, you would say that system is aligned, right?”

But we know the system is misidentifying people. We know in the Detroit metropolitan area that there have been more than half a dozen people misidentified by facial recognition technologies where someone, somewhere in the development and deployment queue, said: this is aligned, this product works, right? And so, as we're thinking about AI systems and advanced AI systems, it's not just whether or not they work technically; it's what happens, what can we anticipate or

not anticipate, when you deploy them, and how do we create a process or an understanding that allows us to think about alignment as something that needs to happen fairly continuously over time, and also as something that needs to happen in conversation with the values of different communities and different societies. So by thick alignment, I am taking up the work of the philosopher Gilbert Ryle, but also the anthropologist Clifford Geertz,

who was a professor here at the Institute for Advanced Study in Princeton, where I am, and who has this very famous essay and concept of thick description: that you don't really understand the world until you've sought to understand, contextually and deeply, what it means; how do you describe it deeply? And so my provocation to AI safety researchers, and my

collaboration actually, so it's not just critical work, is this: yes,

alignment is important. Safety, explainability, interpretability, all the things that you might put in that bucket are really important, and taken together they are an important solution set for some of the harm mitigation that we might want to do in the space of AI. But what does it mean to do that in a way that takes seriously the different contexts in which these tools might be used, the different values? So if you think about, you know,

Anthropic has created a constitution for AI, for example. Well, who decided on those values,

and are those the values that I want, or that others want? You see this kind of values conversation coming

up also in even some of the Trump administration's framing of, quote unquote, ideological bias in AI. Like, who gets to decide what's biased in AI? There's a technical

β€œquestion about bias in AI, but who gets to decide sort of what is a biased chatbot? So I think we just”

need to have a conversation, which we're not having, about what it means to try to come to a rough consensus on values, to the extent that's even possible, to try to have high-level values with which to make decisions about these technologies. So I think the AI Bill of Rights was one of the ways that we were trying to point to that. But certainly I think state laws are another way. I mean, you have states sort of saying, and you might think of these as examples of thick alignment: this is what our

constituency cares about, and this is where we're going to lean in on the regulatory space with regard to AI. The other stuff maybe we don't care about so much. I mean, I don't even know if I have thick alignment with President Trump as, like, a human. You know what I mean? Like, it seems harder and harder to have it. You know, when you're talking about all these hypothetical uses of this stuff, and if something like a program is supposed to have inherent,

you know, human values, I know a lot of those don't feel shared right now.

β€œNo, I think that's right. I will say, you know, one of the things I've been doing since I left:”

the AI Bill of Rights came out in October of 2022, and since that time I've been following its afterlives. And some of its afterlives, I think, Roman, to your point, have been in red states. So there's been an Oklahoma AI Bill of Rights introduced as a bill. You know,

it didn't succeed ultimately. But it contained all of the five principles that we discussed previously,

plus a few more that were really good and actually stronger than some of the things that we suggested. More recently, in November, Florida Governor DeSantis introduced the Florida AI Bill of Rights, which contains within it all of the five principles that we have, and lots of other things besides: deepfakes, you know, child sexual abuse imagery, a really nice clause around health insurance and being able to get a decision about algorithmic uses of health insurance,

so I totally take your point, but it's also clear that there are a few things that we agree are wrong, or that we don't want, that are suboptimal for society. And so I think you're exactly right, but I also take some comfort in these bill-of-rights alignments that pop up here and there. You sound optimistic about the future of AI regulation, right? Am I optimistic about regulation? I don't know. I mean, I think if we look at the history of

technology policy at the federal level, in the Congress, correct me if I'm wrong, but I think it's maybe not been since the Communications Decency Act of 1996 that we've passed anything like a technology law. Such a long time; that's a generation. So I'm not optimistic in that sense. I think I'm optimistic with, you know, some people are calling it the tech backlash, and I don't call it that; I don't like that framing,

but there's a growing public empowerment to speak about what people want and don't want

with regards to the way that AI systems are being developed and deployed. So when I first started

working in, you know, sort of big data, and then what became AI, you know,

β€œpolicy and research, that's how you date yourself. No, I know. You say 'big data' and”

people kind of cringe; my young friends are just like, big data, so cringe. So, you know, you would sit in rooms and hear: people can't possibly understand this. I mean, even now, you hear people saying, you would hear in DC, actually, when I was working in Washington: if you're a staffer on the Hill and you don't have a PhD, a degree in machine learning or AI, how could you possibly

even begin to offer guidance on how we should govern this technology? So, of course, you don't want people who know nothing about AI to be governing AI, but I've been encouraged by the fact that the public has demonstrated that it is not true that you have to have a PhD in AI to be able to say something about the AI governance space. So, you see it in the space of governance of data centers, where people have really engaged; that's a place where AI governance and

policy is quite tactile, right? It is in communities, it is about their water, it is about their energy use, and it's where, sort of, AGI or superintelligence lands on the ground, and it is where communities really feel they have a sense of agency around that. So,

we're seeing it, I think. I just saw in the news that, over

time, there were a lot of big projects announced, and installed, that are being

revisited. There's reporting now about how a lot of these data center agreements in various communities were done with local politicians under NDAs, so that local communities can't even know the terms of the agreement for some of these, and people are really pushing back against that. And they're pushing back against the harms to young people. They're very concerned about,

β€œyou know, suicidal ideation and how chatbots encourage it. So, am I optimistic about law?”

Absolutely not. But am I optimistic about the fact that, you know, it's getting much more difficult for companies and other elites, who really want to just drive a technology without thinking about the harms and the social implications, to do that? Because you've got a growing chorus of people, bipartisan, Roman, bipartisan, you know, saying, we don't want this. And so, optimism, no; encouragement, yes.

I think one of the things, my one point here, which I think is funny, is that the biggest proponents of AI and, you know, the broad use of it, are kind of the biggest

fearmongers of it, too. Like, I think they kind of enjoy the sense of: this is super powerful,

you just let us do what we want to, and it's going to destroy humanity in five years. Like,

β€œI think they like both of those things, so I think both of them like feed into their ego.”

They're both about power, yeah? Yeah. It's fascinating, because the fact that the alarmists are the biggest proponents is a weird dynamic. This is not like tobacco regulation, where the people who wanted to regulate were on the side of acknowledging harm, and the other people said there was no harm. It's an odd dynamic, and it's one of the things that's also mixed up in all of this, like the Florida regulation versus the California

regulation; the political valence of this stuff is much more complicated than most other things. Very, yes, it's very complicated and kind of heterogeneous, and so that's fascinating. And I think there are some very interesting essays, articles, papers to be written about the fact that, at a time of maybe the highest polarization in American society since we've been measuring, you've got this growing negative sentiment about AI, and it's bipartisan,

and that the issue set about which people agree in their dissatisfaction is growing, right? So you go from kind of discrimination, to young people and CSAM, to fraud, to, you know, healthcare, to data centers; the space is just becoming much broader. People are obviously worried about their jobs and worried about employment and what they're being told,

which goes, Roman, to your point about powerful people saying our powerful tool is going to be really great

and destroy everything, including all of your jobs, right? So, yeah, it's, you know, it's a very

β€œinteresting policy space, and it's a space, as I said, where I find political encouragement”

if not optimism. Yeah, I mean, this seems like a new opportunity for a different kind of alignment, which is really kind of fascinating. Yeah. Dr. Nelson, I really appreciate you being here. Thank you so much. It's been great to talk to you. So, that's the original seven articles of the Constitution. Thank you for joining us for all of that. Of course, there are amendments to be talked about, 27 of them, but we're going to take a pause

on the breakdown of the Constitution. There's just so much going on with Trump and the Constitution that we're going to go back to releasing our What Trump Can Teach Us About Con Law episodes. There won't be an episode in May, but we'll be back in June for Supreme Court decision season. Everyone's favorite season. The 99% Invisible Breakdown of the Constitution is produced by Isabel Angell, edited by committee, music by Swan Real, mixed by MartΓ­n Gonzalez.

Kathy Tu is our executive producer, Kurt Kohlstedt is the digital director, Delaney Hall was our senior editor. The rest of the team includes Chris Berube, Jayson De Leon, Emmett FitzGerald, Christopher Johnson, Vivian Le, Lasha Madan, Joe Rosenberg, Kelly Prime, Jacob Medina Gleason, Tallinn and Rain Stradley, and me, Roman Mars. The 99% Invisible logo was created by Stefan Lawrence; the art for this series was created by Erin Nester. We are part of the

SiriusXM podcast family, now headquartered six blocks north in the Pandora building, in beautiful uptown Oakland, California. You can find the show on all the usual social media sites, as well as on our own Discord server, where we have fun discussions about constitutional law, but also architecture, movies, music, all kinds of good stuff. You can find a link to the Discord server, as well as every past episode of this breakdown series and every past episode of 99PI, at 99pi.org.
