[music]
I'm Marissa Wong, intern at Lawfare, with an episode from the Lawfare archive for April 12th, 2026. On March 26th, a federal judge in California issued a ruling temporarily blocking the Department of Defense from labeling Anthropic a security risk. The ongoing legal challenges filed by Anthropic against the Defense Department may shape how the federal government implements AI tools in the future, especially in the military and the national security apparatus. For today's archive, I chose an episode from October 28th, 2024, in which Kevin Frazier and Aram Gavoor discussed the first-ever national security memo on AI, and what its provisions
mean for the future of AI policy in the federal government. [music] It's the Lawfare Podcast. I'm Kevin Frazier, senior research fellow in the constitutional studies program at the University of Texas at Austin, and a Tarbell fellow at Lawfare, joined by Aram Gavoor, associate dean for academic affairs at GW Law.
"The structural change that I think is the sleeper in all of this is the procurement. The fact that there's going to be streamlined procurement is going to send some waves. And that's a big policy call." Today we're talking about the first-ever national security memo on AI. This long-awaited document provides a chance to analyze how the U.S. aims to position itself in its competitive, and perhaps combative, race to lead in AI. Aram, we're talking the day after the release of the national security memo on AI, or, to impress
my friends in Washington, the NSM. It didn't emerge out of thin air, though. I think there's a sort of temptation to always see these new documents as pathbreaking, as emerging from thin air and from new policy thinking. But instead, there's a long history of AI policy building towards this moment. So before we dive into the actual content of the NSM and its significance for AI policy and national security, Aram, can you give us a sense of how this all came to be? How did we end up with this
really important document? Sure, and thanks so much, Lawfare, for having me on, and I really appreciate,
Kevin, this conversation. So I guess the big backdrop, zooming out a little bit, is there is no federal statute directly on point that regulates AI. There's a whole bunch of other federal statutes that regulate other subject matter for which AI is just a manifestation. And Congress has been pretty good at funding things or requiring certain types of training, et cetera, but this has been a predominantly executive branch policymaking model with regard to the advancement of AI policy for quite some time. The Obama administration laid out a couple of general EOs. President Trump as well; the latest from President Trump, Executive Order 13960, actually is a holdover that was not rescinded, but indeed adopted by President Biden. And for the Biden administration, the big policy products are these: in 2022, the Office of Science and Technology Policy, OSTP, issued the Blueprint for an AI Bill of Rights, which introduced not just AI principles on safety and security, but also a significant weaving-in of a Biden administration civil rights component. Then the president, on October 30 of last year, so almost exactly a year ago, signed Executive Order 14110, which laid out the broad enunciation of AI policymaking, which did include a significant amount of civil rights,
because that's part of what I think is on the political line right now. There's a lot of common sense in it, and a lot that's even tautological. And some of the advanced policymaking that talks about keeping the technology within American supremacy, fostering it, utilizing it within government, that's all within the cone of what I would view as relatively non-political. And then the application of civil rights principles is really where there's a bit of a disagreement on how it's played out. So following Executive Order 14110 and the buildouts of it, the Office of Management and Budget promulgated a memo, M-24-10, on April 26th, if I recall, of this year, to all the non-national-security and non-intelligence-community agencies, laying out a pretty predominant model of strictures, in part borrowing from the GDPR for data privacy, with regard to high-impact, high-use cases. M-24-10 lays out that if there's civil-rights-impacting or safety-impacting AI, especially GAI, generative AI, there has to be a higher level of scrutiny and red teaming.
Then in September, I think the 26th, M-24-18 was promulgated by the Office of Management and Budget,
which really lays out procurement particulars with regard to AI. And then we have the instant product, or series of products, that we're talking about today: the National Security Memorandum that was rolled out yesterday, as required by Executive Order 14110, Section 4.8. And associated with it is a Framework to Advance AI Governance and Risk Management in National Security. So the National Security Memorandum is really meant to be a doctrinal
document laying out U.S. military, intelligence community, and national security doctrine for how AI is to be used, adopted, kept safe, and kept responsible. The framework is meant to be a somewhat modular document that is updated with regard to AI use restrictions, risk management, cataloging and monitoring AI use, and training and accountability. And then the last of the policy products, to which we do not have access, is a high-side, so subject to classification, document that I understand relates to export controls. So that's how we get to today, procedurally.
I can't resist the temptation, because we're a mere, I think, 12 days away from the election. So there's an election-sized elephant in the room, which begs the question, why does this even matter? Not to sound too cynical, but we're on the precipice of some change in administration. And so a lot of the conversation around this NSM has been, is this actually a sort of NSC-68, the historic document that changed our policy approach in the Cold War, or is this just
a nice, "hey, we care about AI and it has national security implications"? How can we think about that before we dive into the weeds of the actual document? Sure, so to unpack that a little bit, there's, I think, three components I want to lay out. One is from the Biden administration perspective, at least from what I saw, because I was at the War College yesterday when National Security Advisor Jake Sullivan rolled it out, and I got the vibe in the room and had a lot of different side conversations. And also, this, in one respect, is the delivery of a product that was promised last year, right? And there was a 270-day timeframe within which the National Security Memorandum draft had to be provided to the White House, which triggered sometime around, I think, late June, and then, given the breadth and the reach and the depth of the National Security Memorandum, the framework, and the high-side document, which I haven't seen, it makes sense that it might take up until around now for it to be issued. Will it survive? Well, that's highly probabilistic, in part based on the election. President Trump, or at least right now candidate Trump, for the Republican Party platform, I'll read you the text: artificial intelligence. We will repeal Joe Biden's
dangerous executive order that hinders AI innovation and imposes radical left-wing ideas on the development of this technology, and in its place, Republicans support AI development rooted in free speech and human flourishing. So that's quite critical of Executive Order 14110, for which this
National Security Memorandum is a follow-up document. So there's some non-zero risk of this going away if President Trump is successful in the election that's taking place in a couple weeks. However, I think if you look a little bit deeper, there's a lot of policy in here that undoubtedly will survive, setting aside the civil rights emphasis and components that I think are at least one of the main areas of contention across the political line. In terms of the longevity also, in the presidential election,
if Vice President Harris is successful, I think there's some likelihood of all of this surviving, and perhaps, in a Harris presidency, based on the political rhetoric, she may lean into the civil rights component even more. So I think this takes time to pressure test. It takes time to work through, and the longer this policy, or at least aspects of it, remains stable, that's when we'll be able
to tell what this means. Because ultimately, if the executive branch changes a lot of its functionality, and there are some significant structural changes that we can get into, and also if the private sector takes this as a demand signal, which, in some respects, at least Jake Sullivan yesterday said was asked for by significant segments of the private sector, then you do have durable change. I think really the big premise that I want to talk about is the nature of the technology and how it's developed versus other transformative national security technologies. That sounds excellent. In terms of getting into the weeds now, looking at the specific national security memo provisions, what do we see this suggesting about policy? We see some big headlines in the memo
doubling down on AI, focusing on AI, channeling AI, directing AI. It really is emphasizing that no longer are we taking a wait-and-see approach to AI and national security, but instead using the weight of the government's procurement power, as you were hinting at, to really direct AI in a specific direction, and that direction is making sure that the U.S. maintains its supremacy, especially vis-à-vis rivals like China. So what are the core provisions that you want to call out here
that listeners should be attentive to? Sure. So with AI itself as a technology, let's look back to the pages of history and see other transformative technologies that were fostered by the U.S. government and its research arms and spending and funding. Nuclear physics, led by the U.S. government. Information technology, significant advances in computing, rocketry, stealth, submarines, any kind of really advanced communication technologies. Many of those were at least funded and developed, if not conceived, in the context of government investment.
This is the first such technology where it's the private sector that led the way, and that lays out a completely new model: the national security infrastructure of the country taking a technology that it itself did not develop, looking to apply it, and then further advancing it within the executive branch. So that's why this is a little bit different. All of the government
structures leading up to this point were based on that prior model that I laid out. Here I think there's a significant change, and there is a big press, I think, in this National Security Memorandum to not just encourage the adoption, or the responsible adoption, of AI, but to encourage the adoption and development of frontier models. This is the advanced stuff that is not publicly available to us. This isn't about taking, let's say, an earlier model of GPT and then applying it in the context of a national security application, where it's already inferior by the time it's ready to go, housed in secure data facilities for utilization on the high side. This is something much more advanced and sophisticated. So, for example, Jake Sullivan yesterday gave the example of a precision missile system. Well, the model that we currently have is we have static weapon platforms,
right? And it typically takes, you know, some number of years before you have additional variants of them, and then you build upon the platform, usually in a combination of hardware and software. The model that the National Security Council is thinking about, and that it is looking to advance, especially with frontier AI models, is something very different. So let's go back to that static platform, a precision-guided missile. The hardware might be static for a longer period of time, but it's entirely possible that, for it to be most effective, the software might be updated on a far faster interval, maybe monthly, especially to keep up with, and perhaps stay beyond, any sort of battlefield adversary's electronic warfare capability. So it's a different mindset that is much more agile, much more flexible, and the goals also kind of build off of, like, the post-9/11 counterterrorism, CT, ODNI-level de-siloing. So getting rid of silos via cross-cutting mechanisms. The goal also, I think, is for this NSM to foster
sharing of these frontier technologies across agencies, sharing potentially even data sets, although those get a little bit trickier with regard to the civil rights components and the high-impact, high-use cases. So that's something that's very different. A document that holds that idea, that concept, that is something that's quite transformational. I think what also stands out with respect to the NSM's coverage is the fact that this isn't just a national security document
in terms of just referring to the Pentagon or just referring to the Armed Forces; instead, we see a whole panoply of issues being addressed. There is industrial policy here. There's energy policy here. There's administrative law and procurement law. So can you give us a sense of some of those specific provisions and what sort of directions we saw issued to specific agencies in the NSM? Sure. So, Kevin, I want to acknowledge, I've read everything, but I've only read it once, and there are all of these different documents, and I'm still getting my head together as I'm writing work product on what that might look like in the context of a blog post. So we can walk through this. Just looking at the title of the document, the document tracks with the subject of the memo, three distinct things: first, advancing the U.S.'s leadership in AI; second, harnessing AI to fulfill national security objectives; and third, fostering the safety, security, as well as trustworthiness, of artificial intelligence. So the audiences are many. The audiences are U.S. policymakers, govvies, us listeners on this pod, strategic partners, strategic competitors, as well as industry. So this is meant to be, as Advisor Sullivan described yesterday, a demand signal to industry, as well, to indicate: here's what the U.S. government wants to do, here's what it's looking for, here's how you should design products and design offerings. And it also serves as a permissioning mechanism for the government to move forward. Now, I think the challenging piece is that if I'm looking at the rollout of M-24-10, which is that memo from the Office of Management and Budget that applied 14110 to the non-national-security agencies, the juxtaposition of "you should be comfortable using AI, use AI," but then also "red team a lot," is a little bit of a challenge. It's a little bit of a shock, I think, for some of the agencies; they're trying to figure
out how to balance it. Now, this NSM is not just sort of tepid with regard to GAI, like M-24-10; it's actually saying you need to really reach for the stars here and try to do some very advanced things, while still having some of those countermeasures in place. Now, keep in mind, some of the countermeasures are tautological: comply with the Constitution, comply with civil rights laws. And many of those, if it's domestic use, definitely apply. If it's foreign use, you're looking at the National Security Act of 1947 and Executive Order 12333, and that's where that type of U.S. constraint has historically ended, right? That's how the U.S. government, or the president, is able to order extrajudicial killings abroad in the interest of national security with almost no oversight. Just, you know, there's the War Powers Resolution of 1973, and there's disclosure to Congress, et cetera, that type of thing. So that, I think, is going to be a little bit tricky. But also, on top of that, a lot of the agencies have already erected, and have been working on, privacy considerations, et cetera, and some level of responsible thinking on all of this. Now, of course, there's a level of opacity
with regard to the IC, but at the same time, looking at the pages of history, the U.S. has actually been the world leader in privacy as a concept, even, and also in many of the civil rights concepts that are sort of adopted in the world today. So in that respect, there's something to latch on to, but there's a little bit of tension there, between full supremacy, fully leaning forward on national security, and then also some restraint.
But I think also what the drafters of this document would be thinking, and I'm able to infer it from the broader text, is that because of the private sector lead on all of this, there are some bad examples where, for some of the purveyors of the technology in Silicon Valley, some of their employees got cold feet about helping the government on some of the use cases. So it makes sense that this document is aligned in such a way that it doesn't create too much blowback for the private sector, so they feel like they can lean in a little bit more without the hesitation of, "what's this stuff going to be used for?" So there's a lot of balancing going on here. Well, when OpenAI, for example, initially was getting a lot of attention for ChatGPT's initial release, there were provisions there that said they did not intend to use their models, or to allow their models to be used, for military purposes. Over time, we have seen those sorts
of provisions disappear or be softened. Similarly, Anthropic, the arguably more safety-oriented AI lab, has seemingly been leaning more and more into national security. And so the sort of inevitability of the militarization of AI seems to have finally happened, for better, for worse, what have you. As you pointed out, this is the doubling-down moment. This is the moment where the Biden administration, and most likely any future administration, has decided that the U.S.
is going to lead in the national security uses of AI, especially vis-à-vis China. And I think the provisions that are spread throughout the document really emphasize the importance of pushing back against rivals; sometimes China is explicitly named, sometimes it's just implied. But we see this huge focus on making sure that China isn't able to, for example, steal IP from the labs, and isn't able to interfere with the supply chain with respect to AI development. So, seeing the national security ramifications of this, it's hard to miss that this is very much in the context of a greater wave of AI being a somewhat inevitable force for U.S. aims, especially against China. Have you picked up any other insights? Did Advisor Sullivan have much to say about that China dynamic? Yes, yes, he did. So the posture of the Biden administration, and this is pretty consistent, is that of strategic competition. It's not just strategic competition, but also strategic alignment, where there is an alignment between the U.S.'s and any sort of foreign
adversary's interests. So one example of this would be the coordination between the U.S. and the PRC to disrupt the supply chain of fentanyl precursors, essentially drugs, to the United States. That is one example that he enumerated yesterday of where there's an alignment of interests. In another respect, this document, although not naming the PRC, and there are other adversaries besides the PRC as well, or at least strategic competitors,
I would even say adversaries, though officially it's a strategic competitor, says that the U.S. should vigorously compete where interests are not aligned with any strategic competitor, and potentially even strategic allies as well. And that's how the U.S. maintains supremacy. But the goal also, and this is in the backdrop, is a concurrent emphasis on increased communication, having some sort of communication line, even, you know, mil-to-mil communication, PRC-U.S. mil-to-mil communication, so that that vigorous strategic competition does not descend into conflict. Nobody wants conflict. The plan is strategic competition, open communication where it is appropriate,
maintaining some level of connection, searching for ways to have commonality,
because this is also designed to be a U.S. export as well. It's a vision of a free and democratic world with regard to the responsible and sound adoption of AI. So, you know, outside of this document, the U.S. has participated at the Bletchley convening a year ago, and in Seoul as well, laying out that type of a framework and trying to build connections and comity. Yet you're right, there's a significant component with regard to maintaining intellectual property and the technology itself. So that's why there's a high-side export control, but that's sort of unsurprising. Like, this is not the first time the U.S. has engaged in export controls. This is just a manifestation of a long-standing U.S. policy on these types of things. Also in reference to certain types of data, right? The U.S. government has done a fair bit to catch up from the types of conversations that we had years ago, where something like 8% of machine learning PhDs found their way into public service, just because, if you can get $800,000 right off the bat at a Bay Area company, you're going to do that. So there are increased amounts of
tech-industry-sponsored fellowships in the U.S. government. There are now chief AI officers, sometimes dual-hatted with the chief data privacy officer, like you have with the Department of Defense. So this type of learning process is moving forward. Even with regard to the GSA's 2024 AI training series, a colleague, Jessica Tillipman, and I were responsible for one third of those sessions. We handled procurement with regard to AI, and then Stanford and Princeton handled leadership and then the technical aspects of it. This really is, I would say, in the good sense, a whole-of-government
approach with regard to the technology. And I think, regardless of who wins in the general, a lot of the non-controversial stuff, of which a majority of it, in my view, is just good old, well-thought-out, well-developed policy that many people should agree on and that should not be politicized, I think that's going to continue on. And then, of course, depending on the outcome of the election, it's pretty classic: if you have a cross-party overall office transition, you know, look in the first two weeks for a lot of things being
rescinded. Well, so, as you pointed out, I think you're one of the few folks who I would call a sort of government AI whisperer. If anyone knows how government is doing with respect to adopting AI, it's got to be you. And one of the provisions that really stuck out to me was the emphasis on recruiting and retaining AI talent. And so, with respect to the idea of perhaps a change in immigration policy, or a liberalization of some immigration rules, to try to bring in more of that AI talent,
is that one of those bipartisan measures you think might withstand the test of either administration,
Harris or Trump? Yeah, I think, so you mentioned immigration. I think there's a significant commonality of interest with regard to making certain types of visas, like non-immigrant visas in particular, and potentially even immigrant visa categories, available for people who have that type of talent and knowledge. Those types of categories already exist, right? There are, like, L visas for specialty knowledge, exceptional folks can have an O, and certainly this could serve as a mechanism for which there could be commonality and sort of a moment of bipartisan support on immigration with regard to this type of tech and keeping that talent in the United States. And then, of course, there are F visas, there's STEM OPT, there are all these different types of mechanisms for which there's a lot of discretion within the executive branch to be hospitable. And I would be very surprised if that wasn't exercised. Although there's also an actual security risk to that too, right? So I think it has to be balanced. And thinking about the
political significance of this NSM, for those of us who weren't at the War College listening to Advisor Sullivan, and then getting the sort of immediate spin from stakeholders who are in the know, what were the vibes in the room, right? Was this regarded as some groundbreaking moment, or was there a certain degree of ambivalence about its ramifications among the key stakeholders in response to the release of the memo? So, there's definitely enthusiasm,
definitely among the political ranks, but that's unsurprising, right? What I was most impressed by was, for the non-political folks throughout the federal government that I've been in touch with, there's general support for this. I think there was a significant level of almost apprehension, if not some level of fear, with regard to the adoption of advanced AI models and algorithms. And this document is a permissioning mechanism in some respect. It's a philosophical facilitation mechanism, because undoubtedly Silicon Valley, you know, Bay Area influence is in this, right? This is not necessarily, like, a restrictionist document. It also isn't necessarily, like, a heavy competition document either, right? There's almost, like, a pragmatic nature to the document as well. If we're talking about the most advanced frontier models, well, there's only a couple of players who can produce those. And if you're, you know, just trying to get into that type of, you know, massive compute, quantum computing, really sophisticated applications that require, let's say, hardware that's in the possession only of the U.S. government, that is
in part the audience as well. So what I have not observed, speaking informally, and nothing untoward, with many agencies, is any adverse reaction of, "oh man, this is going to suck." That's not the vibe that I'm getting. And my guess is, no matter what, the executive branch
is going to be focused on a lot of the key principles. I think there is a distinct difference in views on fostering safety, though. Like the NIST AI Safety Institute, that's something where there's going to have to be a fair amount of attention as to how successful that is, whether it works, right, as an entity. If we're taking the text of the Republican platform, the emphasis on free speech, at least what I understand it to mean in that text, really is less about laissez-faire and more about just providing flexibility within the industry itself for the direction that it wants to go in. And that necessarily means fewer direct restrictions. There was certainly an early win for the AISI, the AI Safety Institute, where the NSM calls out the AISI as the singular point of contact for AI industry stakeholders. And so I really wonder about the longevity of that provision, especially if we see a Trump administration that maybe isn't as willing to embrace the AISI, because I can also imagine quite a few agencies are thinking,
huh, you know, I was developing a good relationship with Anthropic or OpenAI or what have you, and now this upstart institute becomes the focal point of their attention. I think it's going to be a really interesting maneuver, whether that sort of coordination has any legs or not. Yeah, so, I mean, I think this sort of remains to be seen. I think you fairly stated sort of the skeptical position. I'll take it one step further. Is it a regulatory agency that is unsupported by a statute or an express authorization of Congress, especially at a time when there is judicial skepticism for those kinds of things, with Axon, Arthrex, Loper Bright, Kisor, even Corner Post with regard to the temporality of suit, Jarkesy, a number of other doctrinal and judicial realignments of the relationship between executive branch administrative agencies and the regulated public that are in favor of the regulated public? So I think a lot of it remains to be seen. I've gone on record myself, and I did this right after, like, the big Senate testimony of Sam Altman, and Google, and actually IBM, in May of 2023, thinking that there should be some federal regulatory presence, but almost as an advisory mechanism that has a two-year reauthorization, that has relatively weak powers, and is really sort of meant to be a consensus-building entity, providing guardrails for really the types of use cases that are pretty significant.
So, for example, non-consensual deepfake pornography, right? That's a good one that nobody thinks should be permissible in any way whatsoever, at least domestically, right? So this is really just, like, the wait-and-see. And of course, if the thing is designed without a statutory framework to undergird it, it could survive, or it could be a day-one, "and therefore it is gone" type of thing. Before we go our merry ways, before we inevitably get back together again to discuss how this NSM is evolving and being received,
I wonder if there's anything that you were surprised was left out. Was there anything you were looking for and anticipating that perhaps you didn't see explicitly expressed? That's a good one.
I haven't. I think I'd have to go through maybe two more iterations of reading all of these materials; it's, like, 14,000 words in all. So give me, like, two more hours, and then I'd be able to really analyze the negative space associated with it. But really, I think the structural change that is the sleeper in all of this is the procurement. The fact that there's going to be streamlined procurement is going to send some waves, and that's a big policy call. The procurement structures that exist predominantly are meant to foster competition, correctness, thoroughness, all of those other features that are all good policy, and the intentional choice to focus on a streamlined, cross-cutting procurement policy certainly is an expression of seriousness about really staying on the cutting edge, because you're doing that with, at least structurally, a higher likelihood of, or a lower ability to dissuade, concepts
like lock-in, right? Where there's a couple of vendors and they're just there, and it's more difficult for smaller players to get in. But that's the great game that the U.S. is in right now, and that's sort of been consistent with a lot of other major procurement mechanisms in the past. The difference here is, you know, that the U.S. itself developed the SR-71, right, with Lockheed Martin, and that was planned, and it wanted to do so for a while, and, like other major military technologies, there are always a couple of big, big players who can bid, you know, in a sealed bidding that's classified, and then ultimately one is selected and it moves forward. But here the technology, again, I'm circling back, already exists. So there's a latent capability that has already demonstrably proven, with adequate compute, mechanisms of adapting to confabulation, et cetera, that really needs to have the applied use case, and then also perhaps certain governmental capabilities; like, you know, there's massive compute out there, but then there are real quantum computing capabilities within the U.S. government's domain as well, and sort of mirroring those, stitching them together and integrating them in a way that is consistent, integrated, and sufficiently sustained to develop fruitful use cases that we don't even know of yet today, right? Like, it's still discovering, "well, it'd be really good for this," that type of thing. So, like, one example, you know, that I gave in the executive-branch-wide training series on national security
AI procurements: I got a question, a simple question from the audience, but a good one, an audience of like 1,400 people, which was, well, how accurate do these things need to be? Well, we're talking about generative AIs, whether they are large language or large graphical models. Well, if you're four days out from a hurricane, you know, you're from Florida, and if it's for meteorological purposes and you're able to be 5% more accurate than the best
model out there, that's awesome. But if you're engaging in utilization of an integrated AI for,
let's say, theater defense for an aircraft carrier battle group, and you need to have, you know,
a pretty clear understanding of targeting in a 200- or even 300-mile radius, especially to be able to deal with hypersonics, it better be pretty darn accurate, so you're not mistargeting, let's say, an innocent commercial airliner that happens to be flying a couple hundred miles away, just on its merry way. So those are the types of frontier capabilities that I am inferring, because obviously they're not going to be saying that in public, with no Chatham House rules or
classification regime. Well, and I think, too, the thing that was maybe missing on my end was,
correct me if I'm wrong, just an appreciation for the amount of resources that
have to be allocated toward this effort. I mean, when we talk about spending on AI, this is not
millions of dollars, hundreds of millions of dollars, billions of dollars, hundreds of billions of dollars, but really, to be on the frontier, trillions of dollars. And the magnitude of those resources, in my opinion, maybe wasn't fully expressed there. But from my understanding, what I've heard is that Advisor Sullivan, for example, did make some pretty astonishing statements about just how much energy development, for example, we're going to have to spin up if we're going to realize
frontier AI on a sort of national scale. Seeing perhaps 25% of all energy production go toward AI is not outside the realm of possibility with this new vision of AI as conceived by the NSM.
I agree. I mean, that's where I've been just in the past couple of weeks. The only technology that exists
to be able to provide that type of energy, especially if you're considering the rest of the
grid that's under pressure from electric vehicles, from EVs and things like that, is nuclear, right?
And I think that actually is perhaps the strongest case for the return to and reconsideration of nuclear energy that I have seen in my lifetime. And yes, I agree that it probably is some trillions, but over a period of time, like maybe 10 years or so. But what I think this NSM does, and does well, is I think there's a level of cognizance that AI could be, and perhaps is becoming, like the new counterterrorism. For those of us in government 20 years ago, and I was
like an intern 20 years ago, but I was in government like in '07. It still counts, still counts.
Yeah, I mean, the U.S. was still in Iraq, and the Green Zone was still active. It was just everything. There was just throwing money at CT, CT, CT, and a lot of it was waste. Some of it was highly effective, and there was a lot of structural change, like the Homeland Security Act of 2002, the REAL ID Act of 2005, the Patriot Act. All these different statutory schemes, you know, amending the National Security Act, creating ODNI, all of these different concepts that laid out different
doctrine, as well as structures and investment. I think there's a cognizance that we don't want
AI to be the next CT, in the sense that, you know, you have to use that flash word to get more
funding, and you can just do random stupid things that, you know, you just want in your own sub-agency, and therefore you get the green light. There has to be a more intentional application. So we're not talking, I mean, I'm sure Copilot is very, very useful, but not, like, for purposes of, you know, helping those aircraft carrier battle groups function. We're not talking about that. So I think this takes steps to sort of learn from some of the mistakes from the war on terror.
And again, I don't want to trash that too much, because the U.S. was in a very reactive posture. I mean, you need to do a lot of things really, really fast, and whenever you do that, you sort of get a certain result that's suboptimal. Like if you look at the, you know, the massive loan schemes, the cash loan schemes during COVID, right? There's a ton of oversight, and there's a ton of misapplication, but that was part of the design. The point was just to get the money
out fast, and the cost was deemed acceptable at the time of decision. So here, I think there's a more
intentional, structured way to do it. Some of the provisions, 3.3(c), lay out that, yeah, the AI Safety Institute under NIST, NIST getting in the space for this stuff, was a Trump administration thing. Now, I think, again, the policy differences will be with regard to the civil rights things, with regard to even looking at the Framework to Advance AI Governance. So this is the framework document, page 3. So there's, like, the big no-no's, the use restrictions. Well, some of them are pretty
tautological, right? Don't unlawfully suppress or burden the right of free speech or the right to legal counsel, especially for U.S. citizens. Well, I'm happy that it's in there, because it needs to be in there, right? And that's just broader constitutional protection. But then there's a couple of other ones, too, which are pure policy calls: detect, measure, or infer individuals' emotional state from data acquired about the person, except for a lawful and justified reason.
So that is something that is clearly a policy call. Or infer or determine, relying solely on biometric data, a person's religious, ethnic, racial, sexual orientation, disability status, gender identity, or political identity. That's, you know, one of the types of things that, I'm sure, you know, Republicans are looking at. But then there's a lot of other stuff,
where it's very straightforward, you know: do not remove the human in the loop for actions
critical to informing executive decisions by the president to initiate or terminate nuclear weapons
deployment. Thank you. That's good. We want that in there, right? Thank you, thank you, thank you.
Thank you for the human before the nuke. This was maintaining the Article II core executive commander-in-chief prerogative. I am happy that's written down in the document somewhere. There we go.
Well, for the rest of it, we will indeed, unfortunately, have to wait and see. But Aram,
your two cents on this are always worth a nickel, even with inflation. So thank you very much for
coming on, and I'm sure this is not the last time we'll be talking. Always a pleasure.
The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get
ad-free versions of this and other Lawfare podcasts by becoming a Lawfare Material Supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get the podcast. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th.
Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from Alibi Music.
As always, thank you for listening.


