The Lawfare Podcast

Scaling Laws: Can AI Make AI Regulation Cheaper?, with Cullen O'Keefe and Kevin Frazier


Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law.

Transcript


The Electronic Communications Privacy Act turns 40 this year, and it's showing its age. On Friday, March 6th, Lawfare and Georgetown Law are bringing together leading scholars, practitioners, and former government officials for Installing Updates to ECPA, a half-day event on what's broken with the statute and how to fix it. The event is free and open to the public, in person and online. Visit lawfaremedia.org/ecpaevent for details and to register. It's the Lawfare Podcast.

I'm Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, and a senior editor at Lawfare. Today, we're bringing you something a little different. It's an episode from our new podcast series, Scaling Laws. Scaling Laws is a creation of Lawfare and Texas Law.

It has a pretty simple aim but a huge mission.

We cover the most important AI-and-law policy questions that are top of mind for everyone

from Sam Altman to senators on the Hill to folks like you. We dive deep into the weeds of new laws, various proposals, and what the labs are up to, to make sure you're up to date on the rules, regulations, standards, and ideas that are shaping the future of this pivotal technology. If that sounds like something you'd be interested in, and our hunch is it is,

you can find Scaling Laws wherever you subscribe to podcasts. You can also follow us on X and Bluesky. Thank you.

When the AI overlords take over, what are you most excited about?

It's not crazy. It's just smart.

And just this year in the first six months, there have been something like a thousand laws.

Who's actually building the scaffolding around how it's going to work, how everyday folks are going to use it? AI only works if society lets it work. There are so many questions that have to be figured out. And nobody came to my bonus class. Let's enforce the rules of the road. Welcome to Scaling Laws, a podcast from Lawfare and the University of Texas School of Law that explores the intersection of AI, law, and policy. I'm Alan Rozenshtein, Associate Professor

of Law at the University of Minnesota and Research Director at Lawfare. Today I'm talking to Cullen O'Keefe, Research Director at the Institute for Law & AI, and my very own Scaling Laws co-host Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare. Cullen and Kevin have written a new paper and accompanying Lawfare article,

arguing that AI itself could dramatically lower the costs of complying with AI regulation. We discussed the concept of automated compliance, the limits of compute thresholds, and a novel proposal for automatability triggers that would tie the activation of new regulations to the availability of cheap compliance tools. You can reach us at scalinglaws@lawfaremedia.org, and we hope you enjoy the show. Kevin Frazier and Cullen O'Keefe, welcome to Scaling Laws.

Thanks for having me. So you all wrote a really interesting paper about the effect of AI potentially lowering compliance costs for regulation, and specifically in the context of AI regulation. But before we get into that paper, let's just set the scene. Let me start with you, Kevin. What is the general problem of regulatory compliance costs, just outside the AI context? I mean, in the paper you provide some really interesting, striking examples, for example,

you know, $55 billion for California's privacy law or outside the tech context,

the quote unquote nuclear premium, which adds double-digit percentages to construction materials, and on and on. So just describe overall kind of what the current landscape of compliance costs is, and then how they map onto the AI policy debates that we're all having.

Yeah. So I think what's really important here is to frame how compliance costs vary

by the size of the company, right? So for the sort of largest companies, let's talk about Meta. Let's talk about Google. Let's talk about OpenAI. They have whole compliance teams, oftentimes hundreds if not nearly thousands of lawyers who are just paying attention to: what's the latest regulation? How can we streamline compliance with that regulation? And they're generally going to kind of float and get by whatever regulatory hurdles are thrown their way.

While that's going to be a substantial cost, as a fraction of their total operational expenditures or as a fraction of their revenue and profits, it's kind of de minimis. They're able to comply in a fairly straightforward fashion. But if you look on the other end of the spectrum and think about the startups, whether in the AI space or generally just any small firm, complying with any set of regulations is going to be a lot more onerous, because when you start

something like a new business, your first hire isn't usually an attorney, right? We're expensive.

We're not exactly fun. You don't want to have us around. And so instead what do you do if a new law gets enacted? Maybe you just ignore it. And then you're kind of screwed when

you're found in non-compliance, or you have to turn to outside counsel. And that means looking to

a big law firm who charges big-dollar fees, and suddenly something as small as just updating your privacy policy, for example, may cost around $5,000 in outside counsel expenses. And for a startup, that's a significant amount of money when the usual average operating expenditure for a startup is around $55,000 per month. And so compliance costs are really

this question of number one, how is it impacting you in terms of just those pure operational

expenditures? But then, as we also point out in the paper, you have to pay attention to the opportunity costs. All the time that you spend collecting the requisite forms, touching base with the right administrators, and so on and so forth. That's time you could have spent doing other, more productive things for your business in particular.
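To put rough numbers on that disproportionality, here is a small illustrative calculation, an editor's sketch using the figures Kevin cites plus a made-up large-firm budget for contrast:

```python
# Toy comparison of the disproportionate burden discussed above, using the
# episode's figures ($5,000 outside-counsel task, ~$55,000/month startup
# opex) and a hypothetical large-firm monthly budget.

def compliance_share(task_cost_usd: float, monthly_opex_usd: float) -> float:
    """Fraction of one month's operating budget eaten by a compliance task."""
    return task_cost_usd / monthly_opex_usd

print(f"startup:    {compliance_share(5_000, 55_000):.1%}")       # ~9.1%
print(f"large firm: {compliance_share(5_000, 500_000_000):.4%}")  # ~0.0010%
```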

So Cullen, you've been involved in a lot of efforts to develop frontier AI regulation through your organization, the Institute for Law & AI, of which I should say I'm currently also a part, as is Kevin, in a kind of part-time capacity. I'm not sure I would necessarily call you guys an AI safety organization,

but I think it's fair to say that you're AI safety adjacent, or AI safety curious;

certainly you're in a lot of those same conversations as AI safety folks. How do you, and maybe more generally how does the AI safety and AI regulatory community, tend to think about compliance costs, to the extent that they even do, and should they think about them more? Yeah, so at LawAI, I think it's right to say that we take AI safety related issues pretty seriously and have done work kind of sketching out what forms of frontier AI regulation might look like,

but I think we, and maybe not all but definitely some of the other actors in this space, try to be attentive to how you could tailor frontier AI regulations to capture a lot of the safety benefits, while also minimizing the costs on actors that are maybe not contributing as much to some of the frontier AI risks that we are worried about. And historically, one of the main ways that people in the kind of frontier AI safety community have tried to thread that needle is by using something

called compute thresholds. This is a topic that I assume has come up on Scaling Laws before, but just to refresh your audience: the idea here is that AI systems can be trained with different amounts of compute. There tends to be a relationship between the amount of compute used in training and the capabilities, and therefore maybe the risks, of AI systems. And compute is also quite expensive, as people probably know. And so one nice thing that you can do, potentially, is set what's called a

training compute threshold, where you say that this type of regulation will only apply to models trained with, say, 10^26 floating-point operations (FLOPs). And what this means is that this would only apply to firms that could afford that amount of compute, and even though it's not like an iron law or anything, those firms would tend to be the better-capitalized firms, of the sort that have been in the lead, and therefore might be better able to absorb compliance costs. And then

firms operating below that threshold would be exempted. So that's one way, historically, that people have tried to address this problem. And so maybe one way of framing and motivating the paper is: can we improve on that as a methodology for differentiating between firms that can easily eat compliance costs versus not, or otherwise make the trade-offs a bit more sensible?
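To make that concrete, here is a minimal sketch, an editor's illustration rather than anything from the paper, of how such a threshold check works, using the common back-of-envelope estimate of roughly 6 FLOPs per parameter per training token; the threshold matches the 10^26 figure mentioned above, and the model sizes are hypothetical:

```python
# Illustrative sketch of a training-compute threshold, as discussed above.
# Assumptions: the common ~6 * params * tokens estimate for training FLOPs,
# and a 1e26-FLOP regulatory threshold; all model numbers are hypothetical.

THRESHOLD_FLOPS = 1e26  # the 10^26 figure mentioned in the episode

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def regulation_applies(n_params: float, n_tokens: float) -> bool:
    """A model is covered only if its training run crosses the threshold."""
    return training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

print(regulation_applies(7e9, 2e12))   # small open model: False (exempt)
print(regulation_applies(2e12, 1e13))  # frontier-scale run: True (covered)
```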

Let's stay on the compute threshold point for a second, because as you point out, that has been the standard way of doing it, and it has a certain intuitive appeal. But you all point out in the paper that increasingly that may not be a useful way of distinguishing, on the one hand, the models that we are potentially worried about, and on the other hand, the sorts of companies that can afford to pay these compliance costs. Let me stay with you, Cullen. Why is that? What recently has been

happening that is making the compute threshold approach perhaps no longer fit for purpose?

Yes, you know, this is somewhat old news in the fast-moving world of AI, but more or less over the past two years we've seen this emerging paradigm

called reasoning models, right, and one of the key insights of reasoning models is that they in some sense trade off training compute for test-time compute, or inference compute. That is to say, a model that took less compute to train can kind of think for longer when you ask it for the answer to a question, and perform as well as a model that took more compute to train but is only given a

single kind of forward pass to complete its answer. And I think a lot of people expect this to mean

that over time the amount of compute needed to kind of give rise to a certain capability level will

go down. There's kind of other reasons to expect that as well: firms are always finding new ways

to make their training runs more efficient, and compute costs are also coming down, right? So there's all these kind of secular trends that tend to point to fixed FLOP amounts being cheaper to hit, and also fixed FLOPs corresponding to greater and greater capabilities. So I think, you know, if training compute is a reasonable proxy measure, and I don't have a strong view on whether that's still the case, I think it's a reasonable guess that it might be appropriate, but if it is, there's

a bunch of kind of secular trends that mean that it's not going to be forever, and may not be very much longer either. And just one small thing to add on here: I think that the FLOP-based governance, or FLOP-based trigger for compliance expectations, also misses some of the new risks that are emerging in a lot of the AI discourse. So for example, in state legislatures around the country, AI companions are now among the top issues that they're focusing on. You don't need, pardon my French, a shit ton of compute to design an AI companion that's going to drive young users towards certain behaviors. And so, you know, grounding a lot of AI legislation on that proxy, it depends on the risk you're focused on. I agree with Cullen that especially for those sort of frontier risks it may be a reliable proxy, but for the folks who are concerned about the AI issues that are oftentimes headline news these days, I think it's particularly ill-suited.
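As a toy illustration of the trade-off Cullen describes, one could model capability as growing with both training and test-time compute; the functional form and constants below are entirely made up by the editor, and the point is only directional, namely that a threshold keyed to training FLOPs alone would miss the second model:

```python
import math

# Toy model, with invented constants: capability grows with the log of
# training compute and, for reasoning models, with the log of test-time
# (inference) compute spent per query.
def capability(train_flops: float, inference_flops: float) -> float:
    return math.log10(train_flops) + 0.5 * math.log10(inference_flops)

# A model trained below a 1e26-FLOP threshold but allowed to "think longer"
# can match one trained above the threshold answering in a single pass.
print(capability(1e26, 1e12))  # big model, single-pass budget  -> 32.0
print(capability(1e24, 1e16))  # smaller model, long thinking   -> 32.0
```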

So it sounds like we have the following problem, which is that the current compute thresholds are insufficient to capture the world of things that we want to regulate. So then the response would be, we'll just regulate all the things, maybe do it by some capability threshold, or maybe just by a sort of general rule that if you're building an AI system you have to satisfy these obligations. On the other hand, though, that runs into the compliance cost problem. And so I think this is a nice segue into what I take to be the core insight of your paper, and I'll start with Kevin here, which is: maybe we can solve this problem, maybe there are some sort of efficiencies to be had through this idea of automated compliance. So Kevin, what is automated compliance? Yeah, automated compliance is exactly what it sounds like, thankfully, it's pretty on the nose here, which is to say taking compliance

tasks and delegating them, essentially, more or less, to AI systems. And this is not new by way of trying to find efficiencies with respect to complying with complicated sets of requirements or new expectations from the state or the federal government. If you go talk to any business, they'll tell

you about how they're always trying to streamline how they can comply with various expectations and

create new workflows, and so on and so forth. And this is really just saying, hey, we have these new tools that are really good at a couple of things: they can aggregate a lot of data, they can parse through that data, and they can share that data. And so when we think about some of the AI regulations that we're seeing pop up around the country, we've got SB 53 in California, the RAISE Act in New York, there's, I'll say, an SB 53 sister or sibling that's been proposed in Utah, I suspect we'll see

similar kinds of transparency requirements. Well, what are we really asking companies to do with respect to those efforts? Well, it's to compile transparency reports about how an AI system is performing and then share that information with a regulator. Well, if we can have AI do that, and

Cullen and I think AI will get to that point of being able to do just that, well, suddenly you arrive at that somewhat trite although accurate statement, Alan, of, well, why not just regulate everyone? Well, if it's costless or near costless, then yes, why not, right? Then we're seeing that the disproportionate burden that currently exists under a lot of compliance regimes would essentially disappear. But I'll also flag that there are some other key things that we expect AI will be able to do, if not now then in the near future: performing, for example, automated evals on AI systems, monitoring

safety and security incidents, for example, which is another thing that a lot of state legislators are looking at, and then finally providing incident disclosures to regulators and consumers. And so there's a range of really important, kind of essential regulatory tasks that AI may be able to handle in the near future, and our argument under automated compliance is that AI can lower those costs and make it far more efficient for all sizes of companies.
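As a purely illustrative sketch, not from the paper, an automated transparency-reporting workflow of the kind Kevin describes might look something like this; the field names and the submission step are invented for illustration:

```python
import json
from datetime import date

# Hypothetical sketch of the transparency-reporting task described above:
# aggregate internal records, parse them into required fields, and produce
# a report a regulator could ingest. All field names are invented.

def compile_transparency_report(eval_results: list[dict],
                                incidents: list[dict]) -> dict:
    """Aggregate eval results and safety incidents into one report."""
    return {
        "report_date": date.today().isoformat(),
        "models_evaluated": sorted({r["model"] for r in eval_results}),
        "eval_summaries": [
            {"model": r["model"], "benchmark": r["benchmark"], "score": r["score"]}
            for r in eval_results
        ],
        "incident_count": len(incidents),
        "incidents": incidents,  # incident disclosures, per the discussion
    }

def submit_report(report: dict) -> None:
    """Stand-in for transmission to a regulator; here we just serialize."""
    print(json.dumps(report, indent=2))

submit_report(compile_transparency_report(
    eval_results=[{"model": "demo-1", "benchmark": "safety-suite", "score": 0.93}],
    incidents=[],
))
```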

Yeah, and completely agreed. Maybe just two things I'd add to that. First, I would direct

people to a great article by Paul Ohm, called something like "Toward Compliance Zero," that came out a few months before ours, where he makes a lot of similar points and elaborates them very well. And then maybe the other framing that I think people might want to bring to this conversation is that, you know, most new technologies kind of expand the production possibility frontier, right?

they make new things possible and so you know that's what makes a lot of us excited about

AI technology, and maybe sometimes also apprehensive. But this is really just pointing out that one logical consequence of that for AI technology is that it's going to make new forms of compliance automation possible that wouldn't have been possible before. Cullen, I think it'd be helpful to get

a little more specific as to what sorts of things are automatable and what sorts of things are not

automatable. Compliance is a very general term, it encompasses a lot of behaviors. And so just give a sense, when you and Kevin talk about automated compliance, what sorts of tasks specifically are you all anticipating? And maybe more importantly, what is not automatable, and is it not automatable yet, or is it sort of in principle not really automatable? Yeah, great question, and I think this task-based framing that you introduced is really the way, at least, that I think about it. So Kevin

mentioned a few types of examples of things that we could imagine AI safety regulations requiring people to do. And a lot of these seem like things that, in principle, AI could either do today or do with, you know, a little bit of elbow grease put into working out the workflows and plumbing to make it work. So things like compiling information about how an AI system was trained, right, transparency-type obligations. Maybe intervening in the training process: you know, there's different ideas for how you can intervene in the training process to make AI systems safer or behave in certain ways, right? And so that's another type of thing where, you know, AI systems are quite good at coding, the AI labs are already using their AI systems to help them build the next generation of AI models, so if you require the AI system to incorporate some regulatory requirements

into that, maybe it's not too much extra work. But there definitely are things that you could imagine AI safety regulations requiring that would seem a lot harder to automate. So just one example: something that's often considered a kind of best practice in AI safety is human red teaming, where humans try to cause the AI systems to behave in undesired ways. Kind of by definition, that has humans involved. There's definitely a lot of interest in AI-driven red teaming, or AI-aided red teaming, and so, you know, we will see whether that is ever competitive with human red teaming. But you might want there to be a requirement that humans red team the system; at least if that were a requirement, it would obviously be hard to automate, though maybe, you know, with AI systems they could

do it quicker, who knows. And then maybe another thing you might consider is some sort of clock-time requirement, right? So one idea that people have talked about is something like an exclusivity period, where, you know, a company kind of has to sit on an AI model and maybe can only offer it through an API or through a chatbot or something, but can't release the weights publicly for maybe six months, while people kind of see how it behaves and assess whether it would be safe to

release the weights of this model broadly. Kind of regardless of whether you think that's a good idea, obviously you can't automate away six months, although, again, maybe you can do more in those six months, and maybe that means you would get the same safety benefit in three months post-AI that you would get pre-AI. So nevertheless, if you think about how different requirements might be specified, some of them will be hard to automate. Yeah, which kind of gets to the point of our paper, which is that you should think about which types of safety requirements will be more automatable or less, and maybe there's some reason to prefer ones that will be more automatable. How do you all think of what we might call the Goodhart's law objection to your account? So Goodhart's law is the famous dictum that once a measure becomes the target, it ceases to be a useful measure. And we see this sort of throughout society: you know, we all focus on such and such statistic about education performance or healthcare performance, and then

the regulated industries start optimizing for that, and that ends up distorting the very goal that they were trying to accomplish. And one can imagine a similar concern with automated compliance, where, okay, you know, once you've made compliance kind of machine-readable in a sense, then you could imagine the incentive of companies to try to game the system, train the models to sort of satisfy, you know, in legal terms you might think of this as a kind of letter-of-the-law versus spirit-of-the-law concern. But you can imagine a world where you have this automated compliance framework, but in the end it's not actually solving the reason that the legislatures or the regulators put out whatever compliance requirement they did, whether it's safety or anything else. And I'm curious how you all think about that potential

concern. I'm happy to take a first stab at this one. I think for me the difference here is that

Goodhart's law has some sort of reward mechanism that values changing your operations to achieve that result, right? So the assumption is that by virtue of changing your operations you'll send some signal to the world, to your stakeholders, to your consumers, and so on and so forth, and be recognized for achieving that metric. Whereas what we're proposing is basically just continuing the status quo: whatever you are doing, the background tasks that you were ignoring to begin with, or perhaps not paying an incredible amount of attention to, or not gathering in the way you previously imagined, now AI is just doing that. But it's not saying that we're necessarily going to reward you for this outcome, or give you some relief from some other regulatory paradigm or something like that. Basically you get to carry on as is, but just have this tool do your compliance tasks for you. And so I don't have the same concern, that suddenly an AI startup that faces some

regulation for which automated compliance is possible, they just don't really have an incentive, in my opinion, for changing their behavior. But I'm always intrigued by what my co-author has to say.

No, I think I generally agree with that. I think, you know, Goodhart-like problems are endemic to the process of setting measures and then people optimizing against them. And, you know, one way people think about AI systems is that they're optimizers, and so they might find ways to optimize against whatever measures, and do so more aggressively than humans might be able to. So I think this will be a general issue that the law and a lot of other sectors will have to grapple with in the future. You know, I guess the way I would think about it as it relates to this paper

is that, you know, it remains the duty and burden of legislatures and regulators to think about what types of behaviors they want to inculcate and find the best ways to do them, and then they'll specify them. And, you know, the best that we can do is help regulated parties achieve those specifications kind of as efficiently as possible. And I guess, yeah, I could see ways in which introducing AI into that process introduces more optimization, but I could also see ways in which it also helps, for

example, regulators think through more clearly their drafting process and think about ways in which the measures that they're picking might be Goodhartable, for example. Let me pose another potential objection to the project, which is: if the problem that you're trying to solve for is, let's say, Silicon Valley's resistance to regulation, and your solution is, well, it's actually going to be a lot cheaper than you think because of automated compliance, that might only get at one part of the

reason why the technology industry might oppose regulation, right? So it may very well be that, you know, especially for the big companies, where the compliance costs, while not trivial, are, you know, fundamentally rounding errors, their concern is actually not cost at all, it's the actual substance of the regulation, right? They may say, you could drive the quote unquote costs of complying with the regulation to zero in the sense of lowering the administrative costs, but automated compliance does not lower the non-administrative costs of regulation. So I'm just curious how you all think of that, or whether that's just a different problem, and, you know, we're solving a problem over here, there's still a problem over there, we might as well solve the problem over here even if it's not the entirety of the problem. Yeah, I can jump in on that. I mean, I think that's great, like,

part of what's exciting about this is it enables us to focus on the first-order question instead of the second-order question: do we think that these regulations are worth the kind of first-order costs and benefits? Is it worth, you know, preventing AI companies from doing the profit-maximizing thing that we assume that they will do by default, to achieve some additional degree of public safety or whatever other type of good we're trying to achieve? And people like Kevin and I will disagree about that, and those disagreements, you know, are healthy and part of normal democratic debate. And I think it's actually just more productive

if AI technology enables us to focus on those disagreements eventually. And I'll jump in there to say that one thing that particularly excites me about this idea is the ease with which we can now switch to a different regulatory paradigm once automated compliance is possible. And so one of my greatest concerns is about premature regulation, and we outline the difference between a sort of pro-regulatory and deregulatory spectrum, and Cullen and I occasionally end up on opposite sides of that spectrum. But I think everyone agrees we want evidence-driven policy, and we really want to avoid path dependence being created by laws that are well intentioned but send AI development down a certain direction when in reality, you know, we want it to go a different route that perhaps is even safer and even more innovation-enabling. And so if we have automated compliance be the norm, and it doesn't require you to effectively change your operations such that you're fulfilling some expectation of the regulators, well, now both regulators and companies can be more innovative and more evidence-driven, and that is super exciting. Okay, so that's great. Let me kind of repeat back to you what I heard, and you can tell me if it's right,

which is, and I always find the sort of production possibility frontier diagrams from, you know, first-year microeconomics really useful, sorry, I'm now waving my finger in the air because podcast is a very visual medium, as everyone knows, but I take it what you're arguing is that, look, there are real trade-offs in regulation, safety versus innovation as kind of the classic example, and your paper is not responding to that as a general matter. What you're saying is, yes, but there's a whole other set of trade-offs that are actually dissolvable, which is, like, you know,

for any given amount of safety we can have the same amount of innovation we can have more innovation or vice versa as long as we get rid of this like compliance sludge and we should all want to get rid of compliance sludge because then we can start fighting about the thing that actually

matters. Is that a fair description of the project? Yeah, I would say so. I mean, yeah, I think we say as much, right: if you hold the level of safety that you want constant, you get it for cheaper; if you hold the amount of regulatory costs that you're willing to eat as a society constant, then you get more safety. Either way of framing it works, and that's the beauty of positive-sum innovation. So let's talk about another part of your paper, and this to me was the most interesting idea, and this is your proposal for what you all call

automatability triggers. So Cullen, what are these triggers, and again, what problem are they sort of responding to? Yeah, so this really goes back to kind of the central tension that often motivates some of these debates, where let's say that Kevin and I agree that we need regulation at some point, and Kevin's refrain is, ah, but if we regulate now, you know, you might have all these bad things, you might go down a kind of path-dependent route of technological development that's hard to reverse or costly to reverse, you could kind of lock in incumbents, etc. And I retort, well, I'm quite worried that if we don't regulate now, there will kind of never be another opportunity to regulate, or by the time there's another opportunity to regulate it will be too late, we'll have already had some sort of catastrophe that we really would have preferred to prevent. But, you know, Kevin and I share an underlying worldview, which is something like: AI is going to unlock a lot of very, very beneficial capabilities in the future, and among those, it really looks to us, is the ability to automate a

lot of core compliance tasks. And I think, you know, the way that I kind of initially came up with some of the ideas behind this is, I think this suggests a very natural trade, which is: we agree to regulate, but not now. We agree to regulate when that AI capability improvement that we both expect drives automation costs below some level. That's the fundamental idea of what an automatability trigger is. It says this regulation will not be effective now; it'll become effective only when the costs to implement compliance with it are lower than they

are today, because presumably AI technology is better at doing the compliance tasks. And it's worth flagging, just to add something quickly, that this is not a novel concept with respect to conditioning the application of a law on a certain event. These are known as sunrise clauses. A lot of folks know about sunset clauses, and don't get me started, because I can go off for another 90 minutes about the importance of sunset clauses,

but sunrise clauses are also essential, and basically condition the enforcement of a law on some trigger. That may be, okay, now an AI tool exists to allow for compliance, or it can be something like, hey, we're not going to start to implement these privacy laws or regulations until we've actually created the privacy agency and hired the requisite number of staff, and so on and so forth. There have also been states that impose sunrise clauses with respect to occupational licensing provisions. This is an interesting use case where they say, we will not allow for a new occupational license until there's a study done indicating that we actually need one, which is kind of like, no shit, I would hope that's the law, but sometimes we just need these reminders to be baked into the legislation themselves. And just to make sure I understand how this would be implemented:

someone would have to decide when the... well, I mean, two things would have to happen. One, someone would have to set the kind of trade-off between how much automation do you want to make sure there is before the law goes into effect, and I imagine that would be something for the legislature to decide. And then there's someone, I assume in the executive branch, who has to say, okay, I've done a study, I believe that the time is now in terms of satisfying the legislation. Do you have in mind who would do that? My instinct would be like the Secretary of Commerce, because of NIST, and that would be the National Institute of Standards and Technology, or the AI Safety Institute or whatever they're calling it these days. I'm like, who actually does this and how? I'm kind of curious in the sort of ad law minutiae of this a little bit. Yeah, I mean, you know,

I think as a first order matter I think there's a lot of different ways you could imagine this

being implemented, and since it is a new type of mechanism, you know, I wouldn't say that congresspeople tomorrow should rush out and try to copy and paste the language from our paper into their hot new AI regulation bill. There still needs to be a lot of work done to think through how this would be implemented. That said, yeah, I think the basic schema that you're pointing out sounds about right, where Congress would say, you know, we want this law to come into effect

only when we think that compliance costs have dropped to X dollars per relevant task. And so you might think the relevant task is evaluating a single AI model, just to take a very simple example of what an AI safety regulation might do. We think that right now, if you include kind of overhead, maybe it costs firms like a million dollars to run a single model evaluation, and that's too much, but if it only costs ten thousand dollars then we think that that's great, just to make up numbers, right? And so, yeah, Congress would say that, and then maybe the Secretary of Commerce, who seems like the best-placed person in the federal system since we don't have the Department of AI yet, says, we think the day has come, we think that the cost is ten thousand dollars, here is why. And then, you know, the enforcer starts bringing enforcement actions; maybe then litigants could challenge that determination in court. That is itself a, you know, statutory and administrative procedure question that I am not necessarily an expert on. But, yeah, that's just one example of how you might implement this.
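A minimal sketch of the trigger logic Cullen describes, assuming Congress picks a per-task cost threshold and an agency publishes periodic cost estimates; the numbers and names are hypothetical editor's inventions, not from the paper:

```python
from dataclasses import dataclass

# Hypothetical sketch of an automatability trigger (a sunrise clause):
# the regulation activates only once the estimated cost of the relevant
# compliance task falls below a statutory threshold. Numbers are made up.

@dataclass
class AutomatabilityTrigger:
    task: str              # e.g., "evaluate a single AI model"
    threshold_usd: float   # cost level Congress deems acceptable

    def in_effect(self, estimated_cost_usd: float) -> bool:
        """The agency's periodic cost finding flips the regulation on."""
        return estimated_cost_usd <= self.threshold_usd

trigger = AutomatabilityTrigger(task="single model evaluation",
                                threshold_usd=10_000)
print(trigger.in_effect(1_000_000))  # today: False, regulation dormant
print(trigger.in_effect(9_500))      # post-automation: True, now effective
```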

And something that we talked about in the initial formation of this idea was the fact that this could lead to a really interesting market on the private side, of saying, hey, I want to develop the tool that then gets adopted or offered as one of the options for this AI compliance. And we don't necessarily have that right now. Obviously there are a number of startups that are trying to think through how they can facilitate easing your compliance burden with various AI regulations and other regulations, but actually developing this sort of AI compliance tool is a really interesting market that could be created. And I also think it's worth flagging that this concept could have a lot of positive spillover benefits in other areas of regulation where we're also concerned about having a sort of disproportionate impact on smaller businesses. Let me actually stay with this question of who would develop these tools, because I want to sort of prod at this idea, which I find really interesting. But one objection you might have is, well, why would Silicon Valley have an incentive

to develop these tools, if it's not until the tools are developed that they have to actually do the compliance, that the regulation comes into effect? So how do you incentivize, and of course Silicon Valley is a they, it's not an it, but how do you incentivize Silicon Valley to build these tools when in some sense it's against their interests to do so? Yeah, I think it's a great question,

and I think, number one, there's a coordination problem or something, right? So if, you know, firms see that there's going to be a lot of business to be made by offering this compliance tool, it would be illegal for them to coordinate not to make it, under the antitrust laws, probably, so they couldn't get together and do that. But then also it's probably the type of thing that is built, you know, by someone building on top of a foundation model, is my guess, like the most likely way that this would be implemented, and it's just hard for firms to kind of prevent them from doing that. You could imagine having additional restrictions that make it hard for firms to stop people from building compliance tools on top of them; I don't know if we want that. But, yeah, I guess I'm pretty optimistic that, you know, compliance-automating AI will find a way. You know, at the very least there's open-source models that are not too far behind the frontier, and this would be, you know, even harder for anyone to hold back intentionally. Yeah, and I think that

so long as the government is saying, we're going to pay for this, whether it's the federal government or 50 state governments or governments around the world that want to emulate this automated compliance mechanism, there will be a market for saying, hey, yeah, we'll procure and then make available this AI compliance tool or set of tools, and we'll give you this contract, and so on and so forth. And so someone will want to make that money. Let me move to some potential objections, and let me ask this one of you, Kevin. You know, one thing I can imagine a safety-focused critic saying to this idea is, well, automatability triggers just sound like a way of delaying regulation, you know, if not indefinitely then for quite some time. I mean, the way that you all present this in your paper is, this is a way of calibrating lawmakers' preferences around sort of safety versus innovation. But a different way of saying it is,

well, just the very idea of delaying this is kind of putting a thumb on the scale for deregulation, because of course in the vast majority of other domains we don't actually do this. So you give some examples of sunrise provisions, which I think are very interesting to think about, but, you know, the counterexample that came to my mind, and I've not done a sort of deep study into this, but I think what I'm saying is reasonably accurate, which is when, you know, the

EPA, or let's say in this case California, which has really taken the lead on this, tells car companies, you know, you must drive emissions down, you know, by 10%, 20%, whatever the case is,

they actually have not always done that knowing that such technology existed. Often it was:

we're going to make you do this, we'll set the effective date of this sometime in the future to allow you to prepare, but it's kind of on you to figure out how to do this. So why isn't that the better answer? You know, if you're worried about the companies not being able to do this now, tell them, you have two or three years to do this, this is going to go into effect, instead of saying it will only go into effect once someone else has figured out how to do it cheaply. If you

know it's going to go into effect, so if you're Meta, Google, OpenAI, Anthropic, you know, xAI, whatever, if you want to save money on the compliance, which presumably you do, you figure this out. So it's a really valid critique and a good one. I think that the assumption that Cullen and I are making, and that folks like Paul Ohm have made, and that other folks in the space have made, is that AI seems to be closer to facilitating a lot of these kinds of compliance tasks than perhaps in another domain or a different sort of automated compliance game. So I think that day is sooner rather than later. So that's one response.

Another response is: yes, this is certainly putting a thumb on the scale with respect to assuming some degree of delay. Now, that's a reflection of the fact that every single policy we enact always has costs and benefits, and this is sort of a forcing mechanism that says, are you really weighing those as seriously and as thoroughly as you can? And one aspect of that is the sort of loss in innovation, loss in safety, loss in just greater and novel technological development that may come as a result of that sort of premature regulation. Now, we didn't consider this in the paper, but I'd be curious, perhaps we could add something on at some point, exploring the notion of, okay, if these tools aren't

available within three years, or within 18 months, or within however long, then it will go into effect anyway, right? And that way you're kind of feeding two birds with one scone, hashtag you're welcome, PETA. That is a different approach that we could certainly rely on, that kind of tries to get both of those mechanisms going that you were mentioning, Alan: both at one point putting folks on notice that they may have to comply with this, while also giving those innovators who want to develop the

automated tool an incentive to get up and get going on whatever that automated compliance tool may look like. Yeah, and maybe to add a few things to that, as the person who tends to worry a bit more about us not regulating in time: first, this dynamic works both ways, right? This is a way of credibly, and bindingly, signaling that a regulation will come into effect if this milestone is met, right? It's definitely, in some sense, if you don't do the disjunctive thing that Kevin just said, more flexible than a, you know, date-certain sunrise provision, but it's more certain than, ah, well, we'll revisit it

if there is a problem that requires us to legislate, which I think frankly is the default outcome. The default outcome in legislation is nothing happens, right? And so I think this is a way of trying to strike a deal that, in principle, the parties can agree to. And then, yeah, it'd also create an incentive to order the technological innovations in a way that I think reflects what people should want, right? We should want the technology that helps us solve these thorny trade-offs before the applications of the technology that create hard problems, right? And so this is saying to all those people: we would prefer to have the compliance-automating technology sooner, thank you, and if you do that you will be rewarded by the market, because there will be a captive market that is basically, you know, strongly incentivized to buy it. But there are situations

in which, you know, you might worry that this is not ideal, right? So, like, this makes the most sense for problems where you think you don't have catastrophes that arise before you have the compliance-automating AI that could have prevented those catastrophes, and that may or may not be the case. So, you know, legislators would have to think carefully, empirically and strategically, about whether this is the right solution for the problem that they're facing, and it might not be; you know, other things will make sense for other problems. So I posed the sort of critique from the safety side to Kevin; let me propose the opposite

side of the critique to Cullen, which is: this all seems very complicated. Why are we trying to regulate stuff in the future based on technology that we don't really understand and don't know will exist? This is not how we do stuff generally. The way that legislatures usually work is that they identify a problem, they make sure they can fix it, and then they implement it. Why are we singling out AI for this sort of additional regulation? You know,

if the regulation is cost-benefit justified today, fine, we can have that fight. But if it's not cost-benefit justified today, which is a little bit what I think the idea of these automatability triggers in the future kind of implies, otherwise why would you push it out to the future, what are we doing? There are so many other things that Congress could be doing today. It seems weird to, you know, both have them guess, and also it seems weird, one might argue, to have them spend their precious current political capital on stuff that by definition is not going to

happen for a while and may never happen? Yeah, again, I think there's a lot of validity to

that critique, especially as applied to different AI problems. You know, different problems in AI policy have different dynamics and require different solutions, and I think, you know, one of the best parts of Scaling Laws is bringing more nuance to all the various AI policy problems that exist. And so, you know, there are problems that I spend a lot of my time worrying about where society would probably have a very low, I think, risk tolerance, right? So I think one example of this might be AI systems that would aid in the engineering of novel pathogens that, yeah, we may not have immunity to and may be quite costly to respond to. You know, COVID cost trillions and trillions of dollars, right? And so to be willing to prevent the next COVID, we should be

willing to spend, you know, a lot of money, right? And so I guess the way I think about this is that, number one, the use of an automatability trigger sends a useful signal: we would, you know, prefer there to be lower costs to implement this type of regulation; we are not willing to implement it at the current cost-benefit analysis, but we would be at a different one. And number two, we're going to kind of make that commitment credible in a way that delaying until the problem has happened is not; that is not a credible kind of signal for market actors to be working on in the meantime, maybe, you know, sometimes it is, sometimes it isn't. So it's a way for legislators to really put out a credible signal that there will be market incentives to regulate, or sorry, to provide a certain type of AI service, in the future. Before we close, I want to talk a little bit about what I thought was a particularly interesting scenario that you all have. It's a little speculative, as you all describe, but it's a very interesting potential preview of the future, which is,

quote, automated compliance meets automated governance. So I could try to summarize what you're all predicting, but I'd rather just hear it from you all. What is this potential Jetsons-like world where essentially robots talk to robots to figure out what the law says? Cullen, let me start with you. Yeah, great. You know, I think if you just imagine a kind of human-staffed regulator and then the automated-compliance regulated party, you're kind of playing half-court tennis, right? So I think this probably works the most efficiently when the regulated party's compliance-automating AI can talk, at the speed of AI, to some sort of other AI systems in the regulator's offices that can help it understand, like, hey, can I get additional guidance on this, for example. And, you know, I don't know how long that would take in

a typical regulatory process, my guess is on the order of months, but maybe it can provide it in a matter of seconds, right? And that's just one benefit that kind of automated governance could bring to this process, kind of the speed of AI, and there's lots of others too. So, you know, why don't firms just share a bunch of information with regulators and, you know, just try to get better signal from them about what's okay and what's not? One plausible answer

is that they are afraid that the regulator is going to use that selectively against them, or hold it over their head, or something. I mean, part of the reason that is worrying, right, is that, because regulators are staffed by humans, humans can't just forget things that they've learned about regulated parties. But maybe you could design AI systems that, unlike humans, can forget anything. And so, you know, maybe one thing that

regulator-side AIs could do is have a kind of quasi-privileged thing, where they say, we want to get regulatory guidance on this type of thing, we're going to provide you a bunch of super sensitive documents that we wouldn't share with anyone normally, but because we have strong, you know, trust in the regulator-side AI setup that you have, we know that you're not going to use them for other enforcement actions, you're just going to give us, you know, your regulatory approval, and then we're good to go. And, like, you know, we can have a kind of secure record of that that we keep, so that when you ask us later, you know, hey, why did you do that, we can say, well, we showed this to your regulator AI and it said it was okay, and then, you know, everything's good. So I think ideas like this about the potential synergies between these two things are going to be a really important dynamic in the 21st century to consider.
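A purely hypothetical protocol stub for the quasi-privileged exchange Cullen sketches; every name and behavior here is invented by the editor for illustration, not a real system:

```python
import hashlib
import json
import time

# Invented protocol stub: the firm's compliance AI bundles sensitive
# documents with a guidance question; the regulator-side AI returns
# guidance plus a tamper-evident receipt (a hash), retaining nothing
# usable for separate enforcement actions.

def request_guidance(documents: list[str], question: str) -> dict:
    """Firm side: bundle sensitive material with a guidance question."""
    return {"question": question, "documents": documents}

def regulator_ai(bundle: dict) -> dict:
    """Regulator side: answer, keep only a hash receipt, not the docs."""
    receipt = hashlib.sha256(
        json.dumps(bundle, sort_keys=True).encode()
    ).hexdigest()
    return {"guidance": "approved", "receipt": receipt, "issued": time.time()}

bundle = request_guidance(["internal eval results (sensitive)"],
                          "Is this deployment compliant?")
print(regulator_ai(bundle))  # the firm keeps the receipt as its secure record
```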

And I'll just add what I think could be a concrete example of this. So I'm thinking a lot about workforce and job displacement issues right now, and there's a lot of conversation about how we can update the WARN Act, and for folks who aren't steeped in 1980s policy, this was the idea that when you lay off 300 folks at your factory in Buffalo, New York,

you have to tell not the Department of Labor because that would make too much sense but the

local officials in your state that you're about to lay off 300 people. Well, now we have a lot of concerns. For example, we're talking on January 28th, 2026: Amazon announced it's going to lay off 16,000 people, and some people are attributing that to AI, and so there's a lot of conversation about how we can manage the labor market in a more productive fashion. Now, no company wants to send to the Department of Labor, hey, here's all of our information, three weeks in advance, we're about to lay off these people,

please don't do anything mean or give us bad press or anything like that. What they may be willing to do is, let's say on a quarterly or monthly basis, submit data via automated compliance to the Department of Labor, which can then aggregate and then share out really valuable insights that could trigger congressional hearings, or a response by the Department of Labor, or new job retraining programs and things like that. That's a whole new workflow and kind of regulatory

approach that we just don't have, that automated compliance, and by extension automated governance, could realize, and that to me is really exciting.
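A hypothetical sketch of the kind of periodic workforce-reporting flow Kevin imagines; the schema and the aggregation step are invented for illustration:

```python
from collections import defaultdict

# Hypothetical sketch of the reporting flow Kevin describes: firms submit
# periodic workforce data via automated compliance, and the regulator
# aggregates it into labor-market signals. All fields are invented.

def aggregate_layoff_reports(reports: list[dict]) -> dict[str, int]:
    """Regulator side: total planned layoffs per region."""
    totals: dict[str, int] = defaultdict(int)
    for r in reports:
        totals[r["region"]] += r["planned_layoffs"]
    return dict(totals)

submissions = [
    {"firm": "A", "region": "Buffalo, NY", "planned_layoffs": 300},
    {"firm": "B", "region": "Buffalo, NY", "planned_layoffs": 120},
    {"firm": "C", "region": "Austin, TX", "planned_layoffs": 45},
]
print(aggregate_layoff_reports(submissions))
# {'Buffalo, NY': 420, 'Austin, TX': 45}: the kind of aggregate signal that
# could inform hearings or retraining programs.
```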

So I want to end by asking you to reflect a little bit about sort of your journey in writing this paper. As, I think, Kevin, you pointed out earlier in the conversation, you two are on, I don't want to say opposite sides of the pro-regulatory versus deregulatory spectrum, but there's some sort of daylight obviously between you two, which I think is actually always a really fun way to sort of collaborate. And I'm curious, having thought through this issue, and the many conversations I'm sure you two had in writing this paper, has it changed your views on either the optimal timing or content of AI regulation? So let me ask Kevin your version of this question, and then I'll close out asking Cullen his version. Kevin, has it made you more sympathetic to some forms of earlier or more intensive regulation on AI, let's say? Yeah, I think I'm very sympathetic to the argument that

there are certain things that we may not be able to measure, and this is where Cullen and I, I think, had a meaningful discourse: automated compliance can only go so far. And so by virtue of writing

this paper and having that experience I think it did shine a light on what are the areas of AI

governance where we're still going to have to have a sort of human driven conversation about what risks and what benefits are we willing to tolerate because quantifying all of that and using AI

to derive all of the requisite inputs and data may not always be possible in the near term, given the sort of risks that we often talk about from a more kind of long-term perspective. And so to me it was just a really useful exercise to try to bifurcate: what's the sort of information where automated compliance could be really useful, and what are the sorts of tasks that will not allow for that sort of compliance, and then, with respect to those tasks, who has the institutional capacity to handle those regulatory questions? So to me it just added more nuance, to

use Cullen's word, and more nuance in my opinion is always better and a heck of a lot more fun. So Cullen, let me ask you sort of your version of the same question to close out: has it made you more sympathetic to the concerns from the quote unquote pro-innovation side around compliance costs? Yeah, I mean, I think the pro-innovation side has done a really good job of hammering, or injecting, a few different very important memes into this discourse, and working on this paper was great to grapple with them. And among these, one thing that I hope comes through clear is that we're both big believers in the idea that technology is generally positive-sum, and, you know, a lot of discourse tends to lose sight of that fact. And this is kind of, in some way, applying this general positive-sum dynamic to a domain where there's often assumed to be a zero-sum kind of trade-off, right? So I think grappling with that has been

fine. I think that grappling with these timing problems is also kind of important.

You know, when I was at OpenAI, one thing that OpenAI talks about a lot is the benefits of iterative deployment, by which they mean that the process of society seeing AI progress and learning how to deal with it incrementally is beneficial to the kind of long-term challenge that humanity has of figuring out how to deal with AI systems. You know, people kind of agree or disagree with the specific ways in which OpenAI has been going about that kind of iterative-deployment

philosophy but I think that the core insight that learning from the technology and leveraging

some of its beneficial uses as it advances has a lot of benefits that I think AI safety and policy discourse, you know, four years ago or something might not have appreciated. And I do think this general bet of trying to sequence AI innovation in the way that, you know, gets you the most

socially beneficial applications first and think about ways to do that instead of just

framing it as a progress versus stasis kind of problem I think is like maybe a more productive

framing and thinking about you know ways to do that I think is a fruitful policy endeavor that

hopefully this paper is just the first of many. Because I think everyone agrees that different forms of progress, you know, have different social values, right? Progress in more addictive drugs is probably not a good thing; progress in providing legal services to people, medical innovations, etc., is better. And so, you know, when we can kind of selectively pick beneficial forms of innovation, all else equal, we should prefer to do that, and, yeah, this is just one way to do that. Well, I think

it's a good place to leave it. It's a great paper; we'll link to the original paper that LawAI is hosting, and then to a shorter Lawfare post that should be up by the time this is released. But thank you, Cullen and Kevin, for coming on the show and talking about it.

Thanks, Alan. Always a hoot. Thanks.

Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad-free version of this and other Lawfare podcasts by becoming a material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky.

This podcast was edited by Noam Osband of Goat Rodeo. Our music is from Alibi. As always, thanks for listening.
