(upbeat music)
- Welcome to Building AI Boston.
Today, our esteemed guest is Cansu Canca. She's the Director of Responsible AI Practice and an Associate Professor of Philosophy at Northeastern University. Today, we're gonna be talking about all things
ethical AI and why academia matters more than ever for this founder and director of the AI Ethics Lab. Welcome to the show, Cansu. - Thank you so much, Anna. - Thank you.
Oh my goodness, even your background is mind-blowing. I mean, I can't wait to talk about this.
Cara, I think we can finally really break down ethics
with an expert like Cansu, yeah. - Yeah, that's right. And you know, I imagine we're not gonna get through it
all, I'll say, either, because it is a complicated subject, right?
(laughing) - Well, thank you for being with us. And you know, I'm honored, because you have been recognized among the 30 Influential Women Advancing AI in Boston and the 100 Brilliant Women in AI Ethics.
Is it true that you even work with the UN and many others, my friend? - Yes, I've worked with the UN, Interpol, the World Economic Forum, the World Health Organization. I mean, basically, I try to help whoever I can
in terms of putting ethics into practice on a bigger and bigger platform. - That's a big, big platform. I'm gonna say this: before your work in technology, you were on the full-time faculty
at the University of Hong Kong Medical School and an ethics researcher at Harvard Law School. Cara, I'm kind of star-struck, are you? (laughing) - Oh, yeah, and you know, it's actually pretty cool,
and when you think about health and then law, those are two really big areas where ethics
is pretty critical to the functioning of those fields.
- Yes. - Obviously.
- I think, for your audience who may wonder, well, why does she do AI ethics now, after being in medical school and in law? It's exactly that intersection. I am an ethicist.
So I am interested in questions that are high stakes and that ask, what is the right thing to do? And this critical core question is there whenever you engage with ethics, and once you start talking about AI, it becomes yet another critical question.
And the same training that allows me to do ethics in public health allows me to do ethics in AI, of course with additional training in that domain. - This is great to unpack, because I think the big question,
or the big concern on everyone's mind, is: okay, I'm a graduate student, and is my job really truly being taken over by AI? We're not gonna get into the fear mongering, but I just wanna ask you: how has your career in academia
informed what you believe, at your core, about the subject of ethics?
Why do you stay where you are, at Northeastern?
- Yeah, I think the main thing is, I'm gonna, again, emphasize the philosophy part, right? Because I think a lot of these questions that we are asking are questions that have "should" in them:
what should AI systems do, what should society do as it adopts AI systems, what should policymakers do? These "should" questions are really philosophical questions. And I don't mean this in the sense of, let's sit down and think about it in the abstract.
I mean it as: we use the theoretical and structural thinking that philosophy brings in, and the two millennia of work that philosophy brings in, in order to figure out our path forward.
So the way to think about this, it's almost like the math behind it, right? The philosophy behind it is very structured. So being within academia allows you to have access to experts
who are thinking about these questions in an impartial manner, objectively: not for a company's sake, not in order to make profit for a particular product or a particular approach,
but really looking at it from a computer science perspective, from a design perspective, from a philosophy perspective, or a sociology or politics perspective. That is really
the benefit of being embedded in academia. - Now, we understand that this integrated approach is part of your initiatives as the director of the AI Ethics Lab. Is that true? - Yeah, from the outset.
So I started the AI Ethics Lab. The AI Ethics Lab is my own initiative, let's say. I started it in 2017, and then Northeastern noticed it and, well, requested
a similar function as they were building
the Institute for Experiential AI. So the idea of the AI Ethics Lab was coming from my work in ethics and health, actually,
because the question always has been,
how can we not water down the ethical questions, really be serious about them, but also not take years and years to think about them? Because the questions have to be resolved right now, right here, when a developer is building an AI system
or when the manager or employer is about to deploy an AI system. So you don't have this leisurely time frame;
you have to sit down and make decisions.
And this is something that anyone who has worked in public health, for example, would recognize: yes, we would love to have infinite time to come up with the best models possible. But if there's an epidemic,
if there's a pandemic, you gotta act with your best intentions, with your best knowledge, and with the best state of the art that you can possibly use. So it's the same mentality that created the AI Ethics Lab back in 2017, which is still going on.
And the same mentality is behind the Responsible AI Practice that I created at Northeastern University. And Northeastern was the perfect setting for this, because of Northeastern's whole mentality of having rigorous academic, theoretical thinking
directly connected to industry and practitioners.
So that intersection is super critical
for the work that I do, right? - And when you think about public health, we had a great conversation recently with another guest about the analogies. And, you know, you see these large, systemic ways
of thinking, right, about big change, rather than just the individual. It's like the idea of precision medicine versus public health, right? There's kind of this individual versus the group.
And there's a theory in that, you know,
well, and I think our audience might be interested
in hearing about, which is the concept of harm reduction, right? And harm reduction in public health is, you know, some intervention is better than none, in some cases. So, I don't know, when you think about ethical AI, and first I'll even back up
and ask you: what does "ethical AI" actually mean? We say that. And then, how do you think about applying that at a large, system-wide, global level? - So that's not an easy question.
[LAUGHTER] So let me start by admitting one of our issues, which is that we cannot settle on the perfect naming of this thing that we do. "Ethical AI" is not quite right, because we are not
trying to create human-like AI that acts as a moral agent. But, you know, we don't have a better shortcut. And "responsible AI" is also not AI having responsibility;
"trustworthy AI" is not AI with trust.
So there's always this conceptual problem
that we run into, but ultimately,
“what we are trying to say is the following.”
We want to make sure that we create AI systems that are beneficial for society, that are in alignment with our general societal expectations and our legal frameworks.
And what does it mean to be in alignment with ethics? Going back to philosophy, to moral philosophy,
there are really some core ideas, and those have never changed. We want to make sure that the AI systems we create
still allow humans to have autonomy, in the sense that I don't want my agency to be taken away from me. I don't want my decision making to be taken away from me. I can delegate;
that's a different thing. I can choose to let an AI system make some decisions for me. But I don't want to be nudged; I don't want to be manipulated all the time. I want to be able to make decisions for my own life,
for my own body, for myself. You know, respect human autonomy. - Sure, and what you're really talking about is governance, which is a nice umbrella term for... - Yeah, exactly.
So this is the autonomy part. The other one is what you just mentioned, Cara: basically, harm reduction. We want to make sure that, as we are building,
we reduce the harm and we increase the benefits. This is the same when you are doing a public health analysis, and it's the same when you're doing an AI analysis in this sense. And the other major, major thing is fairness.
We want to create systems that are as fair as possible. And of course, the definition of fairness varies according to the question we are dealing with, according to the domain we are dealing with. But the thing is, we have the tools to think about these things.
We know that there are different definitions of fairness. We have literature; we have work on which one is the most appropriate for health care versus finance versus insurance versus education, and then we operationalize it.
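(A note for readers: the two families of fairness definitions Cansu alludes to here, equalizing outcomes across groups versus equalizing how qualified individuals are treated, can be operationalized quite directly. Below is a minimal sketch in Python; the function names and the toy loan data are invented for illustration and are not from the AI Ethics Lab's actual tooling.)

```python
# Two common fairness definitions, computed on toy data.
# "Equalizing outcomes" ~ demographic parity; "equalizing treatment of
# qualified individuals" ~ equal opportunity. All data below is invented.

def demographic_parity_gap(preds, groups):
    """Difference in positive-prediction (selection) rates between two groups."""
    rate = {}
    for g in set(groups):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(selected) / len(selected)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

def equal_opportunity_gap(preds, labels, groups):
    """Difference in true-positive rates, among truly qualified people, between two groups."""
    tpr = {}
    for g in set(groups):
        qualified = [p for p, y, grp in zip(preds, labels, groups)
                     if grp == g and y == 1]
        tpr[g] = sum(qualified) / len(qualified)
    a, b = sorted(tpr)
    return abs(tpr[a] - tpr[b])

# Toy loan decisions: preds 1 = approved; labels 1 = actually creditworthy.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 1, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(preds, groups))        # selection-rate gap: 0.5
print(equal_opportunity_gap(preds, labels, groups)) # TPR gap among qualified: 0.25
```

The point of the two metrics is exactly the trade-off discussed later in the conversation: a system can look fair under one definition and unfair under the other, which is why the choice of definition has to fit the domain.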
Basically, we don't want AI systems to come in,
strip us of our agency, manipulate us, be unfair to us, and harm us. That is what we are trying to prevent.
That's basically it: responsible AI,
ethical AI, trustworthy AI, whichever word you want to use. Nothing too mysterious there, but that's everything.
- That's a core issue, and I believe this is why we need educated people, people like yourself. I've noticed a trend with our guests: what we're really up against is the speed at which AI is doubling. And I've read articles that suggest that if you
think AI did a pretty bad job at whatever you put it to in your job last year, it's going to do it perfectly in four to six months now. I mean, is that really what you're up against when it comes to getting this conversation out there
and having people understand: hey, if we're not deciding this democratically, somebody else is, and they're thinking in a silo? Is that why you feel the pressure to raise people's understanding of what AI ethics can do,
or why you're doing what you're doing? - Absolutely. I could just say yes and shut up. (laughing) - Yes, yes, yes. - No, seriously, this is really a great way of putting it.
Because we, me and my colleagues, my team at Northeastern, we are working on AI because we find this technology fascinating. And when we say AI, by the way, we are not just talking about large language models
and generative AI. I have been working on this for the last 10 years, and some of my colleagues have been working on it for 20 years and more, right? So for us it's not really the AI that has been changing,
becoming better in some senses, not so much better in others. We are not just talking about your regular ChatGPT or whatever. - Yeah. - And exactly as you said, Anna,
the systems are evolving quite fast, which is, again, fascinating and exciting. At the same time, we need to make sure that our approaches to safeguarding these systems, designing them well, making them well, keep pace.
And oftentimes, I wanna make sure that this really is understood: when we say ethical AI, or when we say putting ethics into AI, oftentimes this just means that we are making the AI system better.
It is more accurate. It is more reliable. It doesn't unjustifiably discriminate against a group. What does it mean to unjustifiably discriminate? Well, take the classic example: a system should have picked up your resume
because you were a great fit, and it just didn't because you are a woman. Well, that is a system that is malfunctioning; it is not functioning well, right?
So I don't want the conversation to always go
towards, but what does morality mean?
But much more towards, well, what does a good technology mean?
What does a well-functioning technology mean? And a lot of the time, doing ethics, or responsible AI, or AI safety really gives you better technology. So that also means that you don't just innovate and then try to tie
a little ribbon around it and call it responsible; you have to innovate responsibly. You have to design it in such a way that the correct optimization variables are integrated, the correct data set is used,
and the appropriate model is employed in deployment. You want to make sure that the right type of AI system is used for the right type of purpose. So these are not like your one-page policy that comes along after the fact;
this is really about how to develop, how to design, and how to implement. - And, sorry, the idea that it doesn't have to be a zero-sum game, that building ethically and building
well can go together.
And I think we've talked about that in other shows,
and about how important it is that the business side of this equation understands it. Maybe dig into that just a little more? - Yeah, absolutely. I mean, there are multiple different ways of thinking about this.
Like for example, if you think about,
let me just go back for a second to the unfair discrimination case, right?
So if you want to implement an AI system to determine who you should give what level and what type of loan to, what you're really trying to do, your goal, is to capture all the customers
that are qualified for this loan. All the customers, right?
A lot of the time, what happens with
not-so-well-done systems is that, because of historical racial and gender biases,
for example, in your existing dataset, or because of the fact that there are so many groups that don't even have access to banking systems, which means that they are not even in your historical dataset,
what ends up happening, when you don't look at the fairness of your model, or of your AI system, is that you are missing out on customers. You are missing out on great potential customers.
So this is a good example, and a lot of the time this is the type of example that we see over and over again: you wanted an AI system, and that AI system is not really serving the function you wanted it for, which is to capture all the customers
that would be a good fit for this loan. And the same goes for hiring,
you know, for ranking resumes, or for education applications,
like student applications. In all of those, you're trying to capture the best fits, and if the best fit is discriminated against, then the system is not doing its job. Or, you know, from back in the day,
one of our classic examples is facial recognition technologies. If facial recognition technologies keep failing on women and people of color, well, that just means that the system does not function well. So as, let's say, a police agency
that buys this technology, it keeps failing on you. Again, this is a business case. Or if you are using, you know, large language models, and you have confidential information, and you don't know how to use the language model,
and you put in all the confidential information: well, you have a problem, because that is not what you wanted. It goes against your business interest; now a lot of your confidential data is out. All of those types of things.
There are really only limited examples where we have to go head to head with a business, saying, well, we see this is in your business interest, but it is not ethical. That conversation is more difficult.
- And yes, I think it's hitting headlines harder. I think, when people understand... you know, you can ostrich yourself and think it's only this company or that company,
but when people realize the landscape around them is shifting so fast, these questions matter. I'm afraid that if people don't jump into this conversation and at least, you know, find their place in it...
that's why we exist, by the way: so that we can really
bring people into the mix and help them understand the things that people are talking about now, people like Cansu, and the people that she's assisting. - The Super Bowl is a really interesting example.
It's on my mind when you talk about facial recognition. I mean, people are going to want to know how these things are being decided: how their private information is being used, how they're being assessed for a loan.
I mean, it's mind-boggling, and yet we're trying to dial down the fear, because people like you are still part of the conversation. I want to ask you a really granular question, 'cause it came up in your resume:
you've developed something called Puzzle-solving in Ethics, the PiE model. - Yeah, I came up with that, yes. - Yeah. - Before I get into that, I just want to point to something
that you said a little bit earlier: that if you're not deciding, somebody else is deciding.
I think that is so important to underline.
There is no option where no value judgment is being made. There is constantly a value judgment being integrated into AI systems. They are designed with a value judgment
in place: they are optimized for engagement, or they are optimized for better accuracy, or for protecting your privacy. These are decisions; these are trade-offs.
You don't have the option of saying, let's just not do this, let's just do math. Well, your math bakes those values into your variables. Math is not just numbers; the numbers are attached to purposes, to data. There is no such thing
as a completely objective, value-free system. So I didn't want to lose that part, because you said it so correctly: there is no option where nobody is deciding. If you are not deciding, somebody else is certainly deciding.
- Well, yeah, and I'll let you get to the PiE question, 'cause I literally want to know. This is not gonna be our last conversation,
I'm just telling you. We could do this all day, right, Cara?
- Oh yeah, I would love to. - I would love to.
- So first, the PiE model is something
that I developed back in 2018, 2019. And the idea there was, these were still early times. Back in 2017, 2018, by the way, there was so little conversation about AI governance and AI ethics.
There was still some. I mean,
I don't wanna overlook the great work individuals have been doing around the world for such a long time. But in terms of the discourse, I remember,
when I first reached out to companies, I had to say: I would like to discuss AI ethics with you,
and here is what that means.
(laughing) There was no well-known, established concept out there. So the idea of the PiE model, Puzzle-solving in Ethics, was trying to really push this idea that
I'm not talking about, let's have another committee, let's have another one-page policy, let's have three principles and call it a day. What we are really doing with ethics is as exciting and ever-changing and progressive as innovation is,
because ethical questions are connected to these innovations. So a lot of the time, it's not like you come to me, I know the answers, and I'll tell you what to do. And that is also the reason
why, when people, very rarely at this point, come and say, what do you mean? I act responsibly. I'm a responsible researcher. Why do I need to talk to you?
I say, well, it's not personal, it's not about you.
The problem is, we often don't know
what the right direction to take is, because these are complicated questions. Which data set? Which model do you choose? How do you decide what type of fairness
you are going to aim for: are you going to equalize the outcomes, or are you going to equalize the treatment of individuals? These are questions that don't have rule-of-thumb answers.
So it is like a puzzle. And the reason I always loved philosophy and ethics is that it is like a puzzle. You are constantly solving these high-stakes questions, with extreme time pressure.
And you have to, because, again, going back to what you said,
somebody has to. And yes, you will sometimes be wrong. But you will be wrong with the best intentions and the best information out there. And somebody will correct you, because you're
going to publish your results, or talk about your approach. And then you're going to do it better. You're going to iterate again. So just as in innovation, I would argue,
and I say this always in our work as well.
Also in governance and ethics, we iterate. You do your best. You put it out there. You watch it. You monitor it.
And if things are not going well, you iterate again. That is very, very different from what most companies are still doing, which is: we have a five-person governance team, a committee that gets together every two months
and talks about what we want to do, and here's our one-page policy. That, basically... - Yeah, you got it right: no, that's not going to work. What I love about Boston, though, is that, to me,
the world comes to Boston to be educated. I love to hear news stories where I see that, for example, in health care, there's some policy about, you know, patient information, let's just say. I like the fact that in certain countries, and hopefully
our own, we're understanding the level of care needed when sharing information responsibly. I just love the fact that AI is this great leveler. And I know that this philosophy exists in Boston,
because that's where we built our show, right, Cara?
We're not native to Boston, but we both enjoy the fact that it's an international kind of hub. I mean, I'm sure that in your capacity you're working with big organizations, like the WHO and, you know, the UN.
And you're from Turkey, is that right? - Yes. And I love this international community, and I really particularly love what's going on in Boston. And to agree with you, by the way:
I think Boston is kind of like a magical Disneyland, because you have everyone who has the best, most up-to-date information about everything, right? Like in this sense, like this conversation right here.
- But I know there are Northeastern campuses everywhere. I mean, that's another thing. I enjoy the fact that you're global and, you know, that you have so many campuses, but Boston is just this playground. I want to say something about your president.
This is a quote from an op-ed piece that I read: "If colleges and universities can see the AI revolution as an opportunity, instead of a threat, we may be able to reassert our relevance at the very moment
much of society is questioning our value. This is the moment for academia to step up." Do you have any comments on that? - I mean, I cannot agree more. This is so correct because, I mean,
we've been through this, right?
As soon as AI became a thing, meaning, truthfully, as soon as ChatGPT came out, basically, the immediate reaction from education,
not just higher education
but the whole education setting, was: oh, they are gonna cheat now.
And, come on: yes, of course they're gonna use it.
Students are gonna use it, and by the way, so should you. But the question is, how can we best do it, how can we best use this system? And this comes up over and over again, right? How do you design your courses?
What type of courses are you putting out? How are you integrating AI into courses? But very, very importantly, how are you integrating the things that we are talking about, the philosophy around it, you know,
the social aspects around it? Humanities and social sciences, I don't think, have ever been more important, because AI is getting better and better, but it requires us to be able to understand how we want to use it. It's a tool.
Unless we are gonna say, go ahead, AI, run our world, we are just gonna be your little pawns. Unless that's our goal, we better learn from historical changes, we better understand the philosophical thinking behind it,
we better understand how society is impacted by it.
And the university, I always think of it as a small version
of the world, right? You have all these different types of people: the faculty, the researchers, the teachers, the staff and admins, the people who keep the university going,
you know, maintenance and everything, and the students. All these different people, with different goals, working together. And AI is not just, you take AI, you use it. No, no. As we're using AI systems,
you adapt your behavior. Now, as a student, you adapt your learning structure.
Now, as a professor, you should be adapting
your teaching structure, your courses, the way that you do exams. Of course you should. In a very mundane way, this comes up, for example, among many, many different ways, with online courses. Many universities have online courses,
and in online courses, you wanna make sure that students don't cheat. And now, another use of AI, not just creativity, right? There are proctoring AI systems that students install
on their, you know, computers while they are taking the exam. The system makes sure that they are just taking the exam.
They are not switching tabs. They don't have anybody else around. They don't look away. But you also understand, there are so many problems with everything that I just said. They don't look away?
I mean, just look at the video that we have shot up until now: how many times have I looked away while thinking? That doesn't mean I'm not taking the exam seriously, right? - Right.
- Does it mean that you are gonna check my tabs?
Like, does that mean that you have full access to my computer?
Does this mean that you basically have, like, spyware?
And what does this mean? There are so many different uses of AI relevant to higher education. And one way to approach it is: well, we need these systems
because we wanna catch the cheaters. The other way to think about it is: let's think about what the new form of teaching should be. How do we adapt ourselves?
In the end, the goal is to make sure that we prepare our students for the world, and for them to be not just working but happy in the world, functioning in the world.
So how do we do this? That's really the big question. And we can start from scratch if needed. So it's not supposed to be just minor, minor changes. - Yeah, and I have to give a shout out
to the Northeastern students, 'cause I go to a lot of events around town, a lot of events. And I see so many of your students out in the world. And I know that's certainly part of the Northeastern way, with co-op and everything, being out there
and communicating. But they're just amazing, and I see them really working very hard to make connections in person. - Yes, which is very cool. And it's the same for me: whenever I talk to
an industry representative, a company CEO,
they always mention, oh, we had a co-op from Northeastern,
they were amazing. And every time, it's like a little heart that pops up. - Exactly. - I've got to point to the fact that you have a policy that supports that,
that you understand that your students are natives when it comes to AI, that they're bringing a part of this puzzle, and that you mix it up a little bit. You're interested in that intersection between what jobs are actually out there
and what teachers can do to help students. I mean, it's very clear that you've got this system down and that it's different. I can see that even as an outsider. - Absolutely. And I think, you know, these questions,
I think it's very fitting for Northeastern
to think about and talk about.
I mean, I'm grateful that our president is so upfront
about these absolutely critical questions,
existential questions, really, for higher education. Because, you know, we are the type of university that is really at that intersection:
we have so many moving parts that are important.
Right? As you mentioned, we are a very global university. So when we think about AI, we think not just within US regulations and thinking, but also the EU and Canada. And we are a research university.
We have so many different departments; we are not just focused on the technical, or just on the humanities. And we not only have all these different departments, we are highly encouraged to collaborate
between departments. And we have online courses. So all of this really means that we cannot escape thinking about these questions.
And the thing is, we don't try to. We really go: okay, this is fantastic. This is an opportunity. Let's take this opportunity, and let's talk about how to think about AI in higher education,
and in my case, how to think about AI governance in higher education. Because we are building AI systems, we are buying AI systems, we are deploying AI systems, we are using them for teaching and learning,
we are using them for research. You don't get more immersed than this, and you have to have,
you have to have, like, really robust governance
to make sure that we do everything correctly, and that we are really looking out for the best interests of our community. - Right. And higher education has such a special role:
obviously training the next generation, advancing science, research, all of those things. And I love the idea that AI may save the humanities, but we'll talk about that separately. I want to see a comeback for the liberal arts degree.
- But universities exert a different pressure on the system too, which is super important. Because it's that whole point of who can do that:
leaning into not worrying about profit first,
worrying about impact first, things of that nature. And the universities are so needed in that trifecta of government, higher education, and business. It's such an important piece.
And I think people who worry about higher education, or think it's, you know, too expensive, or other things,
are missing, in some ways, I think, the role that it plays
in pressuring the system to do better. - Right. And I would second this with a lot of enthusiasm, because I think what we need to remember is that universities are really unique.
The reason why they exist is that we have the public trust for educating the youth, preparing them for life, but also doing research that serves society. That is why they exist, right? So we have a very different structure.
And that also tells us something: in business, ethics is a hard sell. You know, no matter how much I give you this ROI conversation, you know, it's for your business benefit, still, among so many different things that businesses will implement, ethics doesn't necessarily come first, right?
that businesses will implement, ethics doesn't necessarily come first, right? But there's a major difference with universities because universities in higher education. First of all, you have all the experts that you need.
You have everyone in house. You want to figure out how to create the best organizational process? You have your business school. You want to figure out how to deal with private data? You have your computer scientists. You have all your experts in house, which is fantastic. In addition to that, you are also in this special position where you can be impartial. Again, as I said, right, we don't have products.
We have a service, towards our students and towards society. We don't have products where we need to make sure you buy that product, make sure that you see it 500 times a day until eventually your brain says, buy that product. We don't have to care about that. That is not what we do. So we are in this extremely unique position where we can lead AI governance, responsible AI, ethical AI, by example. And I would go further than that: we have to.
What does it mean if we don't lead in this area? How can we not lead in this area, when we exist because of public trust? - Right, and that could be said for Boston, too. Yeah, I would say that's exactly why I made a beeline here, that's exactly why our show exists. And I want to bring you back for more conversations.
This has been really amazing.
You have made me so proud of my liberal arts education. I feel like, well, I'm going to say it, badass in the world, because I understand the human element right now. And it's because of these kinds of conversations that I feel like people can really start to grow and hope.
And I want to just encourage anybody to follow you.
We'll put links in your bio and show notes.
This is a very important time. There's a lot coming up for you, particularly. Is there any shout out you want to give? I know you just finished up some research work, and you have a launch coming in the month of March. Is there anything you want to shout out? Go for it. - Thank you so much. Nothing specific, I would just say what you said,
which is just: follow our work.
And this is, again, going back to higher education and, in some sense, also Boston, I would say: reach out. Because I don't think there are many other teams that have the experience that we have, both in higher education, in research, and in implementation, in the industry setting. Reach out. I mean, we, as the Responsible AI Practice, we exist because we want to make sure this actually happens.
We don't want to just talk about it.
We don't want to just have our papers published and so on. It has to happen, and we have to implement. That's why I work with, you know, the intergovernmental organizations. That's why I work with these groups and universities and industry partners, because the ultimate goal is to make sure that whatever we are talking about, whatever we are thinking, whatever we are researching,
eventually goes into somebody's production. And it can be, you know, the image can be as cool as, well, we are stopping Terminators, or as mundane, but I think super important, as your insurance, you know, treating you well. When you get sick, you actually get coverage, which is, I'm sorry, but super important. It is not trivial; anyone who has had to go through it would tell you so. So we work across domains. We work with all practitioners and I think, you know,
we are more than happy to partner with other universities, other researchers, industry partners. Just follow us, get in touch with us. - You're incredibly accessible. Cara, any final thoughts? - No, it's just great.
And thank you so much. I am really looking forward to talking with you more and, you know, figuring out how we can make Boston the official hub for ethical AI. - Well, let's just say that you both inspire me to get on an airplane. But thankfully, to our audience and to me, we can have these conversations. Please come back and consider yourself part of the Building AI Boston tribe. We're celebrating Women's History Month, and you are certainly a woman making history. Thank you so much for being here. - Thank you so much for having me. This was so enjoyable. And I look forward to chatting again. - You will definitely be back. Thank you, bye, John Sue. Thank you for joining us on Building AI Boston. Stay tuned for more enlightening episodes that put you at the forefront of the conversations shaping our future.


