The DSR Network

Siliconsciousness: The Power of Applied AI Optimism


AI should serve us, yet that’s not always the approach that experts take. There’s been a lot of talk about AI fueling economic displacement, but what about AI that’s a partner instead of a replacement...

Transcript


To stay up to date on all the news that you need to know, there's no better p...

right here on the DSR network.

And there's no better way to enjoy the DSR network than by becoming a member.

Members enjoy an ad-free listening experience, access to our Discord community, exclusive content, early episode access, and more. Use code DSR26 for a 25% off discount on sign-up at thedsrnetwork.com. That's code DSR26 at thedsrnetwork.com/buy. Thank you, and enjoy the show.

Welcome to Siliconsciousness, the DSR network podcast focusing on the artificial intelligence

revolution, politics, and policy.

Hello, and welcome to the latest episode of DSR's Siliconsciousness. I am David Rothkopf, your host, and this week, as every week, we're going to be joined by an expert

who can help us explore different aspects of what's going on in AI and related next-gen

technologies. This week, we are extremely fortunate to be joined by Angela Aristidou, who is a professor at University College London, and a faculty fellow at the Digital Economy Lab and Stanford's Human-Centered AI Institute.

She speaks, writes, and advises about something we're really interested in, which is the real

life deployment of artificial intelligence tools for the public good. Welcome, Angela, thank you for joining us. Thank you for having me, David. So I became interested in your work. I read a piece that you wrote in MIT Technology Review, and I then read a couple of others,

because you're dealing with what I consider to be an especially interesting gap in knowledge and understanding, which is how we go from the theoretical frameworks we use for AI, or the frameworks that are used by AI creators to shape how we look at it, to the real world of applying it in small and medium-sized businesses, not-for-profit organizations, regular businesses, and regular daily life, and there's a big gap

there, and that's perhaps understandable since AI is a relatively new technology. But before we get into the specifics of that, what I'd like to do is ask you to address a subject that has come up a lot here. And that is the seemingly premature backlash against AI that we're seeing. It's generational, you know, the sort of Gen Z take of, I don't want anything to

do with it, and there's this sort of growing view. It's almost political in some places: AI bad, you know, it's George Orwell, like, four legs good, two legs bad, AI bad. And so even before we get to these applied views, we're dealing with this sense, you know, that this is a malevolent technology, as if such a thing could exist.

And I was just wondering how you deal with that, because you must encounter it periodically

too. Thank you so much, David, and thanks for actually pointing to the elephant in the room. Yes, indeed, artificial intelligence is a cluster of technologies, and across the whole spectrum of AI technologies and tools and applications, there are significant concerns from many parts of society, and some of them are warranted, because we have had experience

with other emerging technologies in the past, new technologies that have not served the purposes we attached to them, that didn't do what we hoped they were going to do. They didn't help humanity flourish the way that we thought, at least not in the beginning. So in that direction, about a year, a year and a half ago, myself and Professors Erik Brynjolfsson, Nathaniel Persily, Sandy Pentland, and Condoleezza Rice edited a book

on AI and democracy. It was called, it is called, the Digitalist Papers, and as co-editors, we invited voices from across different aspects of society, from philanthropists, industry leaders, and also academic leaders in different disciplines, to share with us

their vision for AI, and what I saw through that collection and what we put forward

is optimism, pragmatic optimism. Technology, though, any technology, not just artificial intelligence,

requires thoughtful implementation, and that thoughtful implementation shouldn't be left only to individuals, and it shouldn't be left to specific organizations or specific parts of society. There should be some understanding of, this is where we're headed with this, and this is how the different institutions of our society can come together and consider the guardrails,

and where the best space for opportunity is. So in line with that, my whole research has been not about demonizing the technology itself,

but more about thinking of how do we govern the technology in sensible ways that allow

for the innovation, but also allow for human flourishing, and for the things that we as humans care about most: the relationships, the interests, the passions, why we go to work, and why we engage with other people through our institutions and our organizations. Yeah, I mean, you touched, just in your response there, on one of the things that I find most troubling and most challenging, and that is vocabulary-based, because AI is not a thing.

AI is a thousand things, it's a thousand thousand things, and when somebody says, well,

I don't like AI, it's like, well, what is it that you mean?

You mean the technology that helps your airplane land properly when you're flying in an airplane, or is it the technology that gets you to an answer to a question on Google faster than you thought, or is it some technology that's about to hack into your personal data and sell it off to a foreign government, I mean it's a thousand thousand things, and so it's very hard to have a conversation about it unless you get down to, well, which AI, what do you mean,

what kind of AI, and what purpose, and what use, and we almost never get there in our discussions.

Exactly. My space, the space that I'm working in, is AI in organizations, and organizations can vary in size, from a very small mom-and-pop business in your neighborhood all the way up to the US government, which is also a form of organization. And when I look at AI within them, usually the way that it would work, or it should work, is that the people within the organization

decide on a function or purpose: which direction is this tool going to help us go in?

What's the strategic objective? Why are we bringing this new tool in the organization?

What happened about three years ago is that I very often observed that people would focus on the technology more than they would focus on the reason why we're bringing the technology in. I would sit in board rooms and advisory boards and listen to very well-motivated and well-intentioned people in positions of power within the organization discussing the need to bring AI in, and the question would be, what for, and the answer often wasn't there.

As long as the technology is brought into our systems, our human systems, in a way that serves our purposes, I think that we have a good chance of making it work for us rather than the other way around. Of course, part of that has to do with the messengers, and for some reason, the leading spokespeople for the companies that are at the vanguard of AI just aren't doing a great job. I mean, some of them are among the most odious people in our public life, some of them are less odious,

but it does create a problem. But getting beyond personalities, the way they sell AI is they say, this is faster, this is smarter, this is smarter than a human being, here are the metrics. And one of the things I thought was quite interesting is that those metrics don't actually work in application, as you've written. Talk a little bit about the gap between the metrics that AI gets sold with and the metrics that ought to be used when it's applied in an organization.

Well, it's quite seductive for us as humans to have a metric, isn't it?

It's something that we can put on a leaderboard. We can say, these are the leading AI tools. This is faster, this is more efficient, this gets the job done in half the effort of a human,

faster than any other comparable AI tool out there.

That has become a prominent way of assessing and evaluating AI tools. It makes sense on so many levels.

Initially, it was human versus AI. That was the biggest comparison out there. Remember with chess, we had a human champion versus an AI or algorithmically driven champion, and who is faster and who wins the game. Well, in essence, the gating metric is a Turing test. It is. It is. Right. So actually, my colleague Erik Brynjolfsson has talked about the Turing Trap, so that is an interesting piece for listeners to go and look for. The Turing Trap is,

what exactly are we making these tools for? And that's your point as well, David. My article, though, focused on the next step to that. If we take it for granted that these

benchmarks that are based on technical efficiency and speed are the only thing that we should

take into consideration, then we end up adopting, in our organizations and our governments, AI tools that don't fit the context in which they're supposed to be adopted, or they don't fit the way in which people use them within our organizations. And typically, people work in teams. It's very rare that you have one individual alone in an office, without interacting with other humans, coming up with a solution to a decision or a puzzle or a problem. We don't work like that in

organizations. In most organizations, even very isolated tasks have a before part and an after part, where that task has to be coordinated, integrated, negotiated in the bigger picture of a workflow, together with other individuals within the organization, other parts of the organization, other units. So the idea that we have human versus machine benchmarks that are at the individual level of a task, and the idea that those are not considered within the workflow and within real organizational

environments, just doesn't make sense. It doesn't make sense, and it's sometimes even misleading. It makes people think that our AI adoptions have not been successful, when in fact,

what we were measuring just wasn't intended to show that in the first place.
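The gap she describes between task-level benchmarks and workflow-level evaluation can be illustrated with a small sketch. This is a hypothetical toy example: the functions, numbers, and cost parameters are invented for illustration and are not drawn from her research.

```python
# Toy illustration: a task-level benchmark can look strong while the
# workflow-level picture (total human time once coordination counts) does not.

def task_accuracy(predictions, labels):
    """Task-level metric: fraction of individual items the AI got right."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

def workflow_hours(predictions, labels, review_hours=0.5, rework_hours=3.0):
    """Workflow-level metric: every AI output still gets a human review,
    and each error that slips downstream costs expensive rework."""
    errors = sum(p != l for p, l in zip(predictions, labels))
    return len(predictions) * review_hours + errors * rework_hours

labels      = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
predictions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]  # 80% benchmark accuracy

print(task_accuracy(predictions, labels))   # 0.8 on the isolated task
print(workflow_hours(predictions, labels))  # 11.0 hours of human time
```

The point of the sketch is only that the two metrics can move independently: the same tool scores well on the isolated task, while the surrounding review-and-rework workflow determines whether adoption actually pays off.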

Well, you know, so it gets down to, I mean, in the article that I saw that you wrote about this, which was in the MIT Technology Review, you get down to certain core

principles, which I think are useful to go over. But the initial idea, or the central idea here,

which I think people need to think of is to assess AI outside the context of the process in which it is used is a mistake, and that one needs to assess the process and just as you would assess different individuals or departments within that process as factors, AI becomes one of those. And you see whether it enhances the overall process and the team's work or it degrades it or, as is more likely the case, it enhances it in some ways and perhaps creates impediments and others.

But that's a different way of thinking because people are mesmerized, there's a new toy, there's a new box, oh my goodness, we have this AI. And let's see how that has transformed us with its, you know, just with its speed. Now, I mean, to me, this seems like growing pains.

Every time somebody says to me, AI dot dot dot, I think, you know, this is 1776, James Watt

just invented the steam engine, and you want to talk to me about the implications of the industrial revolution. But, you know, perhaps you have a different perspective than I do on that.

It is growing pains, and the amazing thing about being a researcher in this space is that I have the

front-row view of what is happening in a space that is shaping up in front of my eyes. And that's where my insights come from. They come from real-world research that I've conducted in real-world organizations where AI is being adopted. And in those organizations, in settings that range from humanitarian aid all the way to health, I have seen how they come together and they decide that, okay, we're going to think of a different way of approaching AI evaluation. And I've distilled those down

to the four principles, or directions, that I suggest in the MIT Technology Review piece.

The first one is the most obvious one in many ways, but settling on one that ...

which is to shift the unit of analysis. So going from individual and single task performance

to actually assessing workflow performance. When we work in organizations, we don't think of tasks most of the time. We think about how those tasks come together in order to achieve something.

That's what I mean by workflow. The second parameter that's very important is shifting the time

horizon. And I really can't emphasize that enough. When I do my research, I observe organizations and alliances of organizations over a period of time. And the initial sink or swim within the

first three months of AI adoption, that is great to know. And I know that it is very important for a lot

of organizational leaders. But what I truly think matters most is sustained AI adoption. And we can't capture sustained AI adoption without looking at the evaluation of AI over a longer time horizon. What does it do after six months? How does it work when we're looking at long-term impacts?
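The time-horizon point can be made concrete with an equally small sketch. Again, this is hypothetical: the usage numbers and the 70% retention threshold are invented assumptions, not figures from her studies.

```python
# Toy illustration: a tool can pass a three-month "sink or swim" check
# yet fail a longer-horizon test of sustained adoption.

def sustained(monthly_active_users, horizon=6, floor=0.7):
    """True if usage in the horizon month holds at least `floor` of the
    early peak -- a crude proxy for sustained adoption."""
    peak = max(monthly_active_users[:3])      # novelty-driven early peak
    return monthly_active_users[horizon - 1] >= floor * peak

usage = [120, 140, 135, 90, 60, 45]  # active users, months 1..6 after rollout

print(max(usage[:3]))    # 140: the first quarter looks like a success
print(sustained(usage))  # False: month 6 usage has fallen well below the peak
```

A three-month evaluation would have seen only the first three numbers; the six-month view tells a different story.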

This podcast is underwritten in part by the Embassy of the United Arab Emirates in the U.S. Its editorial content is completely independent and the views expressed are exclusively those of participating experts. It is presented live without editing. For further information about the UAE's efforts in the areas of artificial

intelligence and technology, go to the website of the embassy at www.uae-embassy.org and search for UAE-US tech cooperation. We thank them for their support. We thank everybody who is supporting this podcast for their support, and we look forward to it developing and growing over time because

the issue is so important. One of the things that strikes me about this is that sometimes we

have it backwards. We're trying to assess the AI as it is combined with the team. But the more interesting question very often is, how can you use human beings to improve AI? I'll give you a couple of examples that come up. One that's in the news right now, people saying, "Well, we're in the military. We'll use AI for targeting." We find that when they wholly rely on the AI, you end up with serious problems. There is a human-AI nexus there that creates the optimal outcome, and we

have to assess that. I talked, I don't know, a year ago on this show with a venture capitalist who said, "Well, he's looking for the three-person unicorn." I was like, "Well, what's a three-person unicorn?" He's like, "Well, I'm looking for a company that only has three people in it, but uses AI to behave like it has a hundred." He sort of wants a human-free company. I was like, "Is this the direction?" It seems to me, you're on the cutting edge of a different discussion. That discussion is

about the equation AI plus human, and trying to come up with the optimum way to produce the outcomes that we seek. Do I have this right? Yes, you do. Orchestrating. I choose that word very carefully. Orchestrating humans and AI tools, plural, within the same team is going to be a very prominent feature of our organizations in the future. I want to understand what combination of humans and AI we should be aiming for in which context. That combination can mean many things. We might have

AI tools that work really well with two humans and amplify their work in a way that's

meaningful to them, but we might have a different combination when we are in a

setting where humans have to authorize or at least review every decision that's made by the AI

because the regulatory setting requires that a human is the person or the entity or the final decision maker. All of that is shaping up around us. It's almost like there's so much experimentation happening in organizations, and I am fascinated with what people are actually engaging with and how they're trying out different ways of engaging with the technology in the real world. Often, some of that is lost in the conversation, because we only hear about the cases that go off the rails or the cases that

are promoted through social media or other media platforms. We never hear about the wonderful ways

and the fantastic discussions that are taking place behind closed doors in different settings. It's interesting you use the word orchestrate, because from a management perspective, whether you're dealing with a government or a private sector organization or a not-for-profit organization, the issue of AI, as well as how it gets integrated into the operations and what that requires,

I think, in new generations of leaders, CEOs and others, is their ability to be a conductor,

to play, in your term, to orchestrate, and to figure out how you manage an orchestra that consists of people and some machine capabilities. These are early days for us to be able to figure that out. Do you think that the organizations that are training next-generation leaders recognize this new role of trying to conduct, to orchestrate, humans and machines together? Organizations that train the next generation of leaders are usually our universities, mostly. From my own research together with

Professor Wilson Wong at the Chinese University of Hong Kong, we studied how the top 24 Asian universities are redesigning their curricula to meet the challenges of AI, and what they have done consistently across the board is that they have decided to emphasize, very rightfully in my opinion, the training of their students within human-AI teams. So they're not teaching them how to use any particular AI tool, but they're teaching them how you work in a team

together with AI alongside you within that team, and that is a different skill set. It requires you making decisions, decisions that range from which AI tools are best suited for our team, to what level of trust we give them, to who is the ultimate decision maker. Is the human in the loop, or is the human only coming in at the end to say, is the outcome of the AI or the decision of the AI something that I agree with, and proceed to the next stage? It has to do with a conversation,

or at least some decisions to be made, around what we do not use AI for, which is as critical as

what do we use AI for? All of that is part of what I see as crucial elements for training the

future leaders of organizations, and having it as early as possible within their student training or their MBA training is going to make a huge difference in how fast they are able to leverage these new technologies and how responsibly they are able to leverage these new technologies in the workplace. Yeah, it's quite interesting also, because each university is in a different society and has to deal with groups of students who come to the university with different kinds of

knowledge and prejudices, and so I see this being translated already. You mentioned Asian universities, but you know, if you look at Chinese levels of adoption of AI, or you look at the levels of adoption of AI in Singapore or in the UAE, it's much higher than the levels in the United States,

because I think there's an AI allergy here, and there is a certain degree of AI ignorance, and also

I think a lot of universities still treat AI the way they treated integrating

personal computers into the workplace. It's down

the hall, it's handled by them. This is a subset of a particular part of our business as opposed to

something that infuses everything and is part of everyone's job and I'm just wondering are we dealing

with sort of cultural differences worldwide that are going to produce outcome differences?

I don't have enough evidence to speak of cultural differences, but what I have seen is beautiful case studies that I have personally examined, in which people both in the West and in the East have integrated AI in a more holistic way across their organization, and, I might add, also very thoughtfully. One of my favorite research sites is in health, and in health I have seen hospitals integrate AI on both extremes of the spectrum that you just described. I have seen

hospitals integrate AI in the sense that it's the responsibility of the chief information officer and the IT office is going to deal with it, and I have doubts about how well that was done. And then I have seen hospitals integrate AI in a more bottom-up, integrated way, starting from the very beginning: what do we want to achieve with bringing AI into our hospital? How is it going to honor our existing commitments to our patients and our staff? How is it going to continue to comply with

regulations and policies that are important to us? And I have seen the effect of that thoughtfulness carry through a year later and two years later in the outcomes. So yes, there's a lot more work to be done in the beginning to integrate, and to do so in a way that helps the aim and also helps the organization and serves the people within the organization, but also the people that the organization serves, ultimately, the patients. It's up to us to decide how we bring the technology in. Going to

your point about AI being an exceptional technology, I want to push back a little bit on that, David.

I hope you're okay with it. I think, yes, AI is exceptional in many ways. The fact that

it's omnivorous when it comes to data and the way that it automates in many ways and even when

it doesn't automate but it recommends there's always the feeling that there's a black box. I get it.

I do think of it as a technology that's distinctly different from what we've had before, but I don't want to let go of the idea that we have handled technologies before that were considered to be very distinctive and very innovative and new at the time, and we have done so well, and I want to hold on to that optimism here as well. I think we have to remind ourselves that just because it's AI doesn't mean that we have to consider it as fully exceptional. We have to find ways to make it

more normal, to normalize it within our organizations at least. Well, I think, you know, this is all, I mean,

these are all relative terms, and I think the reality is that not only is it big and multifaceted,

but it's different in different organizations in different ways and needs to be assessed in that way, and that change is going to accelerate, and there will be more applications and so forth. Let me conclude with one specific example, though, that picks up on your optimism and picks up on the way that this can be, you know, sort of tackled by people not just intellectually but in practice, and that is a piece I saw that you did on how AI is used, or can be used, within not-for-profit organizations,

and specifically how AI can be used to produce, you know, more human-oriented, better outcomes for people, in other words, how it can be used in a way that's sort of contrary to the reputation of AI, because I thought it was a good article, and I just, I'd love it if you could highlight a couple of points. Thank you so much for bringing that up. I think you're referring to the Stanford Social Innovation Review article. Yes. That's one of my favorite pieces, together with

Sam Fankuchen and Andrew Dunckelman. We wanted to give people in the non-profit sector examples of how AI can actually benefit them in a way that is consistent with the expectations of the sector:

to be more human-centric and to help them do more, and not fall into the trap of treating it as an

automation-for-efficiency tool. We go through that article step by step and talk about

how you use AI to prioritize the human relationship, and that's at the heart of everything.

AI can help us prioritize our human relationships by freeing our hands from the repetitive and

mundane and easily automated tasks that do tend to take up a lot of time and for us it was

important for that voice to be out there because without it an entire sector might be reluctant

to engage with a technology that could actually help them to benefit more people in their networks.

Yeah. It's a fascinating article. I encourage people to read it, and of course, anybody who is listening to this podcast will say, well, I want to follow the work that Angela is doing in her many different locations and activities, and hopefully we'll be able to

invite you back here to continue this discussion because these are early days and I think the

work you're doing is really pioneering and important. For now, thank you very much, Angela. Thank you, everybody, for listening, and join us again next week here. The pleasure has been all mine. Thank you so much, David. Thank you. Thank you very, very much. This was Siliconsciousness, a production of the DSR network.
