AI at Work Episode 45: All About AI Ethics with AI Ethics Lab Founder, Cansu Canca

Episode Overview

This episode of AI at Work is all about AI Ethics with Cansu Canca. Cansu is a philosopher and the founder of the AI Ethics Lab, where she leads teams of computer scientists and legal scholars to provide ethics analysis and guidance to researchers and practitioners. Tune in to hear all about AI Ethics Lab's founding story and what they work on, Cansu's take on the market starting to move towards the importance of philosophy and ethics, the strategies she uses to teach ethical issues to non-philosophers at workshops, and much more.

Listen to every episode of AI at Work on iTunes, Google Play, Stitcher, SoundCloud, or Spotify.

Rob May

CEO and Co-Founder
Talla

Cansu Canca

Founder and Director
AI Ethics Lab

Episode Transcription   

Rob May: Hi, everyone. Welcome to the AI at Work Podcast. I am Rob May, the co-founder and CEO of Talla. I'm your host today. My guest today is Cansu Canca, the founder and director of the AI Ethics Lab. Cansu, why don't you tell us about your background, how this lab came about, and what kinds of things you work on?

Cansu Canca: Thanks for having me, first of all. I’m a moral philosopher, so my whole background is in moral and political philosophy. And I’ve been focused on applied ethics all along.

I started off with ethics and health. Bioethics is one of the most developed and most deeply rooted applied ethics areas, so that's where I started. Specifically, in what we call population-level bioethics. Those are the policy-level questions: how should we arrange policy to make sure that it's right and fair, and all of those things.

I was working on these issues, teaching at the medical school, and at some point I started realizing that we get a lot of seminars on new health technologies that use AI systems. I also realized that our typical way of talking about bioethics doesn't really cover some of the questions that arise from these new AI systems that we use in health care. That made me very interested in the ethics of technology, but more specifically in the ethics of AI.

What I did was I said, “oh, sure” (this was a little over two years ago) “obviously, some people are working on this, I should just go ahead and join one of these academic centers or one of the industry places where they are, I’m sure, busy with AI ethics”. Turns out that wasn’t the case.

(laughs)  

Nobody was really working on this a little over two years ago– November, December 2016. I talked to a whole bunch of people and realized that the demand for philosophers in this realm just wasn't there yet.

I figured, in that case, I should just start something that should have already existed: a place that does interdisciplinary research on AI ethics, but that also aims to do really applied work– really look at the practical aspects, talk with companies, talk with start-ups, work together with practitioners to solve their problems, their everyday problems. That's how it started.

RM: It's pretty interesting. When you told your parents that you were getting a PhD in philosophy, were they worried you might not be able to find a job? Was that the perception at the time you were going through undergrad and grad school? And how do you feel about that change– the fact that the market is starting to move towards the importance of philosophy, and ethics, and back to some of those values?

CC: You’re totally right, my parents should have been worried. They were not worried, which makes me doubt their perception.

(laughs)

The academic job market in philosophy is notoriously terrible. Bioethics was always an option outside of academia. You have hospitals or public health institutions that do hire or work with philosophers on the questions that matter to them.

I did research at the World Health Organization, for example. As I said, I worked at the medical school. So that was almost the only area where you could do something applied with your ethics background.

Now, AI is coming in, so I would love to think that this is a whole brand new area where philosophers will flourish and we will have this amazing discussion. There is, of course, the possibility that it's not going to go that well, in that idealistic way. I hope we won't end up with a whole bunch of "ethicists"– meaning people who are ethically minded, well intentioned, who worry about these things– but who are then the only people working on this, rather than philosophers actually joining the discussion. That's my worry. But I hope we have a new area of work.

RM: People who listen to this have heard me bring this up before: I'm very concerned, and have been, that as an engineering major I was taught no ethics class. My brother is a computer scientist; he was taught no ethics class. Now we have a bunch of ethical dilemmas about the technology that we've created– about notifications, and attention gathering, and how manipulative they might be. AI could be all of this on steroids.

What are your thoughts on that? Let me break my question down into two parts. Number one, is taking one ethics course in college, for example, enough to ground somebody to have some of these types of discussions? And then, do you see any trends in that direction around ethics and technology?

CC: There are definitely new trends. A number of universities are working on incorporating ethics classes into their CS curriculum now. Harvard is one of them. I believe Northeastern is working on this, as well. MIT is working on this. There is definitely a clear trend towards this. I am hoping that these will be successful.

Some of the areas where we have always had ethics courses are medical schools and schools of public health. We could, and I think we should, question the effectiveness of those courses as well. Did we do a good job until now in those areas where we've been teaching? We could learn from that– not copying it, but learning from those curricula how to integrate ethics into the curriculum.

One problem with ethics courses is that they tend to be very idealistic, and when you go to the job, there's a disconnect. And that disconnect usually causes students to just neglect everything– forget everything they've learned in the class.

Unless we can sort of bridge that and say, “look, even if you cannot make the perfect decision, even if you don’t know all the features, you can still have some sort of ethical reasoning that can guide you”. Here, more than just focusing on a single ethics course, I think what matters a lot for ethical reasoning is to be able to have critical reasoning and analytic thinking skills, which does not necessarily require an ethics course. I would say, yes, have one ethics course, at least, that’s a good start. But, have a whole bunch of critical thinking and analytic reasoning integrated into the whole curriculum because those will be as useful as an ethical theory.

RM: When you think about how ethical ideas move and shape over time– what we believe, and how industry adopts ethics– what do you think are some of the things that have to happen? Do you think ethics around AI will be a top-down thing that's imposed by governments, influenced by academic research, industry standards, and groups like that? Or do you think it'll be more bottom-up? Do you think a couple of companies will lead the way? Do you have any opinions on how that might come about?

CC: I don’t have any predictions, but I can tell you what I hope will happen and what I fear. What I hope to happen is, I hope that we will have decent regulations that will at least force everyone to have the relevant incentives to meet the threshold ethics. We should have some sort of boundaries—anything below that is just not acceptable. That’s one thing.

The second thing I really hope we will be doing is to build on top of those boundaries. In your company, in your institution, how would you make sure whatever you are creating goes beyond those threshold requirements? You really try to strive to make it as ethical as possible—meaning, also, how would you devise solutions for ethical problems? Or how would you design solutions for ethical problems?

There will be a number of questions that are solvable if you just really focus on them– there will be technical and design solutions. There will also be those that are just value trade-offs, so you will have to pick one and go with it. There, I think, it's important for companies to recognize which values they're endorsing and to have a consistent structure.

What I fear will happen is– again, coming from bioethics– what has happened in research ethics, which is that we had terrible practices causing scandals and public outrage. And against that public outrage, the regulation came in, and then we had this very heavy-handed IRB structure (Institutional Review Boards– ethics review boards) controlling the research. I think that has a lot of flaws, and I don't think that's the best way of practicing ethics.

The way that the AI ethics story is developing is eerily similar, because it did start with companies going "move fast, break things" style, then having the scandals, and then AI principles just popping up left and right. And the regulators are rushing in to regulate without really understanding what they are regulating. I hope we will somehow divert that story into sensible regulation and proper ethics on top of it.

RM: Now, you wrote an article for Forbes called A New Model for AI Ethics in R&D. Tell us a little bit about your– what was the gist of that piece?

CC: Basically, the main idea behind that is—what I said about the ideal situation in my opinion—you are not just relying on the regulations or outside oversight, but in an institution—which could be a research institute, an academic institute, or a private company—you try to have a meaningful and comprehensive way of integrating ethics.

First of all, one thing to recognize is that ethics questions arise throughout the whole development of AI systems. From the research, design, and development phases all the way to the deployment and updating of these systems, you will have different ethics questions. And AI systems don't happen in a vacuum. Whichever sector you are planning to deploy those systems in already has ethical questions around it, and it's best if you have some sort of understanding of those. The ERD system that I am advocating, and that we are practicing at AI Ethics Lab, basically starts by focusing on three ways of integrating ethics into a company.

One is helping and making sure that developers and designers are aware of ethical issues that they are facing, and they understand when there are questions that they can solve and when there are questions that need further analysis. So if it’s a design solution, maybe they can just catch it and work on it. If it’s a value trade-off, this requires a lot more work on it. We do trainings and workshops just to make sure that developers are like these “first responders”. They recognize the ethical questions, they flag them, and they say, “hey, we need to work on this, we need to look into this further” before moving forward.

The second step is, once you are working on a project, to have ethicists as a part of the team. Depending on the company, you can have an in-house ethicist participating in these project teams or you could have consultants. If the budget is limited, then you can have consultants who either look into specific projects, or even specific parts of these projects when the critical questions arise.

The reason for that is that no matter how much you train your developers, it will just be this ethics-awareness type of training. It will not be a four-year PhD. It's impossible for me to train the developers like that, but I can collaborate with them and work with them to solve these questions that they have, or at least mitigate their ethics issues.

Finally, there's the broader aspect: what are the company's principles? Here, I'm not talking about just putting lofty principles out there and having nice PR, but actually saying, "look, here are our principles, and now we have to think about how to operationalize them". What are the processes involved? And most importantly, when our principles conflict– because they will conflict in hard questions– what are we going to do? Which principle are we going to go with, and what type of process are we going to follow? You're sort of creating these precedents as you go. You can also look back and say, "hey, we made the wrong decision there. We are changing, but now we know why we are changing our mind".

That’s the ERD model. It starts all the way from the bottom with the developers, going all the way up to the leadership.

RM: Are there specific ethical issues related to AI that you worry don't get enough attention today? Because we constantly hear the one about– well, the self-driving car has to choose to kill the passenger or the pedestrian–

CC: Oh, God. Yes.

RM: –or whatever. I think that’s almost never going to come up, so I’m not super worried about that. I’d be interested to see if there’s anything that you really want to highlight as like, I wish people would talk more about this or this category of thing.

CC: Yes. I'm glad that it's getting a little bit more attention since I first started saying "this is my main worry". But my main worry about AI systems is how our decision making is shaped by them.

Basically, we always talk about the agency of an individual, autonomy of an individual, respecting individual decision making. Of course, when you say individual, this also has its broader aspect of social decision making, like, if you think about democracy as every individual making a decision, collectively. The problem is, I don’t have to—I mean, we’ve seen with the elections, but I think it’s much more than that—I don’t have to play with your decision making. I can just play with the information that you receive, right?

It's not just about the echo chamber, the social media, and so on. All of our access to information is somehow AI mediated– or it's becoming AI mediated. If you are looking at the newspaper, even the newspaper has a ranking in it.

Google search results are the same. If you are writing an academic article, Google Scholar search results will have the same thing. Or PubMed. Everything is somehow ranked and given to you in a particular order. So it becomes very important, when we talk about what we know, that it starts with what information we have and what we see around us.

Yes, we should still pay a lot of attention to protecting individual decision making. But, before we get there, we should make sure that this decision is valid—you have the relevant information.

This becomes a bigger problem when the machines that we are interacting with become more seamless. In a search result, you still have pages. If you’re asking Alexa, Alexa gives you one answer, for example. Or Siri. If you have a robot that you are engaging with, it’s not going to give you ten results, because that’ll be a terrible conversation. As we are moving to a more seamless integration, the information that’s provided will be even more limited. This is the main worry that I have. I don’t think we are talking about this enough.

RM: Interesting. Well, I've never said this on the podcast, but I still subscribe to the paper version of The Wall Street Journal. Normally, I feel really dumb when I tell people that. The reason I do it is because I don't like to have information pushed to me. When I go to The Wall Street Journal's website, I know what I'm going to see is what they want me to read, what's popular, or what everybody else has read. I just want to flip through the paper and read what I want to read. Right? I realize that somebody has still selected what goes into the paper.

But I feel like the fact that it's less personalized is actually good and interesting. I will see things I may not have otherwise seen. I actually find it more efficient to spread out the paper and just look– it's bigger than a computer screen, too.

CC: Absolutely. I think that’s another thing. Yes, somebody has selected those news items, but you still can, if you feel like it, start from the middle. Don’t get me wrong, I’m not against rankings. I just think that we should be very careful.

(laughs)

RM: Definitely. Really interesting. What do you think about– well, one of the things that worries me, that I've talked about on the podcast before, is an issue with– take Waze, the traffic app.

Here's the example that I've talked about on the podcast: you and I live on opposite sides of the city, we work in the same building, we're both trying to get to work by 9 o'clock, and we both normally leave at 8 o'clock– Waze learns our patterns. Then one day, Waze realizes you are running late and I am running early, and it could get you there on time by delaying me a little bit and sending me a slower route.

Now, we both use Waze individually to get to work as fast as possible. But Waze could optimize to get the greatest number of people to work on time if it actually delayed me a little bit. A human might not even know about that decision. It might be a byproduct of the AI algorithms learning to maximize something else.

Then, how does it decide when to do this and not? Am I being punished for being responsible, even though you hit snooze and I didn’t? Or maybe you’re worth more. Maybe you’re more likely to click on a Dunkin’ Donuts ad on the way there, or whatever.

(laughs)

Maybe you're more valuable to Google, or maybe you have a paid version of Waze. There are a bunch of issues like this that could come up, where these algorithms have to optimize for something, and that something could have an unintended consequence.

Or it could be something that is mildly nefarious, we don’t know. The bigger problem is, sometimes we might not even notice for a while because we’re just trusting the algorithms. I might never know that it sent me down a slower path. Do you worry about things like that happening?

CC: This particular one, I was not worried about–

(laughs)

–before you said it. No, but I think your example touches on a lot of these interesting questions that, yes, we do talk about in various contexts. I think that the main issue here is that somehow your world is being “architectured”. Is that the right word? Designed.

RM: We’ll take it.

(laughs)

CC: Yeah. Your world is being designed by some agent. I mean, I’m not saying that AI is designing here, specifically. Depending on the autonomy of the AI, we could be talking about AI is designing your world, or we can say the developers are designing your world.

But somehow, your world is being designed around you without you noticing. There is an issue there, because some decisions are taken out of your hands, in some sense, if you have no idea that this is happening. You don't know whether you should switch to another app, because you've never realized that this is what it's doing. Right?

RM: Right.

CC: That's one question. The other one is when you are dealing with systems that work as multi-player systems. Forget about AI or technology– we have this question a lot of the time: individual preferences and rights versus social preferences and rights.

Also, optimizing the social structure actually fits the individual's well-being most of the time, but not necessarily each and every time. This question, again, is something that exists in a lot of public decision making. It's nothing new in the technology sense, but it becomes more prevalent with the technology.

I think the third one that you mentioned is a different type of question. If you had, for example, a paid version of Waze getting you to places faster, then we really get into the social justice problem. Because then, is it "right"? Is it the right thing for society to put the benefits of technology on the one group that can afford it and make life harder for the other group?

I would say there should be strong resistance to just going with a "whoever pays" mentality. It's an interesting question. It's not a clear-cut, "you should never do this" type of question. It's more like: well, if some sort of payment keeps your system alive, if that's the way this company survives, then we should think about it and think about ways of making it not burdensome for those who cannot afford it.

RM: So, are you terrified by the fact that most of our national politicians are lawyers by training, who are not trained in philosophy or technology, and they are the ones that have to make these decisions?

CC: Of course, I am.

(laughs)

CC: I am just generally terrified that we don't have a "philosopher king", and that we just have to deal with people who don't know philosophy, or ethics, or political philosophy, making public policy decisions all the time, in tech or outside of tech. But again, there are ways of changing this. You mentioned adding courses to the curricula– for anybody, I would say. You could also have trainings in institutions and companies. We can push ethical thinking forward if we really want to.

RM: Last question for you. You've mentioned ethical training a little bit, and I know you do some of that– you and your group do workshops. What are the strategies and tactics that you use to teach ethical issues to non-philosophers when you go into a company?

CC: There are various types of workshops that we have designed and are using. Again, it also depends on who exactly is in the room. One that I particularly like is what we call "The Mapping Workshop".

Let me start by saying that ethical discussions tend to be very vague. People make claims, then they revise their claims as they go, without noticing or without others noticing. You cannot pin down their positions and say, "hey, but here is the problem".

What we try to do with The Mapping Workshop is really pin it down. It's in the format of a floor game, and the idea is to have the participants– non-philosophers– do what philosophers do, that is, the ethical reasoning, externally. What we do internally, we help them do externally.

You start with a case and make it more complicated as you go, and participants have to take positions. Every person who is participating takes a particular position, looks at the others who are taking different positions, and so sees all the possible options and realizes their weaknesses and strengths. And if they want to change their minds or modify their position, they literally have to walk from one square to another. They get to interact with the others who hold different views and see the weaknesses and strengths of those views. That's one way of really making it clear to non-philosophers that it's not just a taste-based conversation. It's not just "here is my opinion, here is your opinion"; there is a systematic and analytic structure to our ethical reasoning.

That does serve a purpose. You do get some results with that procedure. Then we have different versions of this. There's a reversed version, where they get to create their own maps. We also have a different format, where we take them through different parts of the development procedure of an AI system and specifically focus on a particular ethical question in each of those parts, as a group. That one is sort of roundtable style.

There are different designs, all of which bring non-philosophers into this discussion not with fancy words or loaded terms, but with a really clear and actionable way of thinking and discussing.

RM: Like, you’re not going to ask them to define epistemology?

CC: No. Only after four years of PhD.

(laughs)

RM: Good. Well, Cansu, thanks for coming on the podcast today. If people want to find out more about your work at the AI Ethics Lab, what’s the URL?

CC: So it's very easy– aiethicslab.com. And they can also follow it on Twitter. The Twitter handle– again, super easy– @aiethicslab.

(laughs)

RM: Awesome. For everyone who has listened today, thank you for listening. If you have guests you'd like to see on the podcast, questions you would like us to ask, or topics we should cover, send those to podcast@talla.com. We'll see you next week.