Updated: Apr 11, 2022
As AI becomes more prevalent and a part of everyday life, there are many questions we need to ask ourselves about the ethics of AI - just because AI can do something, does that mean it should? This generation of K-12 students, in particular, should appreciate the ethical questions that AI raises. With this in mind, we answer common questions that teachers, parents, and kids have about AI Ethics, and include a presentation by an Ethics specialist - Megan Branch - in which she talks to kids about how we as human beings make decisions, and whether computers (or AIs) can be taught to factor in the same ethical frameworks.
What are the ethics of AI?
Artificial Intelligence (AI) is a set of computer programs that mimic what human brains can do. AI has been shown to do well in tasks like recognizing images, forecasting the future from patterns learned from the past, answering questions, and so on. However, while AIs are good at learning from data, they have not learned how to incorporate the complex frameworks that humans use to decide what is right and wrong. AI Ethics is the field that ensures that humans guide AI development in a way that is fair and ethical - for example, by avoiding bias, not repeating historical mistakes, and not exploiting users' private information.
Why is Ethics important in AI?
As AI becomes more prevalent, AI Ethics is required to ensure that AI development does not harm a subset of the human population, particularly minorities. It is also important to ensure that AIs make decisions that value human life, and operate in a manner that is consistent with human values.
How do you practice AI Ethics?
Many countries and corporations are creating guidelines for the ethical practice of AI. These include requirements for training data (to make sure that AIs cannot learn bias), standards for how personal information may be used in building AIs, standards for how AIs are tested, and more. For K-12 students, an awareness of AI Ethics helps them understand what AI can and cannot do, and appreciate how it can be used responsibly.
Can you explain AI Ethics with an example?
Imagine that a bank wants to use an AI to predict whether a customer's loan should be approved. If the bank uses historical data, it is possible that in the past, only Caucasian men in a certain age range were given loans. If the AI learns from this data, it will likely be biased against women and minorities - denying them loans - since that is the pattern in the data used to train it. In this case, practicing AI Ethics would involve recognizing this bias in the dataset and removing ethnicity and gender information, ensuring that the AI learns to use other factors, such as income and past payment history, to make its decisions.
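The data-cleaning step in the loan example can be sketched in a few lines of Python. This is a minimal illustration, not a production debiasing technique: the applicant records are assumed to be simple dictionaries, and the field names ("gender", "ethnicity", "income", "past_payment_history") are hypothetical.

```python
# Minimal sketch of one step from the loan example above: removing protected
# attributes from applicant records before they are used to train a model,
# so the model must rely on other factors. All field names are hypothetical.

PROTECTED_ATTRIBUTES = {"gender", "ethnicity"}

def strip_protected(record):
    """Return a copy of an applicant record without protected attributes."""
    return {key: value for key, value in record.items()
            if key not in PROTECTED_ATTRIBUTES}

applicant = {
    "income": 52000,
    "past_payment_history": "on_time",
    "gender": "female",
    "ethnicity": "hispanic",
}

print(strip_protected(applicant))
# {'income': 52000, 'past_payment_history': 'on_time'}
```

Note that dropping the explicit columns is only a first step - in practice, other fields can act as proxies for them - but it illustrates how choices about the training data shape what the AI can learn.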
How do we train Middle School and High School students in AI ethics?
The talk below by Megan Branch is a good example of how to introduce the topic to kids. We start by exploring how humans make decisions, and then whether these techniques can be taught to machines. In AIClub classes, we explore the different dimensions of AI Ethics - AI Bias, the relationship between AI and Privacy, the carbon footprint of AI systems, and more.
Megan Branch’s presentation
Megan Branch is the COO/CPO of CertNexus and a certified Ethics Specialist. In the video below, she engages students in an interactive exercise to explore how humans make decisions, how we apply the ethical frameworks we have learned, and what it can mean when these decisions are handed over to machines.
Hi, everyone. I hope you're all back from your break. Can everyone who's already back please post in the chat and say that you're back? Just a yes or no - something that indicates you're back.
Okay, I'm getting some responses. Okay, great. So it looks like everyone is coming back. Okay. So let's start the next part of our session. We have a guest speaker, Megan, with us today. She is the COO of CertNexus, which creates educational certification standards, including for AI and data science. Today, she is going to speak to us about AI ethics. Please take it away, Megan.
Thank you. Thank you, Sindhu. I really appreciate it. Well, hello, everyone, it's nice to meet you. I'm going to briefly share my screen here just to keep myself on track for the next 15 minutes or so. As you come back from break, please feel free to use the chat, and do let me know if there are any questions as we go along. I'm going to try to make this as interactive as possible. So I am going to put a link up in the chat here.
Apologies - I should have had this up before.
Alright, so let's go ahead. And as I get that ready - Sindhu, can you see my screen? Okay, perfect. I think you're on mute, but that's fine. Okay, I'm going to open this.
I'm going to put a link on the screen. If you want to participate, you can follow the link to the polls that we're doing. If not - Sindhu, I don't know if people have the ability to just put what they want to respond in the chat, or how you've been communicating back and forth.
Yeah, I think we ask a question in the chat, and typically they respond in the chat.
Okay, well, the questions are actually going to be on the screen, so hopefully that'll be a little bit easier. If you want to interact with the poll, I did put a link in the chat - if it's okay for you to go out into a browser and open that up, you can do that as well, and I'll let you know when you need to do that. So as Sindhu said, my name is Megan. I am the chief operating and product officer at CertNexus. What we do is create certification exams, mostly for professionals, to teach and assess their knowledge of things like artificial intelligence and data science. But we also have some exams around ethics, and that's what we're going to be talking about here today. I'm really excited to talk about this because it is very near and dear to my heart - the only high-level certification that I hold is in ethical emerging technologies, so I'm very proud of that, and it is something I have had the pleasure of working on with some amazing people. Just to give you a little bit of my background, for those of you wondering 'do I want to stay in technology - what if I really like to be creative?': my background is actually in art and psychology, which is why I think I'm drawn to ethics. Most of my work experience has been in technology, but it's a great blend in terms of a career path. So what we're going to talk about today is: set aside everything that you've been learning in these amazing sessions - we want to talk about how we make decisions. Let's start off with something very easy - something that maybe some of you like, some of you don't like, but at least you have some context for. First of all, which ice cream flavor would you like? If you want to go into the app and answer this question, I'm going to navigate over to that, and we are going to launch that poll.
Now, if you want to put your answer in the chat, put your answer in the chat. You can assume that we've got banana, vanilla, strawberry, and chocolate here. But how would you choose - not which flavor, but how did you choose? What made you select that? Is it because you've had it before? Is it because you want to try something new? Is it because it's what your best friend got? How did you select which ice cream flavor? Okay. Great. Getting some good answers here. Let me give you just a few more seconds. Responses on Zoom chat as well. Yep, I'm seeing that. Okay. What's your favorite? Alright, so let's take a look at the information. So first of all, let me know in the chat: was it difficult to determine how you decided? Because it's so instantaneous, right? Something like asking your favorite ice cream flavor is so instantaneous. And we're going to show some results here.
So we have everything from 'it's my favorite' to 'I've tasted it before and I like it' - and most people went with chocolate. So this is all of the information that goes into us deciding. But how long - and you can just put this in the chat - how long did it take you to decide which flavor you wanted?
For most of you who were putting in flavors, it really wasn't that long, was it? Maybe about 30 seconds. Two seconds, one second, maybe ten seconds. Sindhu, you took some consideration - you really thought about that flavor that you wanted. But what I'm asking you to do is to stop, because many of the decisions that we make throughout the day, even if it's just an ice cream flavor, are very automatic. So let's try something that's a little bit harder this time, and we'll take a look at three things here. First of all, what was the data that you looked at? You had only a limited amount of data to go with, right? You could take a look at the different flavor options. What if I had said the first flavor, instead of being banana, was something like coconut or mango? Or if you had different types - some dairy, some non-dairy, sorbet, sherbet? Or, as I said, what about what your friends were getting, or what other people put in the chat? What if it's a really cold day versus a very hot day? These are all of the data inputs that you're using to make those decisions, but you're doing it very quickly, right? And also, what about how hungry you are? That's very personal, because it depends: did you just have a snack on your break? Did you not have a snack? Are you waiting to have a fancy dinner or lunch later? And then the next thing is, we have to think about what we already know. These are learned considerations. Have you had the flavor before, so you know what type of ice cream you like versus what you don't like?
Maybe you haven't had the flavor before, but you've had it in something else - maybe cake, or you've had mango fruit. And also texture - if there were different types of ice cream, you might prefer one texture over the other. And of course, if you have ice cream, how is that going to change your hunger level? These are all of the learned considerations that go into your data, right? And it took you, what, two seconds to answer that? And then finally, you want to weigh the risks. What if the ice cream melts - do I want to get it? We didn't even talk about whether you get it in a cone or a dish, or you get a sundae, or one scoop or two scoops. But also, what about if you get vanilla but your best friend gets chocolate, and she might be a little more apt to veer towards people who are doing the same things as her? What if you don't like the new flavor choice, if you decide to go with mango? Or what if you actually do ruin your appetite for that dinner that your dad is cooking for you later? So these are the types of information that inform all of our decisions.

Alright, we're going to do one more quick one here, and we'll just do this in the chat. So on the left, we're going to say, is your grandmother. On the right is a group of your friends, and you have one hour of time to spend with them. Who are you going to spend time with? And - the question I want you to answer - how are you making this decision? What factors go into making your decision? Think of things like: when was the last time you saw your friends? What is the activity that you're doing with each of the groups? So in the chat, put in some of the considerations - how are you making this decision?
'Grandmother, because my friends wouldn't be online.' Okay, that's great. 'I have not spent enough time with my grandmother.' Okay. 'Family is priority.' Excellent.
Any other answers?
'What if you live with your grandmother?' Oh, there we go. Gavi: 'I see my grandma all the time.' Yes - if you live with your grandmother, then that could weigh into your decision, right? So here's another example of risk versus reward. You may not be familiar with the acronym FOMO - the fear of missing out - whether it's because your grandmother has something really fun that you always do, but you're already with your friends, so it's difficult to get to your grandmother; or, on the flip side, it's the fear of missing out on doing something really fun with your friends, because you feel like you always have to spend time with your grandmother. There is also potential future opportunity, right? By spending time with your friends, you may build more rapport with those people; by spending time with your grandmother, you may be able to establish a better relationship with your family, and it might make the rest of your family very happy and make your living situation a little bit easier. So all of these considerations go into this decision. All right, one more decision - and this one is a tough one. Excellent: 'My family already visits my grandparents all the time, but I haven't seen my friends this summer.' Yep - so the time you have spent with each of these groups becomes a consideration. However, there are other things that you're thinking about as well, and that is the impact your decision is going to have - whether it's on the rest of your family, whether it's on your peer group, whether it's on your social standing, or whether it's financial: maybe your grandmother typically gives you $10 every time you visit her, so you lose out on that money. So now we're going to take it one step further. I don't want you to make the decision, but I want you to take into consideration some of the factors that would help you make it. This is an ethical dilemma.
This is a classic ethical dilemma called the lifeboat challenge. In the lifeboat challenge, you have to select three of these groups or individuals to save out of a group of 13. That's the dilemma - you don't have any other options; you have to make this decision, and you can only save three. So what are some of the considerations that you take into account? You've got a 60-year-old minister or religious leader. You have a woman who is pregnant. You have a doctor. You have a politician. You have an army veteran, an infant, a 27-year-old teacher, a 25-year-old married couple who happen to be on their honeymoon. You've got 11-year-old twin sisters, a 15-year-old nurse, the captain of the ship, a senior citizen who has 15 grandchildren, and, of course, yourself. So, in the chat: what are some things that you take into account if you were having to make this decision? Thank goodness we don't have to make this decision - but if you had to, what are some considerations you would have to make?
'Those too old and too young to be able to swim.' Okay, so survivability - that's a really, really good consideration. 'Who needs help the most?' Okay, great.
'Who would be the most helpful to others for future survival after being saved?' Correct. What about potential - do you take potential into account at all? If you're looking at the old versus the young, you've got somebody who has already lived a significant amount of their life versus somebody who has yet to live theirs. Or if you take experience: you've got somebody who has a lot of experience versus somebody who doesn't. Or you take their profession into account, and how it's going to be needed. 'Saving the pregnant woman, because you could possibly be saving two lives instead of one.' Excellent. So these are all considerations. Now, there's no right answer to this dilemma, which is what makes it an ethical dilemma. These are the types of things that you study in courses like philosophy and ethics. When you go into medicine, you take a code of ethics; when you go into many professions - legal, government - you take an oath, which is a code of ethics that you're asked to follow. But whatever age you are, you already have some ethical principles. These are based upon what your family believes, what your culture believes, and what you feel is right and wrong in terms of being a human being. The words that you see up on the screen are actual ethical principles, and they take into account all of these things. So when you talk about relativism, you compare one person on the lifeboat versus another person on the lifeboat; you value things that are just; you do some reasoning; and you look at personhood - so, with the person who said the pregnant woman is potentially carrying two lives: how much do you value personhood? These are all things. And also you have to take into account guilt - survivor's guilt. If you select yourself, what is that going to look like? What is it going to feel like?
And these are all very human decisions. Now, the other thing that we take into account is bias - and yes, absolutely, that is one thing that we have to take into account as human beings, because it's an emotion that we feel. So how do you separate out things like bias and ethical principles from human emotions like sadness and guilt? That becomes important when we wrap this up here in just a minute. Because when we talk about bias, the bias that we bring in is based upon, for example, valuing somebody in the medical field more than the religious field - where in other cultures, somebody in the religious field may be valued more than the medical field. So the reason we've spent so much time talking about decision making is because now, when we talk about applying this to artificial intelligence, we are asking machines to make these decisions. The data that we use to make those decisions is the same type of data that machines have to be given - and you, as a human, have the ability to select it. So think about things like: how diverse is your data set? How accurate is your data set? How much volume - how much data do you have? Do you have errors in your data? And then we also have to talk about things like making predictions. So I'm going to go forward here, because we look at this picture, and some people think it's a cute picture; some people think it represents bad behavior, and this little girl is being naughty, taking the lipstick out of her mother's vanity. If we reward a machine for making a prediction - whether it's 'cute' or 'naughty' - that's how it learns, right? And that's how we learn as humans. Those are the principles that incorporate ethics in AI. Those principles have to be built into not only the data, but the algorithm that makes the decision - and into consistently evaluating how the machine is learning.
So: is the machine learning in a way that aligns with our ethical principles? And who is benefiting from all of these decisions? Because a decision in one culture may be appropriate while in another culture it is not. So who is your audience? It's about consistently monitoring across the entire ecosystem of machine learning and artificial intelligence, and these are the ethical standards that we look at - there are a lot of standards out there. I know I've wrapped up pretty quickly, but I do just want to make sure that, as you're starting to develop and code and create these applications, you're making considerations about the data that you're pulling in, how you're implementing machine learning solutions, and how you're teaching the machine to learn. We as humans still have an incredible amount of emotion and decision making that we have to teach our machines. So I don't know if there are any questions, or if there's even time for questions, but it's always fun to go through this and start thinking about it, and I'd be happy to share anything that I have with the group.
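Megan's checklist for the data a machine learns from (diversity, accuracy, volume, errors) can be made concrete with a small audit. The Python sketch below computes each group's share of a labeled dataset - one simple way to spot an unbalanced dataset before training. The labels, borrowing the "cute vs. naughty" picture example from the talk, are invented for illustration.

```python
from collections import Counter

def group_shares(labels):
    """Return each group's fraction of the dataset - a quick balance check."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical labels people assigned to the lipstick picture from the talk:
ratings = ["cute", "cute", "cute", "naughty", "cute", "cute", "cute", "naughty"]
shares = group_shares(ratings)
print(shares)  # {'cute': 0.75, 'naughty': 0.25}

# Flag any group that falls below a chosen threshold of representation.
underrepresented = [g for g, share in shares.items() if share < 0.3]
print(underrepresented)  # ['naughty']
```

A check like this catches only one dimension of Megan's checklist - representation - but it shows how auditing data before training, rather than after deployment, puts the human back in the loop.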
Thank you, Megan, that was really nice and very interactive. It was very interesting for me personally as well - thank you so much. One question I have: as you said, in different cultures a certain kind of bias might be considered okay - for example, a bias based on gender may be okay in some cultures and not in others. So do you think there is a single ethical standard that could apply internationally?
Yeah, that's a great question. So, a single standard - we're the human race, which means that there are many different ideas and many different ways that we make decisions. Even within a single culture, or a single family, there is no one right standard. So what has to be done is there has to be leadership and decision making along the way to evaluate what is going to be least risk for the organization or the culture. In some cases, that may mean pulling the decision away from the machines and keeping the human involved in it, or having more human interaction along the way to monitor it. And also keep in mind that cultures change - ideas and cultures shift. So is your algorithm reflective of what's changing in the culture, and meeting the needs of the culture either way? So there's no one standard. There are some wonderful tools and some wonderful standards out there, but you have to find a standard that aligns to your culture, to the regulations of the country that you're living in, to your organization, and also to you as a developer. There's a lot of conversation around whether developers, machine learning engineers, and data scientists feel ethically aligned with the companies that they're in, which is why you're seeing a lot of shifting and moving around that. So it's a consistent, cyclical process.

Thank you. Thank you so much. I think we are good on time. Thank you so much for speaking, and for all the insights. I'm sure the students, when they work on their projects, will keep in mind what you said about data and how it influences their predictions. Thank you. Fantastic. Thank you so much, and good luck to everybody. Thank you, Megan. Thank you. Bye bye.
Megan gave this talk at AIStars 2021.