How Can Artificial Intelligence (AI) be kind? : Dr. Yim Register

By JP Flores in faculty

September 22, 2020

In this episode, I interviewed Dr. Yim Register (they/them). A recently minted DOCTOR. Per their website, they are an NSF GRFP Fellow and got their PhD from the University of Washington (UW) Information School. At UW, they studied the ways that AI algorithms can cause harm, and the best practices for identifying and remedying such algorithmic harms.

Transcription

Transcribed by Micah Hysong (he/him)

JP Flores (he/him): What’s up y’all? It’s your host, JP Flores, and welcome to From Where does it stem?

Yim Register: So hi, I’m Yim (they/them/he) and I, let’s see, I’m a PhD candidate.

JP Flores (he/him): Oooh

Yim Register: He always gets so excited when we can use that. So if you don’t know, you start as a PhD student, and then usually, when you pass your master’s, or something like that, you get to be a candidate. So I’m a PhD candidate.

JP Flores (he/him): But I put PhD candidate in my bio when I started, not knowing that.

Yim Register: Me, too. Me, too. And then someone was like, you’re just a student.

JP Flores (he/him): You’re not a candidate yet.

Yim Register: So I’m a PhD candidate at the University of Washington. We call it U Dub, for the W. And I’m in the AI field, which I’m sure we’ll talk all about.

JP Flores (he/him): What is AI? What does that stand for?

Yim Register: AI. I should, I should come up with something besides artificial intelligence. Yeah. So you know, everyone in the universe studies AI these days. So I’m not special, but I really like what I do. But okay, so starting where I was born, right? Okay, from the very beginning. There was a child who was very autistic. Okay. So I was born in Buffalo, New York. Go Bills. And let’s see, there are actually some funny little child stories, like, I brought in a book report about neuroscience when I was nine years old, on the first day of fifth grade. And my teachers were like, what is this? And I was like, I studied all summer about the brain, and I wrote this report on it. And they, like, hung it up. They were like, OK.

JP Flores (he/him): I love that, that’s so cute.

Yim Register: It was really cute. I was like, this is what I’m into right now. They hung it up. Thank you. They probably were like, what is this child? I became like their worst nightmare over the next few years. But you know, that’s all right. So let’s see. I went to college for, I’ll just jump. I just became, yeah, I teleported. So let’s see, I went to the University of Rochester, where I studied brain and cognitive science. I was premed. I wanted to be a neurosurgeon. I am not a neurosurgeon today. So I was premed, and part of the brain and cognitive science program at U of R is, you have to take a programming class. And I was terrified. I was terrified to take that class. I was like, no, I’m too stupid, like, I can’t possibly code. I fell in love. I love to tell the story, because I was so scared and had so many of these conceptions around it, and then I was like, coding is the funnest thing ever. Literally, this is like me doing cheat codes for my Sims. Like, wait, the Control-C to, like, get into my Sims and start giving myself money, that’s what this is. And I, like, really, really fell in love. And I, like, switched my trajectory.

JP Flores (he/him): Did you start with print “Hello, world”?

Yim Register: Yeah, yeah. Even then, yes, I definitely started with print “Hello, world.” I will say I had this affinity of, like, I can print whatever I want.

JP Flores (he/him): I love that.

Yim Register: Yeah. All of my code would be like, welcome to the Wizard Magic Code Box. Like, I don’t know, I was playing D&D or something, you know. I was just really into the fact that I could print whatever and make the computer print it. That’s amazing. OK, so, so I’m balancing, you know, I’m studying the brain, but all of a sudden I’ve fallen in love with coding. So I’m like, OK, quick, how do I make that a thing? I start taking the, like, artificial intelligence track of, you know, this kind of cognitive science computing track at U of R. And I’m, you know, I’m set to graduate, but I’m quickly trying to take as many classes as I can in AI. You know, I’m like, oh, this is awesome. So after that, I’ll just give the big overview: after that I worked in a computation and language lab, shout out Koala. And that was where I first learned that you could, like, use computation to, like, model human thought. And we were working with babies and kids and monkeys. So I worked there for a bit. And then I started applying to PhD programs. I did not originally want to go into a PhD program.

JP Flores (he/him): Yeah, you were premed.

Yim Register: Well, I was premed. I threw, you know, I threw.

JP Flores (he/him): You threw that out when you found coding.

Yim Register: I, I was actually very sad that I, like, threw away that dream. So if you ever feel that way, but you’re like, but I have to follow this thing, you know, like, it’s OK. And I still, I found a way to take what I wanted from premed, which was this love of science, into what I do now, right? So I’ll tell you a conversation that I had with the lab director at the time. I was like, I don’t wanna get a PhD. I don’t wanna go do that. And he was like, listen, you’re like, you’re like a kid that’s like, you know, 16, that wants to drive a car, but you don’t know how to drive a car yet. And you could, like, maybe drive it around the block a little bit, but like, you can’t really do much with that until you learn how to drive a car. I’m like, what are you talking about? He’s like, but if you wait, if you wait, like, you know, to learn how to drive a car, like, then you could, like, deliver pizzas and, like, make money. And I was like, what are you talking about? I told him years later that that’s what made me go get a PhD. And he was like, I don’t remember that at all. But the analogy was, I could go get a job right out of undergrad, you know, but I wouldn’t really have the depth of knowledge that, like, clearly I wanted. I’ve, you know, loved science and loved discovery ever since I was young. So, like, he was like, go get a PhD, like, really delve into what you want to learn about, and then, you know, then you can deliver pizzas.

JP Flores (he/him): I’m just glad that that stuck with you because that would have confused me. I would have been like, no, I’m not delivering pizza.

Yim Register: Wait, I don’t want to deliver pizzas. What are you talking about? I also, for the record, I think at that time I did not have my license yet. I got my license in my late 20s, so I was even more like, do I become, like, what is happening? Because yeah, at the time I’d even had a consulting offer from IBM and I was like, I’m going to go do that. Yeah, I know. Awesome. I was like, I’m going to do that. I decided to get a PhD instead. Now I’m very happy that I did that. I’ve been very unhappy that I’ve done that many times along the way. But right now, in this moment, I am very happy and feel very, like, grateful for the experience. So that leads us here: I am at UW, and I assume you can ask me more questions about that journey, because that’s a whole other journey in itself.

JP Flores (he/him): Yeah. Before we continue on to that, though, I’m curious about your transition. So personally, I can tell you right now, I am the worst person at math. And when I first started coding, I was like, oh, there’s so much math involved, there’s no way I could possibly succeed in this. I’m not going to lie, I’m doing bioinformatics, computational biology now, and I don’t know what artificial intelligence is like. Is there a lot of math involved? Like, what is AI actually? And did you have to learn math to do it?

Yim Register: OK, so I will also preface with I was very scared of math. There is a math fear.

JP Flores (he/him): And chemistry, but we’ll talk about math right now.

Yim Register: Yeah, yeah. You know, I am also finally in a place where I feel confident enough in my math skills, actually, that I can admit all the times that I was awful. Like, I will remember the 28 that I got on a calculus exam for the rest of my life. Not out of 29; out of 100. At the same time, I remember being so fascinated with derivatives. Like, I actually have this very geometric approach to math that I enjoy. What I needed was good education, basically.

JP Flores (he/him): Did you mean teaching, like, you needed a good teacher to make it relevant to you, or what was the...

Yim Register: Yes, so I needed that. I also, like, you have to understand, some of us come in with such a stigma around math. It’s an identity. It is, you know, like, we need to have seen our parents do that, or, you know, like, this fear of math, like, talk about something that’s passed down, right? And then just, you know, I look and sound feminine, like, it was in my mind that I don’t belong there. You know, I don’t belong. I’m not a chess genius, even though as a child I loved to play chess. I loved to, like, do all of these, like, super mathy cool things. And somewhere along the way, it became part of my identity that that didn’t belong there. You know, that wasn’t for me. I couldn’t do that. I did, I did, like, a special essay in high school that won an award, about the golden ratio and, like, the history of math. And you have to understand, I’m still like, well, I can’t do math. I can’t do math. Math is hard, too hard. You know, I can’t do math. And to this day, I still, like, I’m, you know, prepping for interviews and things like that, and I’m like, well, well, I’m not that good. I’m not that good. You know what happened? Yeah.

JP Flores (he/him): It’s not a deficit though. Like you don’t need to like be the world’s best mathematician in order to do what you do right?

Yim Register: So I’ll get there. What I will say is, what I have to remind myself is, like, would I speak to a child that way? Would I speak to a child of, like, you can’t do math, so don’t try, like, you have to be this kind of way to be able to be in this field? Absolutely not. My one friend, she likes to say we all have a superpower to, like, bring to the table. We all have a superpower to bring to the table, and we need to be able to claim that. Like, being able to learn is a superpower. Another thing I just heard the other day: I was walking down the street and I heard a mom talking to her kid, and I don’t know what they were talking about, but she said, the amount of knowledge you have right now says nothing about how smart you are in general. You’re just learning right now. And I was like, this mom just helped me. Like, I mean, I’m walking, I’m like, you’re right, you know. Like, she’s talking to, like, a five year old, and I’m like, the amount of knowledge I have right now is not representative of how smart I am in general. I’m just learning. So I will say, one thing that helped me is bravery around math. So yeah, there’s lots and lots of positions where you don’t have to do math at all and you’re still so valuable. And I really crave doing stuff that I can’t do.

JP Flores (he/him): No, it’s a, it’s an internal curiosity, right? Like, that’s why we’re doing a PhD, is, is, I like the idea of having to do math. I just know myself so well that, like, I’m not good at it, but I know I can be, right?

Yim Register: Yeah. For me it’s fueled completely by spite. Just kidding, there is definite curiosity. But I am very fueled by my own internal, you can’t do that. And I’m like, well, watch me do this. And also because it’s a language of communication. So, as we’ll get to, I’m in the AI field, I’m in the machine learning field, I’m in the programming field. And what I study is ethics, responsibility and harm. And to communicate those ideas, like, it’s actually really a defense mechanism. I’ve built up this defense mechanism of, like, well, I can talk math, so listen to me, which is unreasonable. Like, that’s, that’s not fair. That’s not fair, that you have to be able to speak in this language of, like, well, look how smart I am, now you should trust me. Like, actually, my deep care and empathy is also a skill that you should, you know, listen to, because there’s some really great ideas there. So that kind of evolved. Instead of just, hey, listen to me, look, I can do math, it evolved into, oh wow, I have deep knowledge of the two different parts of this problem. I have deep knowledge in both, and that allows me to inform both. Sometimes when there’s outrage about something in the ethics space, I can be like, I actually understand why that’s so difficult in the technical space. And sometimes, you know, when there’s someone doing something technical that’s, like, really ill advised, I can be like, hey, look at this, like, this is causing deep harm. And I only have that because I have both of these perspectives.

JP Flores (he/him): Yeah. That’s so interesting. Can you give us examples of that? I think you probably have a lot.

Yim Register: I will, yeah. Let me. I know. I don’t know how you’re going to edit this. You’ll figure it out. Let me answer one of your questions about what is AI first.

JP Flores (he/him): Yes, yes, yes, yeah. This is your show. I’m just living in it. Keep going.

Yim Register: Yeah, yeah, yeah. So, because now everybody has heard of AI. Well, maybe not everybody, and that’s OK if you haven’t, you can learn it from us right now on our show. But everyone kind of has heard of AI right now, which is very strange as someone who’s been studying AI for, like, a couple years. Like, obviously there are people who have been studying it far longer than me, but there’s definitely people who have been studying it for less than me. So I’m, like, kind of in these two worlds again.

JP Flores (he/him): Yeah, well, I’ll tell you right now, I’m an intern in the Office of Science Policy at the NIH right now. And that is, like, one of the main focuses: how can we, you know, use AI for certain things? So I’m sorry, go back, go back, go back.

Yim Register: No, it’s OK. There’s a large market pressure right now, right? So, OK, AI, artificial intelligence. How it used to be defined, you know, there’s, don’t quote me, there’s so many different definitions, right? But how it used to be defined is some kind of program that is mimicking intelligent behavior. OK, that’s not machine learning, that’s AI. So, you know, these chess programs or things like that. We were like, oh, wow, it’s like a human, you know, or sometimes like how animals learn, or, you know, complex systems that seem intelligent. Machine learning, again, someone’s going to be like, these aren’t the right definitions, but OK. Machine learning is taking in data, finding patterns in that data, finding structure in that data, basically finding how things can link together or are similar, and using that to either make predictions of what will happen in the future, or classify things and label things into different groups. OK, that all gets kind of tricky with what we all consider AI right now, which is, like, ChatGPT and AI art. That’s the thing now. Everything else, out the window. Nobody cares about all of data science anymore. It’s just, can it talk to you and can it make pictures?
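For readers who want to see that definition made concrete, here is a minimal sketch of the loop Yim describes: take in data, find patterns, make a prediction. The data and the model choice are invented for illustration, not anything discussed in the episode.

```python
# A tiny "take in data, find patterns, predict" loop.
# The viewing-hours data and labels below are made up.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours of action movies watched, hours of comedies watched];
# each label: did this (hypothetical) viewer like a new action film?
X = [[9, 1], [8, 2], [7, 1], [1, 9], [2, 8], [1, 7]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X, y)  # "finding patterns in that data"
print(model.predict([[6, 2]]))              # "make predictions" -> [1]
```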

JP Flores (he/him): Well, I didn’t know that was a thing, that there are things outside of those two. I’m learning so much right now.

Yim Register: Definitely. Thanks. So, and the thing is, people don’t always call it AI. So for instance, Zoom blurring your background right now, that’s some kind of AI that is detecting a face and a body and, you know, changing the background, right? You know, I always try to start with thinking of, like, Instagram filters and Snapchat filters. Like, that was actually really early, like, you know, using facial recognition to do something kind of fun. We’ll get into why facial recognition is a total shit show. But also, one of the first, like, big AI algorithms that started to make money was LinkedIn’s People You May Know. OK, so a recommender system. This recommender works through a network of, oh, you’re connected to these people, well, they’re connected to this person, so maybe you should be connected to this person. Boom, like, you’re doing machine learning, you’re doing AI, you’re using this data to make this prediction or this classification of, this is someone you might want to know. Then the person is, you know, clicking accept or whatever, and now you have more data in the system and more feedback in the system. OK, so recommender systems. Where else do we see recommender systems? Netflix. OK, so 2009, again, don’t quote me, I just wrote about this, so I’m hoping the dates are correct, but around the early 2000s, OK.
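A toy version of that friend-of-a-friend logic, for the curious. This is only a sketch of the idea as described above, not LinkedIn’s actual system; the names and network are invented.

```python
# Suggest people who share the most mutual connections with a user.
from collections import Counter

connections = {
    "ana": {"ben", "cho"},
    "ben": {"ana", "dev"},
    "cho": {"ana", "dev", "eli"},
    "dev": {"ben", "cho"},
    "eli": {"cho"},
}

def people_you_may_know(user):
    counts = Counter()
    for friend in connections[user]:
        for fof in connections[friend]:
            if fof != user and fof not in connections[user]:
                counts[fof] += 1  # more shared connections, stronger suggestion
    return counts.most_common()

print(people_you_may_know("ana"))  # [('dev', 2), ('eli', 1)]
```

Every accepted or ignored suggestion then becomes new data in the system, which is the feedback loop Yim mentions.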

JP Flores (he/him): I do a fact check, so I’ll, like, edit my voice in.

Yim Register: Great. Yeah, let’s see, 2009, the Netflix Prize. So Netflix was doing this, you know, they mailed DVDs to your house. Amazing. 2009, they’re like, how do we build an algorithm, in this case a collaborative filtering algorithm. So kind of like what I just described, a recommender system where, based on what you’ve watched and what other people have watched, we recommend stuff you like, OK. And we’ve done this for a while with machine learning algorithms, where we do, like, competitions. We love competitions. I don’t know. And even, like, early, early machine learning and AI, it was like, we beat the world chess champion, and, you know, we made a self-driving car that drives around this racetrack the best, or, you know, Watson goes up against Ken Jennings on Jeopardy!. We love it. We just love being like that, and we don’t even know who we’re rooting for. We don’t even know who we’re for, right? We’re like, yeah, we beat him. We’re like, no, we beat him. Great documentaries on all of these things, if you’re interested, you can go watch about, like, AI games and stuff.
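As a rough sketch of what collaborative filtering means here: find the viewer whose ratings look most like yours, and suggest what they rated highly. The ratings matrix below is invented, and the actual Netflix Prize systems were far more sophisticated than this.

```python
# Bare-bones user-based collaborative filtering on a made-up ratings matrix.
import numpy as np

# rows = users, columns = movies; 0 means "hasn't rated it yet"
ratings = np.array([
    [5, 4, 0, 1],   # you
    [4, 5, 5, 1],   # a viewer with similar taste
    [1, 1, 5, 4],   # a viewer with different taste
])

def recommend(user, ratings):
    others = [u for u in range(len(ratings)) if u != user]
    # cosine similarity between this user and every other user
    sims = [ratings[user] @ ratings[u]
            / (np.linalg.norm(ratings[user]) * np.linalg.norm(ratings[u]))
            for u in others]
    nearest = others[int(np.argmax(sims))]
    # suggest movies the neighbor rated highly that this user hasn't seen
    return [m for m in range(ratings.shape[1])
            if ratings[user, m] == 0 and ratings[nearest, m] > 3]

print(recommend(0, ratings))  # -> [2]: the similar viewer loved movie 2
```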

JP Flores (he/him): Well, I’m thinking about the AI stuff that’s competing against surgeons. So, can we make something that’s better than surgeons?

Yim Register: Getting there. This is awesome. You’re leading me right up this hill. Yeah. So, OK, so with these kinds of competitions and things, yeah, OK, we see, like, self-driving cars, OK, robots, you know, all right, that’s AI. We also have just data systems making predictions. OK. That’s more where my kind of expertise is: just, like, data science in general, which encompasses these machine learning kinds of tasks. But for instance, allocating resources, that’s a machine learning problem. Allocating, like, insurance amounts, right? So we’ll get into that too, because there’s some bias there. You know, approving people for loans, you know, predicting the market price of a house, even, like, some admissions metrics into colleges and things like that. OK. So any kind of, like, metric these days is probably using what we call AI. Now, as of today, right now, what we call AI is, again, pretty much ChatGPT, right, and generative images. So Gen AI, generative AI: it’s producing something new using these algorithms from the past. This is all, it’s all built on top of itself. So we had large language models, we had, look, we can do this kind of text analytics and things like that. Like, we had image stuff, image classification. We used something called adversarial networks, which are actually two networks that go up against each other to try to create the best image, things like that. We’ve had that for a while, but now they’re kind of just hitting so commercial. I’ll be totally honest with you, I don’t see many products that bring a ton of value. And maybe I could be convinced on that. What I see instead is, like, market pressure and fear of, like, we got to have AI. We got to have AI in this. I would love to be convinced otherwise. Like, I would love to be like, hey, look at this thing that we made that, like, really helps people, or really, you know, fixes this problem, or really does this. I’m sure that does exist. And, like, please let me know if your AI product is really helping people and you really believe in it. Like, I need some more of that kind of joy, because right now I see it a little bit like, I just related it to fast fashion, OK? Just producing and churning out, like, AI for this, AI for that. We need AI for this and AI for that. And it’s, like, you know, kind of like cheap representations of a human being that make all these mistakes and are using exploited labor, for very cheap, you know, to accomplish something trendy, right? Again, please prove me wrong, please, because I actually am, like, genuinely, I want to see more examples. You know, I have seen some companies do some things where they’re really carefully considering what they’re creating. I love to see that.
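One of those everyday prediction systems, sketched in a few lines with entirely hypothetical numbers: estimating a house price from square footage with a plain linear regression.

```python
# Predicting a market price from one feature; all figures are invented.
from sklearn.linear_model import LinearRegression

sqft = [[850], [1200], [1500], [2100], [2600]]
prices = [190_000, 260_000, 310_000, 420_000, 500_000]

model = LinearRegression().fit(sqft, prices)
estimate = model.predict([[1800]])[0]
print(f"estimated price for 1,800 sqft: ${estimate:,.0f}")
```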

JP Flores (he/him): Well, can you name an example? Because now I’m curious. I want to know what constitutes a good one. Maybe we can talk about this later offline or something, but you’ve intrigued me. I’m now being pulled in.

Yim Register: Yeah, I’m trying to think. I, I just saw an example that I’m not allowed to talk about. Oh.

JP Flores (he/him): That’s fine. That’s fine.

Yim Register: I’m trying to think if I can think of one. Oh, gosh. See, that’s the problem: the fact that neither of us can think of examples of a good AI product right now.

JP Flores (he/him): Yeah. So, so would you say that LinkedIn’s AI stuff is good, or the Netflix recommendation? But that’s, I guess that’s a different type.

Yim Register: Right. So that’s rewinding, rewinding back to these algorithms before the, like, Gen AI boom. I think there’s lots of value there. I also think there’s lots of mistakes and lots of harm. OK? So that’s also, like, I study the balance of that, right? You know, for example, this is less an AI issue, but could be an AI issue, where, you know, there was something with LinkedIn’s algorithm that was, like, chucking out international candidates even if they’d be, like, willing to move, things like that, right? It’s like, oh, you just don’t meet the criteria, like, we’re not even going to, you know, put you into this pool or whatever. Another famous, famous example is, like, Amazon’s hiring algorithm. OK, so if you’ve heard of this, then you know what I’m about to say. But I will say, Amazon did not use this algorithm. To their credit, they, like, saw this issue. It was published about, this was, you know, a big problem that I’m about to describe. Oh, teaser: they didn’t use this. But basically, they were trying to build something that could look through resumes faster, right? There’s huge demand, people are applying, there’s thousands and thousands of people. Is there any way to, like, sort these, you know, CVs and resumes? We do this now. This is happening. And because of the dominant male population of software engineers at Amazon, it threw out female candidates. And this wasn’t based on name. This was not based on their names. It was based on even, like, wording differences, or clubs, or all-women’s colleges, or things like that. The algorithm was picking up on that. So even when they tried to get rid of name, right, this happened. Again, not used. That’s, like, a, oops, oh my God, like, the AI, really the machine learning, is learning this pattern, right? It was actually, like, a really good step towards looking at AI bias and AI issues, right? Like, these mistakes, you know, occur, and if you don’t fix them, or if you’re not looking for them, that’s where you’re really in trouble. I think, you know, more often people do want to fix these, like, really egregious issues. It’s that they’re not even looking for them, or not even realizing that they’re happening, usually due to lack of diversity at the table, right?
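A synthetic illustration of the proxy problem behind that Amazon story: even with gender removed from the features, a correlated column (here, a fabricated flag for attending a women’s college) lets a model reconstruct the biased historical pattern. Every number below is made up to expose the mechanism, and this is not Amazon’s actual system.

```python
# Dropping the sensitive column is not enough when a proxy remains.
from sklearn.linear_model import LogisticRegression

# features: [years_of_experience, attended_womens_college]
X = [[5, 0], [6, 0], [4, 0], [5, 1], [6, 1], [4, 1]]
# historical "was hired" labels reflecting past bias, not merit
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)
print(model.coef_)  # the proxy column carries a strong negative weight
# Two candidates with identical experience get different outcomes:
print(model.predict([[5, 0], [5, 1]]))  # -> [1 0]
```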

JP Flores (he/him): Right. Well, remember that one thing, yeah, I remember that one Gen AI thing where they asked for pictures of professors and, like, all of them were white. Do you remember that?

Yim Register: Well, like, I don’t know if you link things in, like, your show notes?

JP Flores (he/him): I do

Yim Register: Yeah. Great. I’ll send you some articles. There’s an article that does a deep dive into some, like, Gen AI stereotypes and bias for different jobs, right? So, CEO, you know. I can even, I’m going to try to pull it up. This is important.

JP Flores (he/him): Wow, I wish, I wish you were in Chapel Hill. I kind of want to just talk about this all day now. Now I’m like, I need to know.

Yim Register: Here we go. This is so you can link this. Basically, there’s a very clear skin color bias and gender bias for different jobs, right? And OK, this is making pictures, right? First of all, yeah, racist, sexist, bad, right? But OK, where do we use these pictures, right? Like, you know, maybe someone says, OK, yeah, so what? Right. But, you know, maybe these become brochures for your college. Maybe these become slides, all the time. You know, I will also add, very thin bias as well. Able-bodied, thin. Another thing that happens with, like, these filters over your face, when you’re using Gen AI stuff to do, like, AI art, is, like, the sexualization of the images, particularly for women. Like, all of a sudden you have big boobs, like, right, right. Like, big eyelashes, and, like, OK, great. Yeah, I see, AI sees us just like the world does, you know? OK, so anyway, where was I? I think I need a question.

JP Flores (he/him): No, I was just talking about examples of good AI, but now I’m wondering about bad AI, or, you just linked to that. But can we talk about what you do at UW? Like, what your research is in?

Yim Register: Yes, OK, thank you. So basically, what I do is embedded ethics education in AI and machine learning coursework. So there is kind of this issue that ethics is seen as something that gets, like, chucked at the end. It’s like, OK, we’ve done this whole technical course, and now, just so you know, bad things can happen. And this happens in a lot of fields, actually. I feel like this must happen in biology.

JP Flores (he/him): Yeah, we’re teaching a class on it.

Yim Register: It’s like, by the way, also, everything you learned has the potential to be really bad, you know. And there’s this great paper, it’s titled “More Than ‘If Time Allows’: The Role of Ethics in AI Education,” right? Always chucking it at the end, like it’s this side note, and, you know, it’s not just, oh, we ran out of time. There’s something that happens with that. It, like, others this, you know, this concept, right? It’s like, also ethics, also think about that, when really it needs to be this, like, core part of everything you build. Engineers often know this, OK? So there’s also work to show, like, you know, data scientists in the field, especially if they’re working in something like medicine, like, they’re very clear about, like, this is impacting real people, right? I don’t know to what extent, but I like to believe that most people are not just like, I make evil computers, right? Like, most people do care about what they put out in the world. But how we teach it is also in this way where we’re like, also, you know, oh, by the way, you know, and if you run out of time, oh, that’s the one that gets dropped, right? That’s the thing that gets dropped. It is my perspective that in our, like, model building, like, when you’re building some kind of AI, you know, prediction or classification or whatever, ethics can come in in each lesson, every lesson. This is, like, not a totally novel idea. I’m just, for my dissertation, contributing some strategies of how to do that, particularly also in, like, a trauma-informed way. So what I mean by that is, there are students in your classroom who have experienced harms, either from these systems, or have experienced, you know, something that you’re talking about in the classroom. We have a lot of, like, datasets and examples that we teach on that are actually pretty, like, dicey. So even when we do these, like, medical examples of, like, let’s predict who has cancer or not, who dies or not, I really need everyone to take a pause and, like, think of the students in the room, like, right there, who now have to, like, do this little exercise where they have to compute if someone’s going to die of cancer or not, right? Who do they know? Who have they lost? Who do they love, right? That just matters so much to me. And sometimes it baffles me when it’s like, we have to put those things together. We have to put this together. So another thing, too, you know, we talk about, like, you know, homelessness, incarceration, like, you know, drug addiction, things like that. These are not, like, the other people. These aren’t just numbers. These are not just numbers. And also, like, and maybe it is the case, maybe we’re so in the ivory tower that we’re so separate from these issues. But, like, I’m not. I was going to say that that’s not it. And, like, it is because I’m there, and because my peers are there, that we get to, like, speak up for ourselves and be like, hey, this has happened to me, or I’ve experienced this, or, you know, things like that. And, you know, my dissertation work, at least, contributes some ways to teach while keeping, like, the technical in mind, while also, like, protecting your students and including their worldviews. So I really do straddle, you know, both technical and, like, human care. Like, that’s what I do. And I’ll give you another example, just based on something that you just said. I like to. So we’ve all seen, like, a scatter plot, or many of us have seen a scatter plot.
I won’t assume, but a lot of us have seen a scatter plot: points on a 2D grid, often with, like, a linear regression line, right? And let’s say that there’s a really far outlier in that data, right? You have a point way out over here. As a data scientist, what do you do with it? Throw it out. OK, so, you know, that’s one intuition, right? It’s like, well, we don’t need that, right? And there’s many mixed takes on this, you know. Real data scientists will really, you know, have a protocol for this. But often, I think, we’re even taught, maybe early on in high school or something like that, like, OK, you know, throw out the outliers. That’s a human being. Often, you know, maybe it’s a product, maybe it’s a, you know, whatever. Maybe it’s a gene, you know. But somewhere down the line, that’s a human being’s experience, right? That doesn’t mean, you know, keep it. If you’re doing insights and things like that around the averages, and you’re, you know, trying to make some kind of predictive model, you’re not always going to take into account the outlier, but it will teach you something. And looking into it, not only is it good data science, right, to just be like, oh wow, the range on this thing can actually be, like, up to this high, and we should make sure that our real-world, you know, test set includes that kind of thing. It’s also just thinking, like, OK, the scope of our problem may have just changed, right? Someone could have this kind of experience, and I try to hit home: that’s a human being’s experience. There’s another thing, too, like, part of a contribution I’m trying to, you know, move forward. In the machine learning world, this is how we often set up our problems. OK, so we have representation: the model that we choose to, you know, explain this data, to work with this data. It’s usually kind of problem specific and data specific. You know, if you’re using language, you want to do this. If you’re doing classification, you want to do this. Here’s a representation. And then we have optimization. So we want to make it better, we want to make it work. We want it to have good accuracy and good precision and recall, and we want to make it do really well. Then you have evaluation, so checking that it did good, right? Optimization sometimes also is, like, an algorithmic thing. Maybe you change part of the algorithm, you tune the parameters, whatever. Evaluation, you’re trying to see how well it did. I would love to add another step there, which is, at the end, impact. Yeah.
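To make that scatter-plot scenario concrete, here is a sketch with invented numbers: one far-out point visibly drags a least-squares fit, which is exactly why examining the outlier, rather than silently deleting it, is good data science.

```python
# A clean y = 2x trend plus one wild outlier, fit with and without it.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([2, 4, 6, 8, 10, 12, 14, 60])  # the last point is way off

slope_with, _ = np.polyfit(x, y, 1)               # includes the outlier
slope_without, _ = np.polyfit(x[:-1], y[:-1], 1)  # drops it
print(round(slope_with, 2), round(slope_without, 2))  # 5.67 vs 2.0
```

Either slope might be the right one to report; the point is to look at what that data point is before deciding.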

JP Flores (he/him): Yeah, definitely, yeah. Because people, I feel like, would say that evaluation and impact might be the same thing, right? So...

Yim Register: Right, evaluation. I’m talking just the technical sense of, like, you know, you’re checking how accurate your model is. You’re checking, you know, how many false negatives did you have? How many false positives did you have? You’re checking the prediction, you know, accuracy, or residuals, blah, blah, blah. Buzzwords, buzzwords. Look, technical. But then, right, so that’s fine, to keep that technical, that’s fine. But then we need this added thing of, like, launching this in the world: what did it do? For a lot of my students, that might be business. OK, that might be business. And this is what data scientists do, right? Then they give this little talk about it. But really thinking through, if we put that step in for every model that we do. Let’s go back to that recommendation algorithm from LinkedIn: someone’s opportunities are being chosen through that algorithm, right? Someone’s future, like, I’m very dramatic, but, like, someone’s future is riding on the decisions of that algorithm, right? If you upped the, like, show me a random person just a little bit more, maybe they would get discovered and be the next Justin Bieber, right? Exactly, right. So, like, getting my students to think along those lines is one of the most beautiful things I get to watch. It’s really, like, putting these two worlds together, where it’s like, yes, I want to be a technical data scientist that can take in any data and, you know, know these algorithms and do these things and accomplish this stuff. And also, I want to be in community with my fellow human beings on this planet Earth. Call it dramatic. I love it, I love it.
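Here is that purely technical evaluation step in miniature, with invented labels; the proposed fourth step, impact, is exactly the part with no off-the-shelf function.

```python
# Counting the evaluation quantities mentioned above on made-up predictions.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:", accuracy_score(y_true, y_pred))     # 0.75
print("false positives:", fp, "false negatives:", fn)  # 1 and 1
# "Impact" would ask: who is affected when this model is wrong, and how?
```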

JP Flores (he/him): So I’m assuming you can’t be the only person trying to do this, right? What has been done in this space in the past? What’s been going on in this space now? And what direction do you think it’s going in?

Yim Register: Absolutely. So interesting, yeah. Yeah, absolutely. So I am by no means the only person doing this. I would start looking towards, like, Joy Buolamwini’s work, and Ruha Benjamin, Safiya Noble, Cathy O’Neil. There’s Weapons of Math Destruction, Cathy O’Neil, yes. I would start around there. So there’s actually a documentary you can watch, I think it’s on Netflix, called Coded Bias. That was a really excellent documentary that kind of covered the start of discovering that facial recognition algorithms did not detect dark skin, right? So this amazing discovery in Gender Shades. This was 2018, fact check me, which also just goes to show how recent some of this stuff is, right? And people have been working on this for far longer than that. I know, like, I don’t want to, you know, say, like, this is where it all started, right? I’m also, like, ignorant in some ways. And there’s many scholars, like, it takes many scholars putting paper after paper after paper until, you know, the avalanche starts, right? But those are some really, like, notable, incredible women who have really, like, dived into the racism and sexism of algorithms. Let’s see. And also, you can check out the Algorithmic Justice League. And then these days, there’s also even, like, you know, you can kind of, like, tell on the algorithm. So, like, there was a Twitter challenge to, like, report bias. So I don’t know if you remember the, like, cropping algorithm issue. Oh, yeah. Yeah, yeah, yeah. Yeah. So, you know, you’d post a picture, and if there was a Black person and a white person, it would crop on mobile to show the white person, like, almost every time. Right. And that kind of spurred this, like, uh oh, like, we need some kind of way for people to report these issues. I’d like to see more of that. I would love to see, like, explained self-advocacy pathways. That’s what I call them, self-advocacy pathways, right? Or advocacy pathways. It also needs to have some kind of explanation. Not that people, like, don’t get it. Actually, people are very, very smart. Using these systems teaches people so much more about AI. Like, talk to any creator on social media, like, they know the algorithm more than anybody, right? But having these kinds of, like, pathways for advocacy, with an explanation, with actual recourse, would be great. So another paper of ours that’s coming out this year, we worked with some advocates on Instagram who have been discriminatorily content moderated. OK, content moderation: long history of, you know, issues there, including moderators who were labeling, you know, all of these kinds of awful images and text and things, had PTSD from that, and were paid super low wages. There’s a lawsuit going on against Meta; I don’t know where that stands as I’m interviewing them. But it also, you know, goes to show that a lot of these systems are built upon exploited labor, low paid, you know, quickly produced labels, in order to build these systems that then make so much money, right? So anyway, these content creators, you know, banned for being anti-racism educators, banned for speaking about trans rights, banned for speaking about sexual assault. Banned for, actually, we have one who’s a fat model, and people just reported her so much, because they didn’t want to see her, or, you know, whatever, that it started to add up into violations on her account. And then we have, you know, a political activist, a political journalist now, and people just giving her death threats, things like that.
And then she gets banned on the platform, whereas people giving the death threats don’t, right? So our systems are broken. The problem there is there’s no recourse. Yeah.

JP Flores (he/him): So how do you, how do you propose we combat this? Is this more of, like, add these issues into the curriculum and make sure people are educated? I feel like you are the type of person to be good at getting followers to, you know, help make that change, right? Like, you are a teacher that I’m sure people admire. I follow your work on social media and it is awesome. And I’m sad, I don’t know if you’re trying to go into academia. It doesn’t sound like you are. But one, I wish you were. Two, how do we make this change actionable? Like, do we start adding this in every high school curriculum, middle school? What are your thoughts there?

Yim Register: This is awesome. OK, so first I’ll give you my politician answer, because I can’t solve all of these problems. Like, people love to ask me this, like, what do we do? And I’m like, oh no, I studied too many problems and I don’t have the answers. So here’s my politician, but also genuine, answer. There is something called multiple points of intervention. OK, I learned this from the environmental sustainability movement. So when we want to save the earth, there are multiple points of intervention. OK, so some of us will chain ourselves to bulldozers and not move and get arrested for protesting. Some of us will work on policy. Some of us will simply, like, fix people’s clothes so that they don’t have to buy new ones. You know, some of us will do research. Some of us will, you know, start boycotting. Like, there are all of these different points of intervention, OK? You cannot do it without all of those points of intervention. And none of us can do all of them. That took me a while to realize, especially in, like, the early years of my PhD. I was like, I’m going to solve AI ethics. Like, OK, you’re 20, right? I was like, this is so important, right? And, you know, that’s because I needed to learn humility. I needed to learn collaboration. I needed to learn all of these other things about myself, about the world, you know? But instead, now I encourage people: find your lane and stick to it, right? Like, find what you, first of all, enjoy. Like, have some joy in what you do, and also trust that your part, however small, is needed. If you jump between all of them, and I still do this, right, like, I take my own advice. But, you know, if you jump between all these different points of intervention in a problem, you actually aren’t there when your part is needed, right? So, so yeah, you know, education. I also learned from my public health education in undergrad, I did some public health courses: education alone is never enough. This was, like, drilled into our heads, right? So, you know, you want people to quit smoking or cut back on smoking, right? You can tell them over and over that, you know, this is doing this to your lungs, or this is doing this. But what you also need, you know, yeah, you need those horrific posters. Maybe you need the prices to be changed. Maybe you need, you know, fewer smoking areas. What you also really need is, like, peer, you know, support and understanding. And, you know, so, like, you’ve got policy, you’ve got friendship, you’ve got education, you have doctors, like, you have all of these points of intervention. So my political answer is still, like, we do it together. Like, that’s how we fight AI injustice: together.

JP Flores (he/him): I mean, you’re not wrong, right? You’re not wrong.

Yim Register: But I’m not wrong, exactly. And the way to do that is, like, pick what you’re really good at. You know, I do think so. Like, you know, if I’m making my lane sort of clear, it’s, like, sending these future data scientists out into the world with this kind of perspective. OK. So I’ve actually had the pleasure of working with code.org. They, you know, produce materials and activities for teachers to teach code in their classrooms, and also for kids to learn on their own or with their families. And I’ve gotten the pleasure to work on some of their AI ethics and, like, generative AI lessons. I can send those too. But what we infuse in these lessons is, like, how do we think about the impact, and how do we, like, use our story, our identities, our joys, our passions, you know, what we care about, to look at a problem, right? And how do we work together? Like, it all sounds so, like, you know, My Little Pony. But that’s OK. I also, I call all of my students future data science leaders of the world. Yeah, that’s what I call them.

JP Flores (he/him): So cute, so optimistic.

Yim Register: Future data science leaders of the world, like, what will you do with that?

JP Flores (he/him): It’s, it’s funny you mentioned that, because I remember seeing your slides for something. I think I literally clicked on something you put out on Twitter, and, oh, it was for the UN, I think. I think it was for...

Yim Register: It was not for the UN, it was not for the UN. It was for high schoolers. Model UN.

JP Flores (he/him): Yeah, same thing. Same thing in my head.

Yim Register: Little bit of a different level. I was not testifying to the UN. Maybe someday.

JP Flores (he/him): I think UN was in there, but I didn’t know if it was the actual UN.

Yim Register: It was Model UN, which was really cool, to see students engaged in, like, these different issues. So my talk there, I titled it “The Future of AI Can Be Kind.” Some days I believe that, some days I don’t, but every day I try to recenter myself to really believe in that. OK, because I see people like me, I have conversations like this, you know. I see my students, like, all they want is the pathway to be able to do data science for social good. They want that pathway, right?

JP Flores (he/him): That’s what I want.

Yim Register: Yeah. Like, there’s enough of us who want that that I think it can become a reality, right? And it can become a reality in small ways. So even what I was talking about earlier, with, like, we got to make AI, it’s trendy and we got to do it: having one, you know, senior data scientist be like, for what? Not even for what, but, like, OK, how do we make this really help us, or help people? You know, having that one person say that, and then, oh, you’re right, how do we make sure this and this and this, right? And now, all of a sudden, you have a team that actually believes in this product, first of all, and is, like, worried about it and wants to see it succeed, right? And I don’t know, I don’t know, in every case. Like, sometimes it really is just for productivity, right? Like, you know, or creating your slides better, like, automatically doing your resume or something like that. In those cases, when it’s something like that, you know, after thinking about justice and possible injustice that can happen, I shift the focus over to joy. So, like, what else can we build that’s fun and, like, you know, enjoyable? Those are, like, my two pillars, right? So, like, I have joy and justice. OK: if you’ve taken care of justice, your responsibility now is joy.

JP Flores (he/him): That’s awesome. I want to quote that so much. Yeah, yeah, that’s so right. So what have people thought of? What brings joy to them?

Yim Register: I actually have a shout out for a person that I think you should have on this podcast. A friend of mine is an entrepreneur who has started a company called Revere XR. And what she does is she restores history through VR. So she takes neighborhoods, particularly Black neighborhoods that have been gentrified, gets the stories and experiences and memories from elders who live there, and transforms them into, like, VR and AR experiences.

JP Flores (he/him): Oh, that’s so cool.

Yim Register: Yeah, we just did a hackathon at UW. She, like, ran this amazing hackathon, like, all inclusive, no matter your tech background, you can come do this, right? And each team got an elder to work with and interview. And, like, here are these stories of, like, how did the Black Student Union get started at UW? Like, what was, you know, the fire department like in 1950, whatever, right? Like, they got all these stories of Seattle, including about, like, Jimi Hendrix, and, you know, we have some famous musicians out of Seattle. Kurt Cobain. Yes, yeah. And each team got someone to work with, and then, like, created this experience to, like, preserve Seattle history and memories. So check out Revere XR. Yolanda Barton is incredible. Other forms of joy. Let’s see, I’ll tell you a really silly idea that I have had, that I need someone to build, because I don’t have time. I have wanted to do a little AI project for my cat or dog where you, you know, put the camera over their face and you label, like, their different moods, and then have it shout out, like, I am happy, I am hungry. Like the dogs from Up. Yeah. That’s so funny, I would like to see that. Oh, I also, I ask Alexa for fart noises a lot. I don’t personally have an Alexa, but whenever I’m over at someone’s house that has one: hey, Alexa, fart for me. Yeah. Oh, yeah. Oh, I could do that for, like, an hour. I’m not even kidding. It’s really funny. I get deep, deep joy from that. Also just, like, creative expressions, like editing your videos, you know. I dance, skate, things like that, you know. And I want to see more joy. I think also, like, we are in a time where justice needs a little more attention, right? Joy is beautiful, yay, let’s aim for that, please. And also, what did I say? When you’re finished with justice, which you’re never finished with justice, but, like, when you’ve, you know, done your due diligence around justice, your responsibility is joy. But I think we’re in a time right now where we could focus on that justice. And hopefully, you know, here’s another thing, too. It’s really easy to get depressed when you work in both, when you’re alive as a human being, but also when you work in the field of, like, algorithmic harm. Like, something bad happens every day, and people are creating new bad stuff every day, right? And I’m like, please, listen to me, right? And that can get really depressing. One thing that I focus on is, like, this: what does the future look like? What does this imagined world look like? So there’s a beautiful quote from Ruha Benjamin, who wrote, sorry, Race After Technology, and who is also in the documentary Coded Bias, about imagining the worlds you cannot live without while you dismantle the ones you cannot live within. Yeah. So, people are smarter than me. I just quote them.

JP Flores (he/him): Well, the fact that you know that quote, just, like, in your head, and you just pull it out. That’s wild.

Yim Register: So code.org just used it as part of their AI ethics lesson, which is, like, students will imagine a world. You’ve learned all this stuff about how AI has caused harm. Now it’s your turn, as future AI scientists and future data science leaders of the world and future policymakers and, you know, future artists and advocates of the world: what is your imagined world? Maybe I could even ask you. Like, you’re in medicine and in, like, biology and in these fields. Like, what is a world that you imagine? It doesn’t have to do with technology. What’s a better world that you imagine?

JP Flores (he/him): A better world that I imagine is a world where the scientific workforce actually does reflect the diversity of the world, or our country, let’s say, on a smaller scale. I think a big motivation for me to do this PhD wasn’t necessarily the bioinformatics or, like, trying to solve cancer. It’s always been, if I get a PhD, I somehow get some power to empower the next generation of scientists that hopefully would look like me. So that is the world that I would imagine.

Yim Register: And you’re doing it. That’s such a beautiful answer. Thank you. We’re doing it. Yeah, well, we also have to remember, like, yeah, sometimes we don’t need to cure these, you know, diseases, even though I’m sure your work, again, is adding to this, you know, body of knowledge to do those kinds of things. But it is also just showing up. Sometimes it’s just showing up and using your voice and using what you do have, and your experiences, to be present, to be a part of the room. I can tell you right now, like, when I am part of a room, things shift. So, you know, even around, like, gender, for instance: so much data in the AI world is split into two genders. And then, you know, I’ve gained enough confidence at this point to be like, hey, hey, what about me, right? And I’m never trying to be, you know, like, harmful. I’m not trying to be like, you’re evil, right? But I’m like, well, what will you do with my data point? Because, you know, even in surveys these days, they add in, like, OK, gender, now there’s more inclusive options. But the bucket that you will get out for the nonbinary people will be smaller. So you’ll have class imbalance, first of all. So what are you going to do? What are you going to do with class imbalance? Like, here’s where that technical and advocacy comes into play, right? I know the words “class imbalance,” but, right, so those kinds of things. So me being in the room, you being in the room, we make this difference.
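A quick, fabricated illustration of why that small bucket is a technical problem too: under heavy class imbalance, a model can post high accuracy while never getting the minority class right.

```python
# Invented survey counts: 985 binary responses, 15 nonbinary.
import numpy as np

labels = np.array(["binary"] * 985 + ["nonbinary"] * 15)
predictions = np.full(1000, "binary")  # a "model" that ignores the minority class

print("accuracy:", (predictions == labels).mean())  # 0.985 -- looks great
minority = labels == "nonbinary"
print("nonbinary recall:", (predictions[minority] == "nonbinary").mean())  # 0.0
```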

JP Flores (he/him): Right, right. Well, it’s a valid thing to think about, because in biology, you know, an analogous example is a lot of the reference genomes that we’re using, right? When we do sequencing, a lot of those were made with people of white ancestry, right? We don’t have good representation of the diversity of the world. And here we are trying to do all these sequencing things when we don’t have a good representation, we don’t have a good start, you know. And, to what you just said, the knowledge that’s built off of that: this happens in machine learning systems a lot, AI systems a lot, right?

Yim Register: Exactly. And the knowledge that is built off of that, and the predictions that are made, are only as good as the data that we’ve given it. And yes, it can be optimized algorithmically, and you can change some things no matter what the data is. But really, it does come down to the data. Yep, yep. You don’t have diverse data, you have harmful predictions. They call that garbage in, garbage out.

JP Flores (he/him): So, quick question, because I know you have to leave soon, but quick question about my listeners and what they can do, right? So let’s say there’s a listener here. Let’s say they’re a high school student, and they’re very interested in the concept of artificial intelligence and machine learning, and they want to be that next data scientist that will help change the world and make it better. And they’re in it for the social good. How would you advise them to get started? Is it classes online? What should they look up? If they have access to Google, what would you recommend?

Yim Register: OK, first of all, let’s see, I’m speaking to a high school student. First of all, please take care of your mental health during this time. Number one, I don’t even care about AI. Hold on. Like, number one, it’s hard applying to colleges. It’s hard being in high school. You know, I had a really hard time. I continue to have a really hard time in life. OK, so, number one, before you’re thinking about any kind of career: you’re good enough exactly as you are right now. You told me I’m speaking to a high schooler, so that’s what I’m going to say. And now let me also say: how cool are you? Like, thinking about social good and thinking about data and, like, you know, wanting to get involved. Like, you must be so driven and so cool, and you must have already had experiences that make you care about the world and be passionate about the world. Like, I’m sorry for whatever you’ve experienced, and also, I’m so excited for you to use it for good, OK?

JP Flores (he/him): I love it. I wish you were an educator of mine. I wish you taught my classes.

Yim Register: Yeah. So now what? I’ll tell you. So first of all, your passion is your superpower, OK? So channeling towards social good, that can mean so many different things. It can mean anything from trying to, you know, make things more sustainable for the environment, to promoting good mental health in your peers and yourself. It can mean, you know, resisting medical racism. It can mean getting more people like you into STEM. It can mean, you know, looking at unhoused populations and helping people get on their feet. It can mean so many different things. OK, find that thing that lights you up. And honestly, if you’re at the point where you want to, like, start looking into research about those things, you can always look into things like AI4, something like that, right? And people have probably already worked on some of these things. We can link some things also in the show notes below. So I would really start with code.org. I’m going to plug code.org. They have great intro-to-AI content. So they have AI 101, they have some games you can play, they have some, like, getting started with coding in Scratch. It’s, like, a drag-and-drop language. And I would just say, don’t be scared. Don’t be scared. You can do this, you can do anything. Don’t be afraid of the math. I know everybody says that, you can do anything, like, OK, whatever. But, like, really, you can. Again, remember my 28 on my calculus exam, and now I’m getting a PhD and studying machine learning. OK? Like, you know, I was not the best math student, and I’m still here, and, like, what I have to offer is really important, and what you have to offer is really important. Get started at code.org, and that’ll link you off to, like, different, you know, different areas of knowledge that you can study. You can watch the documentary Coded Bias. I don’t even get commission, you guys, like, it’s that good. Yeah, it’s just good. And start reading, and just remember those pillars of, like, justice. But do not sacrifice your joy, OK? And find a friend to do it with.

JP Flores (he/him): Yeah. Community is always great.

Yim Register: I didn’t even know what all that meant, though, honestly. Like, I hear a lot of this, like, buzz stuff. It’s like, community, you can do anything! And I was like, I don’t know, like, I’m just depressed. Like, what? Now I get it. Now I get it.

Posted on:
September 22, 2020
Length:
51 minute read, 10668 words
Categories:
faculty