In this episode, Rob May had a conversation with Karen Hao, an AI reporter at MIT Technology Review. Tune in for insights on AI, what inspires her to write about the intersection of technology and society, her thoughts on the way mainstream media covers AI, and much more.
Rob May: Hi, everyone. And welcome to the latest episode of AI at Work. I’m Rob May, the Co-Founder and CEO of Talla. And today, my guest on the podcast is Karen Hao, an AI reporter at MIT Technology Review.
Karen, welcome to the program. Why don’t you tell us a little bit about your background before MIT Tech Review? How did you end up there?
Karen Hao: Sure. Thank you for having me, Rob. So, I actually started off my career as an engineer, not as a journalist. I graduated from MIT with a mechanical engineering degree and the first job I took was out in San Francisco as an application engineer for the first startup that spun out of Google X.
It was a software company that was trying to build a platform for urban planners, essentially, to help them make better decisions on where to place buildings, where to place bus lines, things like that. That company pretty quickly dissolved because it was very mission driven and it didn’t really survive very well in the hypercapitalist Silicon Valley scene.
During that period when I was watching the challenges and struggles of this mission driven company trying to survive, it sort of dawned on me that I wasn’t really cut out to work in the private sector. So, I wanted to go into the social sector. I really enjoyed writing. And I thought I might try journalism out. So from there, I ended up doing several journalism internships and eventually ended up as a staff reporter at MIT Technology Review.
It was a bit of a haphazard route, but it actually worked out quite well in hindsight because I love thinking about the intersection of technology and society. I love writing and explaining technology to people in an accessible way. Given my background as an engineer, I have the ability to do that confidently, and speak with researchers, and kind of get to write about technology in a deeper way than I might have been able to had I not had that training.
RM: Did you initially join to write about AI or were you a general technology reporter? Did they assign you the AI beat as a focus or did you choose to go into that?
KH: I joined as an AI reporter. That was the position that I applied for, but prior to that I was a general tech reporter. I was working at Quartz, which is a digital media company. Interestingly, at Quartz, I had this dual role where I reported on tech and I was also a data scientist on their product team. Through being a data scientist, I took a lot of machine learning courses. That sparked my interest in actually covering AI. That’s why I ended up seeking out the Technology Review opportunity.
RM: Tell us a little bit about how you find information for that and how you decide what to cover. What are some of the topics in AI that are most interesting to you these days?
KH: That’s a good question. The general mission that I have for “The Algorithm” is to be as accessible as possible in introducing technical concepts to people who have only heard of AI but don’t know what it is, while also introducing social and cultural concepts to hyper-technical people who have been in AI for a very long time but don’t necessarily have an interest in the humanities. My mission is really to try and create a bridge between these two camps, which I think should be talking to each other and learning from each other.
So in general, when I’m searching for stories, I’m always looking for things in both those buckets– really technical things that I think the average reader should know about because they will start affecting their lives, and AI-ethics and AI-and-society type stories that technical researchers need to really start thinking about as they’re developing the technology.
In terms of the day-to-day searching for stories, first of all, I subscribe to half a dozen AI newsletters myself and another half dozen general tech newsletters. I try to skim through them every week to see what is happening. Those newsletters span everything from general-audience ones to very technical ones.
Then I also go to a list of sites that I trust in terms of their AI coverage– The New York Times, Wall Street Journal, Bloomberg, Wired, sometimes The Atlantic has really great stuff, Axios. And then I will scan through all of those headlines and see what people are talking about. And then the last thing that I do is I follow a lot of researchers on Twitter. I am constantly looking at what types of papers they’re buzzing about. Are there new breakthroughs that are happening?
Through those three, I kind of pick out what the broader themes are and then pick different topics to write about.
My newsletter follows a biweekly format. I have an issue that comes out on Tuesdays, which is more of a newsy type thing, what is happening in the industry, what is some new research that’s interesting and exciting. Then the Friday issue usually takes a step back and looks at some of the concepts that have repeatedly popped up again and again that I think are worthy of a deeper dive, particularly for people who are being introduced to AI completely with a clean slate and are trying to understand it better.
RM: Now, as somebody who has an interest in ethics and was an engineering major, I am curious: did you have to take any type of ethics course as an engineering major? Because I was an electrical engineering major, and I did not. Only the civil engineers who built bridges and buildings were required to take an ethics course. I’m just wondering what the trends are these days, and whether you think there’s not enough of that, or too much of it. Have we laid the groundwork so that the people who are doing this AI work even know enough about ethics to understand the problems and the possible outcomes and have a discussion about it?
KH: We are absolutely not doing enough. The answer to your first question is no. I was not required to take an ethics course as an engineer. I actually took an ethics course in high school. And I read a lot of ethics and philosophy. I have a lot of friends who are interested in these subjects as well. So, we have our own little debates and conversations.
That’s been my informal education, but, no, MIT’s engineering departments don’t require ethics, unless perhaps there’s a bioethics course for biomedical engineers. What I’m excited about is two things– the first is that there’s been a really big groundswell where students, and researchers, and professors have started informally organizing to create optional ethics curricula or other means of engaging in ethical learning.
For example, there’s an AI ethics club, or an AI ethics reading group, that started at MIT a year or two ago. It’s a weekly gathering of researchers who are really interested in these issues and want to debate them. Every week, one person will lead a particular topic and send out readings beforehand. It’s a great, informal way for people to start grappling with it.
There are also a lot of other informal pop-up classes that I’ve heard about at MIT, at Stanford, at other universities. That’s one of the exciting things.
Then the second thing that I’m excited about is– at least with MIT in particular because I’m still embedded within this community– the university is opening up a new college, the Schwarzman College of Computing. And part of the effort for creating a college focused on computing and on AI is the ethics of it. So they really want to design a curriculum that will be required and that every student has to take. And it would not just be an ethics course that you tack on to your other technical courses, but an ethics module, perhaps, that is embedded within every single one of your technical courses.
I’m excited about that trend happening nationally and about people finally realizing that this is incredibly important. We definitely have a very long way to go.
I think a lot of the issues that we’re facing in particular are very complicated. And even if you had an ethics background, you could argue about what the appropriate ethical response is, depending on your view of the different ethical systems that people use to make decisions.
What always bothers me is we don’t even have a grounding to have the discussion. I think that’s a problem.
RM: One part of this AI ethics piece is AI bias; can you talk a little bit about that? Back in February, you wrote a really good piece about it. What inspired the piece? And what did you learn from exploring and writing it?
KH: What inspired the piece was that I felt AI bias was suddenly coming into the mainstream. I think one of the big moments was when AOC started saying that this is an issue. It created this rush in the media where everyone started covering: what is AI bias? What is this thing that technology reporters have covered for a long time, but that most mainstream outlets have not necessarily engaged with?
I felt like there was a bit of a lack of grounded understanding of what the issue actually is. As I wrote in my piece, we shorthand our explanation of AI bias. It’s like, oh, the data is biased, the data is biased– and that’s where the conversation stops.
Why are our facial recognition systems biased against people with dark skin? Oh, because there aren’t enough people with dark skin in the data. That’s one part of the story, but I really wanted to write a piece to explain how there are a multitude of ways that bias can really enter a system.
It was very inspired by this great foundational research paper that was written by two researchers– Solon Barocas and Andrew Selbst who are really experts within the fairness and AI bias field– where they laid out, I think it was, six or eight different actual ways that bias can be introduced.
Data bias is definitely one of those things, but there are other subtle things, like how you actually frame your problem as a machine learning problem. That can in and of itself introduce bias, because if you are optimizing your machine learning model to maximize profit, that is a subjective goal that you’re asking the model to pursue. And that might not necessarily be aligned with another goal, which is perhaps to not price-discriminate against low-income people. Because those goals aren’t aligned, you could unintentionally create a system that does perform that discrimination.
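Karen’s point about objective choice can be made concrete with a toy sketch (all customers, features, and numbers below are invented for illustration, not from the interview): grid-searching a single price to maximize profit alone picks a price that excludes every low-income customer, while the same search under a fairness constraint does not.

```python
# Toy illustration: how the *objective* you optimize, not just the data,
# can introduce bias. All numbers are hypothetical.

# (income_group, willingness_to_pay) for a handful of made-up customers
customers = [("low", 5), ("low", 6), ("low", 7),
             ("high", 20), ("high", 25), ("high", 30)]

def profit(price):
    # Revenue from everyone willing to pay this price
    return sum(price for _, wtp in customers if wtp >= price)

def low_income_served(price):
    low = [wtp for group, wtp in customers if group == "low"]
    return sum(1 for wtp in low if wtp >= price) / len(low)

candidate_prices = range(1, 31)

# Objective 1: maximize profit alone
best_profit_price = max(candidate_prices, key=profit)

# Objective 2: maximize profit subject to serving most low-income customers
fair_prices = [p for p in candidate_prices if low_income_served(p) >= 2 / 3]
best_fair_price = max(fair_prices, key=profit)

print(best_profit_price, low_income_served(best_profit_price))  # 20, 0.0
print(best_fair_price, low_income_served(best_fair_price))      # 5, 1.0
```

Nothing in the data changed between the two runs; only the goal handed to the optimizer did, and that alone determines whether low-income customers are priced out.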
That was the thought behind the piece. I think it’s fantastic that people are starting to talk about it in a more mainstream way, but in order to solve the problem, we have to recognize the problem. I wanted to expand the nuance of the conversation to really encompass all of the different ways that we’re actually biasing our machine learning models.
RM: Now, you’ve given this a lot of thought. You have a technical background, and you work for a pretty technical publication. Do you ever get frustrated with the way the media sometimes covers AI? What are some of the big things that really frustrate you about that?
KH: I do. I have a couple of pet peeves. One of my biggest pet peeves– and this is a pretty minor one, but it does irk me every time– is when people say AI and machine learning, because it perpetuates this misconception that AI and machine learning are separate things. But machine learning is a subset of AI, so it’s almost like you’re saying fruit and apples.
A lot of people don’t realize that deep learning is derivative of machine learning, which is derivative of statistics. When you’re able to trace the lines between those different disciplines more clearly, it makes AI less nebulous and more accessible, I think. It also helps you start understanding the problems behind AI better.
The other thing that I struggle with is that people sometimes conflate algorithms with AI. Not all algorithms are AI. I notice that reporters who don’t necessarily have a good understanding themselves might automatically assume that any company using an algorithm is an AI company. They don’t necessarily do the due diligence to see if that company is actually doing machine learning. Again, I worry about the way the public starts to learn, or mislearn, what AI actually is.
The last thing that I also kind of struggle with a little bit myself is conflating AI with robotics. I think every publication struggles with that. It’s like AI is the software, robotics is the hardware. Robotics also sometimes uses AI because hardware can have software within it.
I don’t think that distinction is necessarily made very clear a lot of the time. Tech Review is definitely guilty of this: in the way that we organize our topics on our site, robotics is a subtopic under AI. That’s because we know that the public commonly associates the two, so we assume that if someone wants to read about robotics, they would naturally go to the AI tab. It does kind of create that false correlation, or false equation.
Then the other thing is AI is like a really abstract concept to illustrate. Sometimes when we try to illustrate ideas, we will fall back on using a robot to illustrate AI concepts and, again, that ends up conflating the two.
RM: Interesting. Then being out there and seeing a lot of things, and writing about the research that’s coming out, and seeing what the companies are doing on the cutting edge, what do you see out there as the most interesting opportunities, or things happening, or the thing that you’re most excited about?
KH: I’ve been very deeply embedded in the research world in the last couple of weeks because I’ve been reading lots of research papers. One of the things that I think is the next frontier in AI research is causality. What I mean by that is that machine learning is really good thus far at finding correlations within data, but it’s not at all good at finding causation. There is a bubbling movement within the AI research community to push the boundaries and figure out whether there are ways to design machine learning systems that can tell us why things happen, not just that they happen.
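The correlation-versus-causation gap she describes is easy to simulate (a hypothetical confounder example, invented for illustration): two variables driven by a common cause look strongly correlated in observational data, yet intervening on one does nothing to the other.

```python
import random

random.seed(1)

# Confounded world: a hidden cause Z (say, season) drives both
# X (ice cream sales) and Y (drownings). X never causes Y.
def observe(n=10000):
    xs, ys = [], []
    for _ in range(n):
        z = random.random()
        xs.append(z + random.gauss(0, 0.1))
        ys.append(z + random.gauss(0, 0.1))
    return xs, ys

def corr(xs, ys):
    # Pearson correlation, computed by hand with the stdlib only
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

xs, ys = observe()
print(corr(xs, ys))   # strong positive correlation (close to 0.9)

# Intervention, do(X): we set X ourselves, independently of Z.
# Y doesn't budge, because X never caused Y in the first place.
def intervene(n=10000):
    xs, ys = [], []
    for _ in range(n):
        z = random.random()
        xs.append(random.random())         # X forced by the experimenter
        ys.append(z + random.gauss(0, 0.1))
    return xs, ys

xs2, ys2 = intervene()
print(corr(xs2, ys2))  # correlation vanishes (close to 0)
```

A model trained only on the observational data would happily predict Y from X and be badly wrong the moment anyone acts on X; that is the kind of failure the causality research she mentions is trying to address.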
There was a talk given just last week by a researcher who works at Facebook AI and is a professor at NYU on how the company might approach doing that. It was a really buzzworthy talk. That’s something I’m really excited about, because a lot of the problems with AI do come from the fact that we are using correlations to predict things. If we could use causation to predict things, that would be a much more robust approach.
Another thing that I love writing about is generative adversarial networks, commonly known as GANs. People may have heard of this concept from a few months ago, when a piece of art that was auctioned off at Christie’s turned out to have been created by a GAN.
I just find GANs really, really fascinating, because they are basically algorithms that are exceptionally good at mimicking the data they’re fed. So if a GAN is fed lots of dog images, it can create hyper-realistic dog photos. And if it’s fed van Gogh paintings, it can create paintings that look like they could have been made by van Gogh.
You can also extrapolate this beyond images. Recently, a research nonprofit fed a GAN a bunch of Beethoven pieces, I think, and it was able to produce a piece that sounded like a symphony in the style of Beethoven.
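This is not a real GAN (real ones pit two neural networks against each other), but the adversarial two-player chase at their core can be caricatured in a few lines, with each "network" reduced to a single invented parameter: a discriminator tracks where the real data lives, and a generator chases whatever the discriminator currently accepts.

```python
import random

random.seed(0)

# Stand-in for real training data: samples clustered around 4.0
real_data = [random.gauss(4.0, 0.5) for _ in range(200)]

g = 0.0   # generator's one parameter: where it places its fakes
d = 0.0   # discriminator's one parameter: where it believes real data lives
lr = 0.1  # step size for both players

for step in range(200):
    # Discriminator step: nudge its belief toward a real sample it just saw
    real_sample = random.choice(real_data)
    d += lr * (real_sample - d)
    # Generator step: nudge its output toward what the discriminator accepts
    g += lr * (d - g)

print(g)  # the generator's fakes end up near 4.0, mimicking the data
```

The interesting part is that the generator never looks at the real data directly; it improves only by responding to the discriminator, which is the structural idea (adversarial training) that makes actual GANs so good at mimicry.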
It’s just like one of those fascinating topics that I think has a lot of potential to be really powerful and positive in the world. But it also could do a lot of harm, because it’s really easy to falsify images, video, audio now. And given the political context that we’re currently in where there’s just a lot of misinformation, a lot of distrust of digital media, that could really be the straw that breaks the camel’s back and create a lot more distrust in our society.
RM: Excellent points. The last question for you is something that we ask a lot of guests at the end. There’s a big concern amongst some researchers, some of the media, and some average citizens about killer AI. Some people believe we’re on the verge of it and it’s going to be here in a few years. Others believe it’s not going to be here in a few years, but we should still be thinking about it. And others say it’s probably never going to be here and we don’t really need to worry about it, or it’s so far off that we shouldn’t think about it. I’m always curious, as somebody who’s very active in the field, what’s your perspective on that issue?
KH: That’s a good question. Well, I think the public’s conception of what killer AI means is very different from what researchers mean when they talk about killer AI. The public might think of robots or androids that are coming to kill you, something like Westworld. That is not the actual risk.
I think the more immediate risk is designing machine learning systems that start making decisions about life-and-death things. Both the US and Chinese militaries are definitely investing a lot in trying to use AI to enhance their systems. People are worried that they will enhance not just, you know, beneficial systems, but also weapons systems, and make automated weapons that trigger without a human in the loop. I think that is a genuine concern.
I don’t want to be apocalyptic, but I do feel good about the fact that there are a lot of AI researchers who are talking about this, elevating this issue, and making sure that the people who have the power to make these decisions don’t just sleepwalk into a reality where we suddenly have autonomous weapons at our disposal and are engaging in autonomous warfare. That said, the apocalyptic narrative aside, there are also other ways that AI can be really damaging and potentially life-threatening.
There’s a concept in machine learning called adversarial attacks. These are essentially ways that you can trip up machine learning systems so that they fail spectacularly.
An example is that you can plaster specially designed stickers onto a stop sign, and a self-driving car will suddenly see it as a speed limit sign that says go 45 miles per hour. That is another way you can end up in situations where hackers could theoretically perform these adversarial attacks and cause whole fleets of self-driving cars to go haywire.
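The stop-sign scenario can be caricatured with a toy classifier (the features, values, and nearest-centroid model below are all invented for illustration; real attacks target deep vision models): a small, targeted nudge to the input crosses the decision boundary and flips the predicted label.

```python
# Hypothetical nearest-centroid "sign classifier" over two made-up features.
# A targeted perturbation (the "stickers") flips its answer.

centroids = {
    "stop":  (0.9, 0.1),   # (redness, digit-likeness) -- invented features
    "speed": (0.2, 0.8),
}

def classify(x):
    # Predict the class whose centroid is closest in squared distance
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

clean_sign = (0.8, 0.2)
print(classify(clean_sign))      # "stop" -- clearly a stop sign

# The attacker nudges the features just enough to cross the boundary
perturbed = (0.8 - 0.3, 0.2 + 0.4)
print(classify(perturbed))       # "speed" -- same sign, wrong answer
```

The unsettling part, which carries over to real systems, is how small the perturbation is relative to the change in the model's output; the robustness research she mentions aims to make that boundary much harder to cross.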
In terms of where we should be focusing our energies in this killer AI narrative, that’s the most immediate, most relevant thing. And fortunately, there are researchers working on these kinds of issues and really pushing to make machine learning systems more and more robust, so that they no longer have those kinds of vulnerabilities.
RM: Well, Karen, thank you for being on the program. I think this is really, really great. And to the audience, thank you for listening. If there are questions you’d like us to ask in the future, topics you’d like us to cover, guests you’d like to see on the program, please send those to email@example.com. And we will see you next week.
KH: Awesome, thanks so much for having me.
Subscribe to AI at Work on iTunes, Google Play, Stitcher, SoundCloud, or Spotify and share with your network! If you have feedback or questions, we’d love to hear from you at firstname.lastname@example.org or tweet at us @tallainc.