
AI at Work Episode 43: Data Science at Facebook and iRobot with Brandon Rohrer

Episode Overview

In this episode, Rob May talks with Brandon Rohrer, Principal Data Scientist at iRobot. Tune in to hear their conversation about deep learning, neuroscience and models, Brandon’s take on whether we will ever be able to deal with “fake news”, where he sees data science shaping the direction of robotics, managing data science teams differently, predictions, and much more.

Listen to every episode of AI at Work on iTunes, Google Play, Stitcher, SoundCloud, or Spotify.

Rob May

CEO and Co-Founder
Talla

Brandon Rohrer

Principal Data Scientist
iRobot

Episode Transcription   

Rob May: Hi everyone and welcome to the latest episode of AI at Work. I’m Rob May, the host and CEO of Talla. My guest today is Brandon Rohrer, a principal data scientist at iRobot who was previously at Facebook. Brandon, welcome. Why don’t you tell us a little bit about what you did at Facebook and then what you’re working on now at iRobot.

Brandon Rohrer: Thanks, Rob. It’s great to be on the podcast. I spent a couple of years at Facebook, working on, among other things, mapping out medium voltage electrical grids through Africa and the rest of the world with the goal of helping to get electricity to people who don’t have it yet.

Super fun modeling problems: a lot of satellite imagery, some deep learning, some signal processing, and also some natural language processing, specifically focused on the problem of text categorization. If you have text and you want to break it into two categories, especially if one of those categories doesn’t occur very often, that’s a tough problem.
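
For readers who want a concrete picture of that rare-category setup, here is a minimal sketch; the example texts, labels, and model choice are illustrative assumptions, not anything Brandon’s team actually used.

```python
# A minimal sketch of two-category text classification with a rare
# positive class, the kind of problem described above. The example
# texts and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works as advertised",
    "shipping was slow but item is fine",
    "how do I reset my password",
    "URGENT wire funds now to claim prize",   # rare category
    "thanks for the quick support reply",
    "free money click this link immediately", # rare category
]
labels = [0, 0, 0, 1, 0, 1]  # 1 = the rarely occurring category

# class_weight="balanced" upweights the rare class so the model
# doesn't just learn to predict the majority label every time.
model = make_pipeline(
    TfidfVectorizer(),
    LogisticRegression(class_weight="balanced"),
)
model.fit(texts, labels)

print(model.predict(["claim your free prize now"]))  # likely [1]
```

With heavily skewed categories, raw accuracy is misleading: a model that always predicts the common class scores well while missing everything that matters, which is why class weighting and rare-class precision and recall matter here.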

If you don’t want to limit yourself to English, and if you don’t want to limit yourself to proper grammar, it gets a little trickier still, so it was great fun as a technical problem. Now at iRobot, I’m on the data science team. We get the exciting job of combing through massive amounts of robot data to try to learn how robots do what they do, and how we can help them do it better.

RM: Is your background math, physics, computer science? How did you get started down this path of machine learning?

BR: The beginning of the path is when I was 13 years old and watching The Empire Strikes Back. I thought that Luke Skywalker’s robot arm was the coolest thing.

At that point, I decided I was going to do robotics. I bought myself a little Commodore VIC-20 and started programming in BASIC. Eventually, when I went to college, I decided that mechanical engineering was the way to go.

I got some degrees in mechanical engineering and went into a grad program in robotics, using that to help with human rehabilitation, both with upper-limb amputees and with stroke patients. In the process, I got to play with a lot of data: dozens of stroke patients working over weeks to make better movements.

That was my first exposure to data and data science. It also got me really interested in how humans do what we do with our brains. We have a terribly hard learning problem with sloppy hardware. There is no software learning algorithm that’s capable of doing what humans do.

I’ve really enjoyed spending the last 20 years thinking about that and making my own moonshot side-project attempts at understanding it better. In the process, I picked up some statistical tools and some machine learning tools, eventually rebranded myself as a data scientist, and then migrated through agriculture into Microsoft, then Facebook, and now back to my original love, robots, at iRobot.

RM: Let’s pull on this thread a little bit of how the brain learns. You’ve looked at a lot of these different models. We have these deep neural network models, the thing that has driven the last wave of interest in AI. But there’s a whole bunch of other things out there. There’s symbolic logic processing. There’s Bayesian stuff. There are evolutionary algorithms.

I’ve read a lot about this guy named Jeff Hawkins who writes about hierarchical temporal memory, and some of that stuff. What have you explored that you like or that you think might be the next trend whenever deep learning starts to hit its crest?

BR: In between, during a decade-long research career at a national lab, one of the things we got to do was dig into neurology and neuroscience: what we know about how the brain works, and some of these models, like hierarchical temporal memory. There are some other popular computational neuroscience models too. The premise is amazing. It’s as if we already have a solution to this incredibly hard learning problem.

We all carry it in our heads. If we can just understand how it works well enough, then we can build one. The trouble is that if you dig down very deep at all, you realize we know next to nothing about the intricacies of the system, of exactly how it works, at least not well enough to recreate it.

You can imagine our information like stars in the sky: we know a bit here, a bit there, and a bit there. And like stars in the sky, you can take any picture you want and connect the right dots to make that picture. For instance, a lot of models focus on the six layers in the cortex, because that’s what we can measure, since it’s on the outside. Some models will focus on certain connections between certain layers.

Other models will play those down and focus on others. They kind of pick and choose to support a theory that’s already there. I came to the conclusion that probably, at least in the near term, the most productive path is to try to build something that does what a human or an animal does, rather than trying to recreate it from scratch from observations and first principles, just because we don’t have enough information to do that yet.

RM: There’s one school of thought that says we don’t fly the way birds fly; we built airplanes that accomplish a similar goal in a different way. I still think one of the biggest ongoing debates in the neuroscience and AI community is: do you have to understand how the brain works? There are people who say yes, and there are people who say no.

I’m an investor in a company that embeds rat neurons into silicon chips, because, as the CEO will tell you, neurons fundamentally process in a different way, and the only way to really build intelligent chips is going to be to embed neurons in them. It’s going to be really interesting to watch this play out over the next few years.

You spent some time at Facebook. Let me ask you the question that I know is on all the listeners’ minds, which is: are we ever going to be able to deal with fake news from a machine learning perspective?

BR: Speaking as a private citizen, with the caveat that I was not working on this problem while I was at Facebook, I don’t know all of the details. From what I know of trying to tease apart truth from falsehood in something that someone has published, or an image or a video or a recording: it’s really tough.

Even if you set aside deepfakes, where someone intentionally alters the signal to make something fake, and where there are forensic tools that can pull that apart most of the time, you’re left with the more common problem, which is disinformation: someone willingly saying something false in order to get someone else to believe or do what they want. That’s really hard, because there are a lot of things that we can’t objectively say are accurate or not accurate.

They’re subject to interpretation and discussion. It’s really hard to say whether someone published something with the intent to deceive or manipulate. That, at the root, is the problem. Now, that said, there’s a lot of low-hanging fruit.

There are a lot of very strong cases of people doing things with the intent to manipulate, deceive, and harm. I personally support all of the effort that I saw my colleagues at Facebook putting into that. I think we can’t put enough effort into it. It’s a social ill of our age.

RM: For sure. Now that you’re at iRobot, where do you see data science impacting where robotics is going, at a broad level? What can I expect from my Roomba? Am I going to get a better Roomba, or an entirely different kind of robot? What’s it going to mean?

BR: I’ll throw out another caveat here. I can’t comment on anything that we have in development. That’s pretty easy, because I’m pretty new at iRobot, so I don’t really know much about what’s going on yet.

One distinction I want to make is that it’s easy to conflate data science with machine learning, where machine learning is taking a bunch of data and making models that can predict data that hasn’t happened yet or that we weren’t able to measure, with the possibility of learning more complex or better-adapted behaviors.

That is definitely a part of it. But a large part of what data scientists do at iRobot and other consumer companies is just trying to answer questions, digging into the numbers: hey, I have this table full of data that I’ve been collecting for a while.

I just want to know, should I add this feature to the robot or not? Like, do people care? Will it help? Will it make it better or make it worse? It’s a really fair question. It seems pretty basic. But there’s just no basis for answering it until you start digging into the data.

A good chunk of what we do is that type of thing: what does the robot need? What do the people supporting the robot need? And how can we take the information at our disposal and get the best answer we can for them?
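
As a toy illustration of that kind of question-answering, here is a minimal sketch of a "should we add this feature?" comparison; the data is simulated and the variable names are made up for this example.

```python
# A minimal sketch: answer "should we add this feature?" by comparing
# an outcome between robots that have it and robots that don't.
# All data here is simulated and all names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated: did each customer keep using the robot weekly? (1 = yes)
with_feature = rng.binomial(1, 0.62, size=400)     # robots with the feature
without_feature = rng.binomial(1, 0.55, size=400)  # robots without it

# A two-sample t-test on the usage rates (a z-test on proportions would
# also work) tells us whether the observed difference is plausibly real.
t_stat, p_value = stats.ttest_ind(with_feature, without_feature)
print(f"usage with: {with_feature.mean():.2f}, "
      f"without: {without_feature.mean():.2f}, p = {p_value:.3f}")
```

The hard part in practice is rarely the test itself; it is deciding which outcome actually measures "do people care," which is exactly the judgment work described above.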

RM: Now, you’ve worked at a couple different organizations. I’m curious, what’s your perspective? A lot of the people listening to the podcast are senior executives somewhere. They’re not super technical. They’re trying to figure out, “I have to work with data scientists. I have to work with machine learning engineers. I don’t understand exactly what they do.” They’re always trying to figure that out.

What have you seen work and not work around meshing data science or machine learning teams with engineering teams and with product teams? Have the organizations you’ve been in embedded them in their own product teams? Has data science been like a separate group? Do you have any opinions on how somebody should think about when to structure that one way versus the other?

BR: Yes, I do. None of my opinions are original; I’ve seen them in print and nodded at them vigorously, so I can’t claim that they’re mine. One that really resonated with me came out not too long ago. Forgive me, because I’m probably going to butcher the pronunciation, since I’ve only seen it written, but Pete Skomoroch gave a talk, recently published, about managing data science and machine learning teams differently than software engineering teams, because the work has more uncertainty. You don’t know exactly how long something will take. Only if you’ve done something similar ten times in the past can you get a rough estimate.

A lot of times with data science, the answer we get at the first step determines what steps B and C are going to be. So, I’ve been asked in the past to write six-month plans for my data science investigations. They’re fiction writing. I understand that sometimes that’s necessary for reporting.

I don’t enjoy it. It doesn’t feel very productive. As someone managing a data science or ML team, the most helpful thing I’ve ever experienced is having a clear goal set: here is a question we want to answer, or here’s a functionality we want to enable. Go. And not being tied to a timeline or to a certain set of tools, because you just don’t have enough information to pin those down at the start of the project.

RM: Do you think data science is going to become more like engineering over the next decade? If I go to my engineering team and say, the website supports 10,000 visitors a month and I need it to support 10 million, there might be some bumps in the road, but they kind of know what they need to do, right? Data science is not always that way.

Sometimes you’re like, we don’t know if we can make this model better. We can try, right? Do you think that’s going to change? Or do you think this is sort of going to be the state of data science for a while?

BR: There are certain things that have already taken a lot of steps in that direction. For instance: I have some data, I want to make some predictions, so what model should I use? There’s a set of tools referred to collectively as AutoML, which tries a bunch of different models on some training data, chooses the best one, and then deploys it for you.
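
In spirit, that try-everything-and-keep-the-winner loop looks something like the minimal sketch below; it uses scikit-learn with an arbitrary candidate list and synthetic data as stand-ins for a real AutoML toolchain.

```python
# Minimal sketch of the model-selection step at the heart of AutoML:
# try several candidate models on training data, keep the best performer.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with cross-validation and keep the winner.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(scores)
print("best model:", best_name)

# "Deploy" = fit the winning model on all available data.
best_model = candidates[best_name].fit(X, y)
```

Real AutoML systems add hyperparameter search, feature preprocessing, and ensembling on top of this loop, but the core selection step is the same.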

That part often gets a lot of attention. But most of the time it’s really the easiest part of doing data science. The toughest part is getting the raw information and using our judgment, usually judgment that’s grown from interacting with the world, from things we know outside of this table and outside of this computer, about what these numbers mean and how they interrelate.

The ability to take that and pull out the pieces that are likely to be predictive relies on things that I don’t know how to code. And given a data set, I don’t know how to predict ahead of time whether that process will be short or long. So my best guess is that we’ll get a little bit better at predicting it, but for the most part, it’s good job security for data scientists.

RM: What are you seeing in this last wave of machine learning popularity in terms of the nontechnical people you work with, and trying to communicate to them what a data scientist does and the key ideas in data science? Is this getting better, or is it getting worse? Are they getting more confused? Or are people starting to understand how to work with data scientists, when to engage them, and what they can do?

BR: This is something I’m personally invested in. I’ve written some blog posts and some course material aimed specifically at educating non-data-scientists on what data scientists do, and beginning data scientists on data science concepts. And it seems that some things are becoming a little bit better understood.

The myth that the amount of value in your company’s data is related to the number of megabytes you have seems to be decreasing in popularity. You don’t hear people pounding big data drums as often. There’s more emphasis on being able to find value in subtlety. If you want good data, you may have to pay to collect it carefully, things like that.

This is still fighting the headwind of the excitement and the passion around AI and machine learning and deep neural networks and some of the cool demonstrations that are done. On the surface, it seems kind of magical. And sometimes that sells really well. It’s fun to write it out that way. Expectations can be set unreasonably high.

In the past, how this has played out is that expectations have soared too high, reality has failed to deliver, and people have gotten disappointed. I hope that it doesn’t swing quite that far this time. On my blog and in the course material I’m publishing, I’m actively trying to spread a grassroots understanding of what the limitations are and, concretely, what’s going on under the hood.

RM: It’s not magic. It is cool, but it can’t solve all of your problems with a flick of the wrist. That’s definitely a good point. What kind of opportunities do you see in the near term around data science, machine learning, and artificial intelligence? At a macro level, what aren’t people looking at or doing with these tools that they should be?

Like, you look out there and say, why isn’t there a company doing this? It seems like an obvious application for it. Or, do you think the ecosystem’s done a pretty good job of filling up sort of the near-term opportunities that are out there?

BR: I have a bias, a prejudice here, in that I love robots. (laughs)

Little physical things. They don’t even have to be robots or look like robots, or even move: things like a thermostat heating a room in a building, or walk signals and traffic lights.

There are lots of opportunities where something has to do a function, it has to do it regularly, but sometimes it has to respond to changing conditions, and the cost of failure is not catastrophic. For instance, with stoplights, as long as you’re coordinating with the other stoplights, if you initiate the cycle five seconds earlier or five seconds later, it’s not going to make a huge difference.

With a thermostat, if the temperature drifts by a few degrees one way or the other, it’s not going to make a huge difference. And every time a human pushes a walk button, has to sit at a stoplight, or adjusts the thermostat, that’s a piece of data. That’s a piece of information.

That’s something that an intelligent method or algorithm could take and say: oh, you know what, I see that this happened. Here’s what time it is. Here’s what day of the week it is. Here’s what the traffic is on the street. Here’s all this information. I’ll just keep looking at it.

Maybe I can find a pattern. Maybe I can anticipate this next time. I think there are a lot of little things like that. They don’t have quite the cachet of self-driving cars, but they are vastly closer to practicality.
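
As a toy illustration of that kind of anticipation, here is a minimal sketch of a walk signal that learns from button presses; the class, the counts, and the threshold are hypothetical choices made up for this example.

```python
# A minimal sketch of the anticipation described above: learn when people
# tend to press a walk button, then pre-trigger the walk cycle when a
# press is likely. All names and thresholds are hypothetical.
from collections import Counter
from datetime import datetime

class WalkSignalModel:
    def __init__(self, threshold=3):
        self.press_counts = Counter()  # (weekday, hour) -> observed presses
        self.threshold = threshold     # presses in a slot before we anticipate

    def record_press(self, when: datetime):
        """Log one human button press as a piece of data."""
        self.press_counts[(when.weekday(), when.hour)] += 1

    def should_anticipate(self, when: datetime) -> bool:
        """Pre-trigger the walk cycle in slots with a strong history of presses."""
        return self.press_counts[(when.weekday(), when.hour)] >= self.threshold

model = WalkSignalModel()
# Simulate a month of 5 pm weekday presses (people leaving work).
for day in range(1, 29):
    when = datetime(2019, 4, day, 17, 5)
    if when.weekday() < 5:  # Monday through Friday
        model.record_press(when)

print(model.should_anticipate(datetime(2019, 5, 6, 17, 0)))  # True: Monday, 5 pm
print(model.should_anticipate(datetime(2019, 5, 5, 3, 0)))   # False: Sunday, 3 am
```

A real deployment would fold in the other signals Brandon mentions, like traffic on the street, but the shape of the problem is the same: cheap data, regular behavior, and a low cost of being wrong.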

RM: You have all the people trying to cross the street to the parking garage right after work, right?

BR: Exactly.

RM: The lights and the walk sign are still on the same timer.

BR: And they’re sitting there. And it says, don’t walk. But there’s no cars coming from either direction.

RM: Yeah, yeah, very cool. As somebody who works in the field, what are your thoughts on sort of AGI and killer AI? Do you worry about this? Do you think we should be looking at safety methods? You think Elon Musk is right? You think Mark Zuckerberg is right? What’s your point of view?

BR: Well, I was going to AGI conferences 10-plus years ago when it was very fringe. It’s really surreal to see it in news articles and as a common term today. At the time, there were people talking about safety and making sure that when it does become superhuman, it’ll be benevolent, and things like that.

I’m less concerned about that for a couple of reasons. One is that even when it gets so good that a computer can beat us at chess without trying, or can drive better than we can, there is still a long way to go, again, with physical interaction and dealing with the variability of the physical world.

We generalize in ways that I think computers and machines are still very, very far away from. We’re on different playing fields, with different types of tools for solving different types of problems. I don’t see any danger of us being replaced, or of a robo-apocalypse where we’re penned in and harmed.

But here’s the thing: robots and machines and AI are already deeply embedded in our lives and control lots of aspects of them. And most of the time, we’re really happy to have them do so. If you’ve ever been in a dark movie theater and someone misplaces their phone, it doesn’t matter what plans you have or whether you had reservations.

Everything gets put on hold. Everyone’s down on their hands and knees in the popcorn, looking for this phone. In that situation, who’s calling the shots? (laughs) It’s the machine. And we’re happy to do it, because in partnership with these tools, we can do and feel and experience more than we could otherwise.

My biggest concern is that, as always, people and entities with a disproportionate amount of power will use this as another tool to widen the gap between their power and that of people who are not empowered, and to do harm, to do the bad things that humans have always done to each other, but faster and better. Anything we can do to avoid that, I think, is worth our time.

RM: Speaking of robots and machine learning models, testing these things is often much more difficult, given that they can learn and adapt, than testing a software program where, every time I put this in, this should come out. With a machine learning model, maybe something different should come out, because it’s seen different data after I’ve put it out in the world.

What made me think of it was self-driving cars. I’m actually an investor in a company that’s betting a little bit against level-five autonomy, because I think it’s going to be harder than most people expect to test that and get some level of confidence that cars all around the world are at level five.

I think we forget sometimes that the world grew up and was built around us, and driving is no exception. There are things a human would never think to do that a machine learning model might. There are, like, 10 million different things you could have to tell it: oh, and by the way, don’t ever randomly jerk the steering wheel left, or whatever.

What do you think about the state of the art for testing these models? And is that anything that concerns you in certain types of fields? Or are we getting better at it?

BR: In software engineering, especially for certain types of code and functions, you can probably write a set of tests that covers all possible corner cases, and then you know it works. But I didn’t grow up as a software engineer. Coming at it from a mechanical-systems standpoint, you can think of something like a fighter jet.

A bunch of very smart people sat around and tried to foresee all the ways it could break, and they tried to test those. But they could never quite find them all. So there were these very brave test pilots who would take these things out and fly them slowly, then a little faster, then a little faster; they’d turn them upside down and gradually push the envelope.

Testing was an expensive and risky proposition, because the real world is so complicated. This is to underline what you said: I 100% agree that anyone who undertakes to build something with the potential to harm a passenger, a pedestrian, or another car needs to take this testing very seriously.

One of the things that makes me a little nervous: you can put in a bunch of rules, like, hey, never hit anything that looks like a person, and just hard-code that in. Then there are the oddball cases, where you have to make a decision between stopping or going. I don’t even think that’s the hardest thing.

The hardest thing is: what about when a camera gets hit by a heavy blob of snow, and all of a sudden the sensor you’re relying on isn’t there? Or what if you just come across a condition you’ve never seen: you’re driving in Phoenix, and for the first time, you’re in a dust storm. Again, what humans are really good at is taking all of our past experience and making a reasonable guess.

Deep neural networks are not good at that. They’re really good at taking exactly what they’ve seen and doing something close to it. If they haven’t seen it before, all bets are off. So I think it’s feasible, and it’s actually a goal that I personally will enjoy working toward in the long term. But we’re not there with our current toolset.
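
That inside-versus-outside-the-training-range behavior is easy to see in a toy experiment; the sketch below is a made-up illustration using a small scikit-learn network, not anything from the conversation.

```python
# A minimal sketch of the out-of-distribution failure described above:
# a small neural network fits the training range well but falls apart
# outside it. Numbers here are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, size=(500, 1))
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
net.fit(x_train, y_train)

# Inside the training range: typically close to the truth.
print(net.predict([[np.pi / 2]]), "vs true", np.sin(np.pi / 2))
# Far outside it (the "first dust storm"): all bets are off.
print(net.predict([[6 * np.pi]]), "vs true", np.sin(6 * np.pi))
```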

RM: Do you worry at all about where these algorithms may go as they impact our lives, and also how we interact with each other? The example I give a lot of times is: let’s say we’re all using Waze. Waze learns that you and I both show up to work every day at 8:30. Today, you’re running late, and I’m not.

Would Waze send me down a path that’s a little bit slower so that you can get to work on time? Maybe it would want to do that because it’s fair: you’re going to be late, I’m not, and I can still make it with a three-minute delay. That’s one definition of fairness. Another definition of fairness is: you’re irresponsible, you’re always late, and I’m always leaving on time.

So fairness should be: hey, I’m responsible, I get there on time like I planned. Or is Waze going to say, well, I’m more likely to click on ads, I make more money for Waze, so it’s going to send me the faster way? Do you worry about things like this playing out? Do you think they will?

BR: Very much so. I think that’s really realistic. It boils down to, like so many things in our human interactions: whenever there’s a decision to make, there are stakeholders, and there are people who benefit and people who lose. Unless we go way out of our way to ensure this isn’t the case, those with the most power or the most resources will come out the winners, again and again.

That is the default. That’s what happens if we don’t really try hard to break it. One of the things that’s particularly disheartening right now is the use of facial recognition technology. Yes, there’s a lot of cool stuff it can do. But then you start tying it to security: if my phone recognizes my face, it’ll unlock for me.

There are some well-documented failures of that, both unintentional unlocks and also racial and gender bias in how effective it is. And then there’s face recognition for law enforcement, which is hugely problematic. The logic goes: well, it’s not a perfect tool, but it’s the best we have, so we’ll go with it. The trouble is, it’s not unfair uniformly. It’s most unfair to the groups that have the least resources to push back against it. Those are the things that trouble me the most.

RM: Very good analysis. Brandon Rohrer, thank you for being on today. If people want to know more about your machine learning courses and blogs, where should they go to find that?

BR: Just google my name. It’ll be the first or second link. End-to-End Machine Learning is the name of the blog, and there’s also a set of courses on teachable.com.

RM: All right. And that last name is Rohrer. Thanks for being a guest on the podcast. For those of you who are listening, if you have questions that you would like us to ask a future guest or guests you would like to see, please send those to podcast at talla.com. We’ll see you next week.

Subscribe to AI at Work on iTunes, Google Play, Stitcher, SoundCloud, or Spotify and share with your network! If you have feedback or questions, we’d love to hear from you at podcast@talla.com or tweet at us @tallainc.