AI at Work Episode 44: AI and Dentistry with Pearl’s Founder and CEO, Ophir Tanz

Episode Overview

In this episode, Rob May had a conversation with Ophir Tanz, Founder and CEO at Pearl. Pearl is a computer vision company focusing on solving challenging problems in the dental industry. Prior to Pearl, Ophir founded GumGum, a leading computer vision company and creator of the world's first in-image ad platform. Tune in to learn more about this unique opportunity in machine vision and machine learning.

Listen to every episode of AI at Work on iTunes, Google Play, Stitcher, SoundCloud, or Spotify.

Rob May

CEO and Co-Founder
Talla

Ophir Tanz

CEO and Founder
Pearl

Episode Transcription   

Rob May: Welcome, everyone, and thank you for joining us on the AI at Work podcast. I’m Rob May, the co-founder and CEO of Talla. Today, my guest is Ophir Tanz. He is the CEO of Pearl, which is a computer vision company that actually spun out of GumGum. Ophir, welcome to the podcast. Why don’t you tell us a little bit about your personal background and then tell us the Pearl founding story?

Ophir Tanz: My background is really that of a serial entrepreneur, going all the way back to starting a company in high school. Since then, I attended Carnegie Mellon, where I got my bachelor's and master's. That was really where I had my first real exposure to computer vision; CMU is a leading institution in that particular field.

The company I've been running for the past 11 years is GumGum, as you mentioned. The idea behind GumGum was really to apply AI, and specifically computer vision, primarily to the media category. We were ingesting very significant parts of the open web, analyzing imagery and video to enable brands to target marketing messages and to understand content in order to deliver brand-safe environments.

We had a sports division which analyzed broadcast television, social media, streaming, and various esports platforms. We used CV there to calculate the value of exposed sponsorships, which could then be used by rights holders and brands to negotiate fees and understand ROI.

GumGum, I think, is a good example of a commercialization of CV. The company grew to nearly $200 million in revenue and became profitable. It's really at GumGum where we started incubating this dental opportunity, which was ultimately spun out and became Pearl. That was driven by a core thesis I developed while leading that company.

As you know, all these developments in AI, what we refer to as the AI spring, occurred around 2010, 2011. We're really starting to see that stuff come to fruition. GumGum was founded in 2008, really as an image recognition-oriented company. We focused on these capabilities before they were the cool thing to do.

The idea that I had, as we continued to develop, grow, and become more profitable, was that these capabilities in AI, and specifically in our case computer vision, were very powerful but also very new, and therefore not yet widely applied. The idea was that we could apply this capability broadly and do so very efficiently, because we had built out significant infrastructure to support solving CV-oriented problems.

And ultimately, obviously, we made the decision to spin the dental opportunity out into a new company. I'm happy to talk about how that all took place, or about the company itself. That's just some high-level background.

RM: Tell us a little bit now about where you see Pearl going, because it's kind of a unique opportunity. You're seeing a lot more of this: AI-driven software coming to more traditional industries. But tell us why it's so interesting as a machine learning opportunity.

OT: I mean, it's an opportunity that is completely driven by machine learning; it's really core to everything that we do. About three years ago, while still at GumGum, I started the process of collecting what has become the largest collection of dental radiographs in the world, focusing specifically on traditional dental X-rays but also including a wide range of other radiographs. We began the process of annotating that material.

The goal of Pearl is a few things. At its core, what we're trying to do is solve fundamental challenges in the field of dentistry using computer vision and AI, in a way that could touch every single constituent within the category. For example, if you're able to read dental X-rays at superhuman levels, that has really profound implications for insurance, enabling claims to be processed automatically, which is an effort that's underway with a number of entities.

You have the opportunity to integrate that capability into X-ray sensors and the software that powers those sensors to serve as a second set of eyes for dentists. We're working with groups called DSOs, Dental Service Organizations, entities that own large collections of dental offices.

In addition to being able to provide practitioners with this second-opinion capability, we are also able to harvest a lot of insight and data on the back end in terms of what patient populations look like, how you might go about codifying treatment plans over time, practitioner performance so that the appropriate training could be deployed, and a variety of other exciting opportunities like that.

Additionally, we have a suite of tools that's really oriented towards automating tasks on the part of dental laboratories. There are some esoteric but very critical, repetitive tasks that dental laboratories need to engage in in order to produce things like restorations and crowns. We've developed tool sets to automate a lot of that, so really, there's a lot of opportunity to deploy this technology in ways that could fundamentally impact the industry.

RM: Interesting. You mentioned earlier that you got this data set annotated for Pearl. I'm interested in two things. Number one, tactically, did you use CrowdFlower? Did you do it yourselves? How did you find dental experts for that? And number two, talk a little bit more broadly: are there other opportunities or challenges? What do you see more generally for annotation on machine vision data sets?

OT: Yeah, it’s a really good question and one that I have a lot of experience with, because at GumGum, we really solved for thousands of different types of recognition. Think make/model vehicle recognition, nudity detection, thousands of objects, all this stuff that extends beyond ImageNet. I describe it as building a company within a company because you have to have a significant toolset and infrastructure and, in many cases, relationships to be able to manage annotation processes that are high quality and that are operating at scale.

But when you look at something like Pearl, it's really a fundamentally different type of challenge, because providing high-quality annotations requires very specialized knowledge when you're talking about medical radiographs. Even if you're a fourth-year dental student, you're not yet skilled enough to provide the level of accuracy that we require.

What we've had to do is leverage a lot of the same tool sets that had been built at GumGum: proprietary annotation environments with a lot of quality assurance mechanisms built in. We recruited about 1,000 dentists from around the world who have been labeling the radiographs for quite some time. We have tier ones and tier twos and a special council that we built in order to reach consensus and break ties, and we have really had to drill down significantly into specific pathologies.
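
To make the tiered review concrete, here is a minimal sketch of how weighted, tiered voting with council escalation could work. The tier weights and function names are illustrative assumptions, not Pearl's actual pipeline:

```python
from collections import Counter

# Hypothetical tier weights: tier-2 reviewers count double (an assumption
# for illustration; Pearl's real QA mechanisms are not public).
TIER_WEIGHTS = {1: 1, 2: 2}

def resolve_label(votes):
    """votes: list of (tier, label) pairs for one candidate finding.

    Returns (label, "consensus"), or (None, "escalate_to_council")
    when the weighted vote is tied.
    """
    tally = Counter()
    for tier, label in votes:
        tally[label] += TIER_WEIGHTS[tier]
    ranked = tally.most_common(2)
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None, "escalate_to_council"  # the special council breaks ties
    return ranked[0][0], "consensus"

# Two tier-1 annotators disagree; a tier-2 annotator settles it.
print(resolve_label([(1, "caries"), (1, "no_finding"), (2, "caries")]))
# -> ('caries', 'consensus')
```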

We currently solve across 21 different pathologies in the dental category. It's no small feat, and it also speaks to, as you're well aware, just how much it helps to be very narrowly focused on solving specific problems given the current state of the art in AI. It's very good at doing very narrow things, and it's not particularly effective if you're trying to do too much at once. That's a particular kind of challenge.

To my knowledge, as we compare ourselves against other efforts within radiology, whether other companies or university efforts, we have a stable of labeled annotations that really surpasses a lot of what I've seen out there, although in some cases we don't have complete insight into private companies. I can tell you that the predictions being returned are very high quality and really exciting to see.

RM: Have you built annotation into the tools and workflows that you're using now, so that as you get more dental records and as you make mistakes, you can correct them, clarify things, or label new things you didn't recognize before? Is that an ongoing part of what you're doing at Pearl? If so, is it done by your team, do you outsource it, or are you trying to get the dentists who might be your customers to do it?

OT: It's a good question. We do have the capability to edit annotations and return corrections or ideas back to our core system. The idea there, obviously, is that if you can create a flywheel whereby your partners are helping you improve your quality scores, that's a good thing.

I would say that to date, it's been coming primarily out of Pearl and its direct annotation efforts. But now we're beginning to apply this in market at very large scales, so we're really getting this capability distributed, like I mentioned before, via the X-ray manufacturers themselves and the insurance companies. We're not going to individual dental offices, but we're seeing significant scale. Therefore, if we built our tools right, we should be able to get back really high-quality edits, which would be incredibly helpful.

RM: Interesting. Let's change gears a bit and talk about some other things; maybe you're less involved in this. What do you see as some of the trends in computer vision, particularly as they apply to the broad versus narrow model distinction you were talking about, and how might some of the technologies impact use cases at the edge? Are there edge use cases in dental, for example, or can it still be mostly cloud-based? What do you see happening in the industry there?

OT: I mean, look, it’s really interesting. When you’re trying to solve hard problems and what matters is the output and the accuracy and consistency levels of those outputs, then you get very tactical and do what needs to be done in order to solve these problems. So we certainly are leveraging, obviously, deep learning and a lot of the obvious methodologies for acquiring data sets and training up on them and optimizing that.

But there are a lot of other heuristic-based approaches and non-machine-learning computer vision technologies that we've implemented to solve for particular pathologies. One thing that we utilize, and that I've been seeing a lot of excitement around lately, is GANs, particularly, in our case, for increasing the resolution of an image.

If you have a pathology, say a cavity, and there's a box around that cavity, and it's a low-resolution image, 23 by 24 pixels, you really need or want to be feeding the machine something of higher resolution. So using GANs to improve the resolution of that input is very helpful.
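
For readers who want a picture of what that super-resolution step can look like, here is a minimal sketch of an SRGAN-style generator in PyTorch. It is an illustrative assumption, not Pearl's actual model, and the adversarial training against a discriminator that makes it a GAN is omitted:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-PReLU-Conv block with a skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """SRGAN-style 4x super-resolution generator for grayscale crops."""
    def __init__(self, blocks=4):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, 64, 9, padding=4), nn.PReLU())
        self.body = nn.Sequential(*[ResidualBlock() for _ in range(blocks)])
        # Each PixelShuffle(2) stage doubles spatial resolution.
        self.tail = nn.Sequential(
            nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(64, 1, 9, padding=4),
        )

    def forward(self, x):
        return self.tail(self.body(self.head(x)))

# An (untrained) pass over a 23x24 crop around a suspected cavity: 23x24 -> 92x96.
crop = torch.rand(1, 1, 23, 24)
print(Generator()(crop).shape)  # torch.Size([1, 1, 92, 96])
```

In a real pipeline the generator would be trained on radiograph patches, and the upscaled crop would then be fed to the downstream pathology classifier.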

Generally across the industry, I think things like synthetic training and generative modeling are areas where I think there’s been a lot of philosophizing historically, but now we’re starting to see some of that stuff be put into use. I think it’s very interesting and very exciting.

I think modeling visual attention is an area where some real progress is being made, and that relates to both CV and NLP; it's an interesting area of study. Obviously there are efforts around unsupervised learning; nothing really to speak of in terms of major breakthroughs there yet, but I anticipate that's coming at some point, and that would really change the game in a number of important ways.

RM: Now, when you look at the ecosystem that you play in, is there a problem that you’re not working on but that you wish somebody else would work on or solve that would make what you do at Pearl much easier? It could be a new type of model or a model that would run faster or do something better. Or it could be tooling and infrastructure, just anything where you think, like, man, here’s a problem that we are not going to solve for whatever reason, but it would be really nice if somebody else did this.

OT: Well, I mean, look, at a very high level, and I'm sure every company trying to use this technology for practical real-world use cases would say the same thing: we have a very smart team of CV engineers and mathematicians working on these problems. They're spending most of their time trying to build around the existing frameworks such that we're able to solve the problems we're looking to solve.

They're not spending a huge amount of their time trying to rewrite the low-level code of a deep learning network. Certainly, any advances, specifically in our case around convolutional networks, which I'm sure will come down the pipe at some point, will enhance our ability to improve our predictions and our F1 scores, and to do so with less data.
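
For context, the F1 score he mentions is the harmonic mean of precision and recall, a standard summary for detection tasks where classes are imbalanced, as pathology findings in radiographs typically are. A quick sketch with made-up counts:

```python
def f1_score(tp, fp, fn):
    """F1 from raw true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)  # fraction of flagged findings that are real
    recall = tp / (tp + fn)     # fraction of real findings that get flagged
    return 2 * precision * recall / (precision + recall)

# Illustrative numbers only: 90 true positives, 5 false positives, 10 misses.
print(round(f1_score(90, 5, 10), 3))  # 0.923
```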

We're currently looking at more traditional dental X-rays, but we also have a very large collection of CBCT scans, cephalometric scans, panoramics, and all these different types of radiographs. We have experimented with so many of the frameworks out there, and I think we have a pretty good handle on which combination of those tools yields the optimal results.

In our case so far, for this particular use case, we're getting accuracy levels in the 99th percentile across a lot of the pathologies we're looking at. So we're pretty good on that front. But it required a lot of time, money, and effort, and it would have been nice if it had required half as much. I think that is potentially going to be possible in the relatively near term. I guess we'll have to see, right?

RM: Now, as somebody who's relatively technical running an AI company, what do you think about the general media coverage of computer vision and machine learning as it relates to vision? Are there things that really bother you, that you feel the media doesn't get right, or things that you think they're not covering that they should be more focused on?

OT: I mean, the media wants to put out things that are somewhat sensationalized, because that's what's going to get clicks and article reads and whatnot. So that's not hugely surprising. But I do think it's really fascinating, and almost comical, just the general ferocity of the debates that exist not only in the media but among Elon Musk and Zuckerberg and Hawking and all these folks throwing their opinions into the ring.

I guess I just have another opinion to throw into it. My personal opinion of all this is AGI is coming. That, to me, is just not a question. I think that if we survive long enough as a species, it’s coming. That might be in 500 years. I don’t think it will be that long, but it might be a really long time from now. We’re certainly very, very far away from it at present.

One of the things that I like to point to and I’ve seen other folks point to is, I think it was in 1933, there was some nuclear physicist who came out and said that it would absolutely never be possible to extract atomic energy from atoms. The next morning, it was solved.

I think that AI, in the current state of things, is very much like that. It’s like, you have all of these paradigm shifts happening in terms of computing speeds and just different developments within computer science. Then you have brain science, which, if we are able to advance there, will have obviously very significant implications for our ability to model that understanding within machines.

Then you have just the pure computer science-driven and mathematically driven efforts to advance the state of AI. We could really find ourselves, six months or a year from now, with a fundamentally new paradigm that just advances things in a dramatic way. It would be very similar to what happened with deep learning.

We're living, in my opinion, in an incredible moment in time, where you have Yoshua Bengio, Yann LeCun, and Geoffrey Hinton. These are three guys who got together and said, we need to be doing more on the deep learning side of things, because there's not enough attention being paid to it.

That, in large part, is what led to us being able to now apply all these capabilities. I kind of think of them as Benjamin Franklin and us as Thomas Edison, and it all happened over the course of a few years. And they’re still young and active and researching and talking. It’s an amazing moment in time.

Now, whether AI is currently the new electricity or whether it's all hype, I don't know. It might just be a really powerful pattern recognition tool for the moment, but one that will have really dramatic and fundamental effects across a wide range of industries. I really believe that at Pearl we are going to largely reinvent the way people are able to pursue dentistry, dramatically increase efficacy, and change the nature of what it means to have structured data in that category. I mean, I left my previous company because I'm such a believer in this particular effort.

But I don't doubt that over the long term, all these things are going to be solved. And I do think that Elon Musk is right that, long term, the biggest threat to civilization is probably AGI, assuming we survive all the other stuff we're doing to ourselves with nuclear weapons and the environment, you know? But I don't think that's around the corner. I'm not necessarily nervous about it in my lifetime, though I don't think it's an impossibility.

Killer robots are always going to get more press. In the meantime, what people aren't writing about as much is these very narrow but very powerful and impactful applications of the technology. I think Pearl is a great example of that, but there are a million others. That's really what's going to change the nature of patient care in health care and of efficiency and yields in agriculture. All these really magnificent things are going to happen as a result of this capability.

RM: Yeah, it’s interesting. You mentioned AI is electricity, which is a common theme that a lot of people are talking about these days. I’ve actually read a couple of books in the past year on the history of the electric industry to sort of understand how it evolved. I don’t draw the parallel yet because I think a unit of power is relatively fungible and a unit of intelligence or whatever you want to call it is definitely not, at the moment. We may get there at some point, but we’ll see.

While we're on the subject of intelligence, and just so we can wrap up on this question: some people are predicting that we're entering a phase where deep learning is going to hit some of its limits, and we're going to have to rethink some of our other types of AI.

In some of the angel investing that I do, I've started to see people working on things like symbolic logic processing systems again, paired with neural networks. I've seen evolutionary algorithm approaches and probabilistic programming. Some people in the NLP space are moving back to these sorts of cognitive architecture models, where you have specific models that do more specific things, going back to the Noam Chomsky view of the world.

So much of modern computer vision's gains have been driven by convolutional neural nets, deep learning, big data sets, and these latest trends. Do you see anything on the horizon that tells you where the next breakthrough will come from? Do you think it's more of the same, more labeled data and everything else, or do you see any of these other, maybe non-connectionist, AI approaches coming into play?

OT: I mean, it's a really good question, and it's a really difficult one. It's not a question that any expert in the field is going to give you a fantastic answer to, right? A lot of these technologies and frameworks that you mentioned really carried the torch for a very long time in AI. They got a lot more attention than deep nets did.

The notion of them coming back into fashion and, in some way, shape, or form, dramatically offering some level of improvement that is currently non-obvious I think is something that would not be surprising at all. I think that makes a ton of sense. I think there’s a lot of good, basic math and theory there in the same way that a lot of the math and theory that is behind what’s currently driving the revolution was developed in the ’50s and ’60s.

I think a lot of what we've been able to accomplish practically with deep nets has been driven by these, granted, very simplistic models of the human brain. That has offered up ideas that have brought us to this point, where we're now very functional. I'm personally a big fan of neuroscience research and of continuing to understand this miraculous thing that is the human brain, just because it's so deeply efficient. It seems to be doing things like backpropagation and gradient descent, but doing them much more efficiently. So it's kind of like we're on the right path, but it's not exactly the same thing.

What I can say, I guess, in closing, is that there are now arguably the smartest people in the world, and a massive amount of capital, being poured into solving some of these problems and generalizing these capabilities. So I do anticipate a string of breakthroughs, both minor and major, over the next decade. It's really difficult to know where those are going to come from, and that's why there are so many disparate research efforts and so much fundamental research happening.

One of the things we do at Pearl is run a Pearl University; I want everyone here to be very fluent in AI and ideally fascinated by it. Some of the required reading that we put out is this book called Architects of Intelligence.

I would highly recommend it. It's basically interviews with 30 of the top AI folks in the world, asking a lot of the questions that you're asking. It's just amazing to see the level of disagreement, or the general absence of any clue as to where the next thing is going to come from, with everyone having their own unique ideas about it.

I mean, we're definitely in the first inning. But again, it's very functional. This stuff works incredibly well, such that even if all progress on it stopped today, it'll be part of the tool set into the future, because it works better than other things possibly can, especially as it relates to, say, image recognition, and certainly to various aspects of NLP. I know that's probably an unsatisfying answer, but I would highly recommend that your audience pick up that book and give it a read, because it's just super fascinating.

RM: Awesome. No, that was actually a really interesting answer. Ophir, thank you for joining the podcast today. If people want to find out more about Pearl, what’s the URL?

OT: It’s hellopearl.com.

RM: All right. And thank you, everyone, for listening. If you have questions you'd like us to ask or guests you'd like to see on the show, please email us at podcasts@talla.com, and we will see you next week.