AI at Work Episode 47: AI at Fractal Analytics with Chief Practice Officer, Sankar Narayanan

Episode Overview

Tune in to hear from Sankar Narayanan, Chief Practice Officer at Fractal Analytics. Learn what a Chief Practice Officer does, what they are working on at Fractal Analytics, Sankar’s insights on AI, ethics, and much more. 

Listen to every episode of AI at Work on iTunes, Google Play, Stitcher, SoundCloud, or Spotify.

Rob May

CEO and Co-Founder
Talla

Sankar Narayanan

Chief Practice Officer
Fractal Analytics

Episode Transcription   

Rob May: Hi, everyone, and welcome to the latest episode of AI at Work. I am Rob May, the co-founder and CEO at Talla. Our guest today is Sankar Narayanan, the Chief Practice Officer at Fractal Analytics. Sankar, why don’t you tell us a little bit about what Fractal Analytics does? Then, what’s your background? What does it mean to be a Chief Practice Officer? 

Sankar Narayanan: Rob, it’s good to be here. Thank you for the invite. My name is Sankar Narayanan, and I’ll do a little bit of introduction about the company and then my profile. Fractal Analytics is a pure-play analytics company. The reason I say “pure-play analytics” is because the definition of “analytics” is evolving by the day. Depending on the type of company, the type of people that you speak with, you would get 100 different definitions. 

The way we think about this is from the point of view of enterprises which make large, complex, high-velocity and low-velocity decisions at scale, right? Our aspiration as a company is to power human decisions, every human decision in the enterprise, through AI, engineering, and design. Over the course of this discussion, I’ll talk you through how we bring these together. That’s what Fractal is in essence. 

My role as chief practice officer is to lead our Fortune 500 and Fortune 100 clients along the journey towards institutionalizing analytics and solving problems at scale by bringing together algorithmic sophistication, setting up an effective data engineering pipeline, and applying design and behavioral sciences to ensure that each decision that they make for their consumers, for their shareholders, for their employees, is relevant and operates within the context of each of these stakeholders. 

I lead a specific set of vertical practices for Fractal. I look after banking, insurance, health care, and technology. 

RM: Interesting. Now, you’ve shown a strong interest in AI ethics in particular. I’m not sure how much of that is personal and how much of that is because of what Fractal’s doing. Can you talk a little bit about AI ethics, and analytics, and the intersections there, the ways they interact, and the things that concern you? 

SN: Sure, I wouldn’t use the term “concern” as much as “intrigue”. The reason I say this is my interest in ethics around AI and ethics in general is very strong thanks to various experiences that I have gone through in my life. 

Now, when it comes to AI-led decisions, there are sort of two frames of reference, or frames of thought. One is the thought that AI will be all-pervasive, which means that every decision, every problem can be framed as an AI problem and can be solved through AI. These are the AI optimists who believe in the power of algorithmic decision-making. They believe that every decision can be made better through the power of data analytics and AI. 

Then there is this second group, who are believers in AI, but believers with additional constraints and fears. My intrigue is around the context and the background behind the fear. As I’ve thought more deeply about it, and as I’ve followed all of the initiatives that different governments and agencies are undertaking with respect to ethics around AI, I’ve come to see that the problem can be broken down into a few dimensions. 

Number one is a lack of complete knowledge about what AI is and what AI can do. That’s, in our minds, the number one reason why there is fear. There is always a fear of the unknown, right? Behavioral science tells us that the more unknowns somebody is exposed to, the more fear it causes in humans. That’s number one. 

Number two is half-baked knowledge. This, for me, is an even bigger problem than no knowledge whatsoever, because if one does not have any knowledge, then the curiosity angle kicks in and actually makes this a positive problem. With limited, perfunctory understanding, assumptions get made. 

That’s where an algorithmic decision-making process incorporates a certain amount of bias. That bias, for me, is the biggest challenge that we need to, as a collective, handle for organizations that want to leverage AI. That’s where my interest lies. 

Then I thought about, how do we handle bias? Is bias something that can be eliminated? Can it be handled? I’ve heard a couple of strategy consulting firms talk about a world without bias. How can human bias be removed? 

My direct response to that point is: we are humans, and we will always have our biases, right from the point of waking up in the morning to watching the last TV show we want to watch or reading the last article we want to read before going to bed. Everything that we do carries the element of our underlying context. This shows through in everything that we do. 

We’ve all heard that communication is not about what the person says, but what the person hears. That inherently means that everything I assimilate in terms of information as an individual has the inherent context that I come with. That introduces bias. 

The real objective function is not to remove human bias, but to identify how AI can be powerful. That’s where the transparency and accountability of AI become an important, key driver behind the use and the scaling of AI. That’s where my interest area lies. 

RM: Interesting. When you think about model bias, a lot of the bias that we get today is from data sets that were created for some other purpose. They weren’t initially created to go into a model. They were created for humans to use in some workflow or whatever. And so there may be some implicit bias in there. Do you worry about intentionally biasing models down the road? 

Just to take an example, maybe there starts to be data and algorithms that show that people who wake up earlier get some kind of benefit. The algorithm favors them somehow for whatever decision it is– jobs, credit, whatever. So, people start to fake or control their inputs to the model in order to be seen as something that they’re not, for the algorithms. 

Do you see a time when maybe I can hire somebody to log in as me online, and go browse the web, and read a bunch of scholarly, sophisticated articles instead of watching cat videos on YouTube, so the algorithms that are watching us have a different perception? Do you think we may move toward a time where we, as individuals, intentionally try to bias models in our favor? 

SN: A system is never going to solve a problem 100% of the time. Oftentimes, the challenge that is associated with building good systems and very strong systems is that we are always concerned about the 1% or 2% of things going wrong. 

Think about it: 99% of the time, everything goes right. We should be building systems that work 99% of the time and incorporate humans in creating guideposts or swim lanes that allow the algorithm to become better and better, right? Of course, at some point in time, there’ll be something that goes wrong, right? 

Let’s take a simple example of driverless cars. There is so much press and news about the harm that can be caused by driverless cars. Take the recent example of a Tesla, an autonomous car, having an accident, and what that does to the drivers and the people inside the car, and so on, right? 

If you compare that, and look at it as an isolated data point, it looks like algorithms are making mistakes and can’t be trusted. If you think about this problem as comparing human drivers versus algorithmic cars or driverless cars, then the problem can be reframed or redefined. That is really where the meat is in terms of understanding, how can algorithms push the overall progress of the society forward?

Yes, there will be the odd mistake that happens, and there could be multiple reasons why that mistake happens. If we continue to think about designing systems that’ll operate 100% of the time and emulate human decisions, that’s just impossible. 

I agree with what Elon Musk said, which is that humans are underrated. We need to set the bar higher and build systems that bring together the power of human cognition and the ability of a machine to process at infinite scale. That’s where the real magic is. 

RM: Interesting. Let’s shift gears a little bit and talk more about Fractal Analytics. Part of what I want to dig into is, I think for your average person that is deploying an analytics solution, analytics is always something that has been very mathematical and data-driven. This is one of those fields where a lot of people are sort of promoting AI, where they don’t have AI, calling things AI that aren’t AI. 

What’s your advice to somebody who’s evaluating analytics tools, looking at artificial intelligence as partially being some piece of some of the analytics tools? What kind of advice would you give somebody to sort of make that evaluation as to, is this the right tool, and how much does this tool actually have some artificial intelligence in it? And, where are these roadmaps going? 

SN: That’s a great question, and something that our clients and the larger ecosystem ask us all the time. Before I answer it, I just want to define AI a little bit, to simplify things and to talk about what we call AI and what it means to the rest of the world. 

For us, AI is any system that can match or exceed human capability in a wide range of cognitive tasks. That’s what AI is. It could be a simple descriptive analysis that somebody does. It could be the most sophisticated deep learning algorithm somebody builds. The objective is to match or exceed human capability and capacity in a wide range of cognitive tasks. 

The more it can exceed, the better it is as an artificial intelligence system. With that context in mind, we constantly interact with companies that are trying to figure out, what is the right way for them to exploit and explore the power of algorithmic decision-making? 

Depending on who they speak with and who they interact with, they get different kinds of views. The way we think of it is as a three- to five-step roadmap. That’s how we genuinely think about this. It does not always start with building an all-pervasive, “how do we transform the organization through AI” type of roadmap. 

The first step, really, is to execute one or two use cases and genuinely see what can be done in the limited availability of budgets, limited availability of people bandwidth, limited availability of capability and expertise, and see where the organization is. That is the first step. 

The second step then is to figure out who the key sponsors within the business teams are that have the motivation to experiment. Experimentation is a very, very critical part of succeeding with AI: experimentation with respect to use cases, experimentation with respect to data, experimentation with respect to techniques, and being OK to fail and fail fast. This is the second big requirement, which is to identify people in the system that are going to be ready to experiment. 

The third step then is to realize some value out of AI. Take the use cases that were constructed into production and see how easy or difficult, cumbersome, complex, or complicated it is to get impact and business value out of AI. 

Once these first three steps are done, that is when you truly look to build a roadmap to transform the organization through AI. As you build the roadmap, recall that when I was introducing the company, I talked about two other dimensions. 

I’m going to bring that back here, which is our belief. Time and time again, we have seen this in all of the use cases that we’ve worked on– that for organizations to get their value out of AI, AI is not enough. I’ll say that again. AI is not enough to get organizational value out of AI. 

There are two other important dimensions that need to play along with AI to get the maximum value. The second is engineering. Engineering is all about setting up the right data pipeline and the right data management framework, development operations, or DevOps, as some folks call it, and the ability and the motivation to set up platforms that can scale algorithms. That is an important component.

The third dimension is design. I remember reading a very famous quote by Steve Jobs, which is “design is not about how something looks, it is about how something works.” AI is only as powerful as it is relevant to the person at the end of the value chain that is actually executing on that decision. 

It could be a claims handler handling a claim in an insurance organization. It could be a contact center agent that is speaking with a customer, an irate customer at that point in time. It could be an analyst that is looking to improve the overall data quality of the underlying data. It could be a consumer who is, at any point in time, just thinking about all the options that are in front of her when she makes a purchase decision, whether she’s online or in a store. It is about that contextual relevance of that algorithm to the user or to the consumer. 

That’s where the element of design comes in, with all the biases that we, as humans, have. It is about bringing these three elements together, AI, engineering, and design. That is where the real magic lies. 

RM: Very good points. Along some of those lines, let’s keep talking about sort of analytics and AI. One of the differences between AI models compared to previous forms of technologies is that the outputs can be probabilistic and less binary. 

Do you see customers struggling with that at all? Do you convey in your tools, “Hey, this isn’t a yes/no decision, this is an 82% probability decision”? Do you address that in the tool? Do you address that through education of the end user? Do people understand it, or are people struggling with it?
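For readers who want to see the distinction Rob is drawing, here is a minimal, hypothetical sketch (not from the episode, and not Fractal’s or Talla’s tooling) of how the same classifier can expose either a hard yes/no label or the probability behind it; the dataset and model choice are illustrative assumptions only.

```python
# Minimal, hypothetical sketch: the same scikit-learn classifier can return a
# hard yes/no label via predict() or the underlying probability via
# predict_proba(). The synthetic dataset below is purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

sample = X[:1]
print("Binary decision:", model.predict(sample)[0])                 # e.g. 1 ("yes")
print("Probability of 'yes':", model.predict_proba(sample)[0, 1])   # e.g. 0.82
```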

SN: That’s another good question and something that we constantly encounter. Let me give a couple of examples. We were speaking with an organization, a couple of stakeholders, and let me define the profiles of the two stakeholders, so you get a picture of who they are. 

There is Joe and then there is Jane. Joe is an analytically trained individual, has done lots of database analytics work in the past and is generally well versed with how to use data. 

Jane, on the other hand, is someone that is driving a lot of impact in the role that she’s playing for the organization. Now, how she drives impact could be through data, could be through better marketing, could be through better products, and various other things. 

Both of these are people in the organization that are driving impact to the organization through multiple different ways. Now, when we have conversations with Joe, who is analytically far more well-trained, if you will, typically, the conversations become about, what better techniques can we use? How can we improve the overall accuracy? How can we improve the robustness of the model? And, so on and so forth, right? 

There is a lot of motivation around building the best analytical tool that one could possibly build and to make it as sophisticated as possible, so you can actually talk about it in various places, you can develop some leadership within the organization, so on and so forth. 

Jane, on the other hand, could not care less about how brilliant a model is or what latent variables were used in building it. What Jane cares about is: are our products and services relevant to the maximum number of people, the consumers that interact with us? That is the key outcome that Jane is seeking. You could achieve it in multiple different ways. 

What is the best, most optimal, and most sustainable way to develop this loyalty, or to make our products and services more relevant to consumer needs? She’s thinking about business impact, being relevant to the greatest number of consumers, and therefore driving a better organization at the end of the day. 

Now, where am I going with this? The real point is to bring both of these lines of thinking together. Today a lot of what we do is to help organizations identify the “Joes” and the “Janes” and bring the two together. When you bring the two together, you’re actually creating products and services that consumers would love, not just be relevant to them, but they would absolutely love. 

Because the solution that we end up developing is going to incorporate algorithmic sophistication, it’s going to incorporate the fact that it needs to be sustainable and high impact, and it’s going to incorporate the fact that it’ll be personal to every consumer at scale. That’s a really, really tough problem to solve. That’s where we are playing today. 

RM: Gotcha. A lot of the stuff that we’ve talked about today is around bias and ethics and some of those things. How are companies, how are some of your customers starting to deal with this, or are there methods they could use to start to deal with this? 

As one example that we could talk about, people have tried to start doing things like setting up an ethics board, but even that can be controversial, right? I mean, Google set up this ethics board. I think, what, did it make it a week, or 10 days, or something like that before they had to dismantle the thing? I mean, what do you think about that? 

SN: Look, Google is Google, right? If they sneeze, there’ll be 10 million people tweeting about it. If they cough, there’ll be a bunch of people that don’t like the fact that they coughed, right? Some of these companies, thanks to how visible and how game-changing they are, they will encounter questions about these kinds of initiatives. 

Google is also so deceptively simple that it is really easy for people to ask questions and to understand the why behind it. Think of the search bar, right? It is one single search bar and an entire white screen. It is so simple that everybody can use it today. 

Now, this is where the power of Google is. Because behind that simple search bar, there are trillions of lines of code that are running at breakneck speed to provide us the information that we need in the time frame that we want. And as humans, we have become more and more intolerant of waiting for information. Instant gratification is key. Therefore, everything that we want, we want it now. Google is operating in that world. 

The flip side to that is we are worried that behind all of this information, behind all of this intelligence, how do things work? How many people know my payment details, my card details? How many people know where I am at whatever point in time? Google is collecting all of this information about my locations, about my payment details, and so on, and so forth. 

Back to the fear of the unknown, right? The more unknown or esoteric something is, the more fear it causes, it induces in humans. That’s the context. 

Now, what is the solution to this? Is an ethics board the solution? Perhaps. I don’t know the answer to that. I don’t think I’m qualified enough to answer it. But here are some ways in which the AI collective can help the rest of the world, which is by making algorithms more accountable. 

We have to continue progressing toward making algorithms transparent and more accountable. The more accountability the algorithms display, the less fear they will cause in people. That is the real magic, which is to make AI friendly. There are some interesting areas of progress being made. 

For example, there is this concept of LIME, which is an approach to making the explanation of what goes on inside an AI algorithm more transparent. LIME stands for Local Interpretable Model-agnostic Explanations. It is about dialing up those variables which may not come through in a typical AI algorithm, which can look like a black box. 

There is a way to bubble up the key variables or key drivers that are actually resulting in the decision the AI system is making. This is one example of a really good, robust way in which analysts and AI practitioners can help the rest of the world make algorithms more transparent, more usable, more robust, and ultimately, more friendly. That is where organizations and people can see a lot more value and lose the fear that is associated with something we don’t understand. Making things more understandable is where the real issue is. 
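For context on the technique Sankar mentions, here is a minimal sketch of LIME in practice using the open-source `lime` Python package; the model and dataset are illustrative assumptions, not anything discussed in the episode or used at Fractal.

```python
# Illustrative sketch only (assumed setup, not from the episode): use the
# open-source `lime` package to surface the top drivers behind one prediction
# of an otherwise black-box model.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train any tabular "black box" classifier.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

# Build a LIME explainer over the training distribution.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME fits a simple, interpretable surrogate model around this one instance
# and reports which features pushed the prediction up or down.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```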

RM: Well, Sankar, thank you. I think that was a great answer to end on. Thanks for being on the podcast today. If people want to learn more about Fractal Analytics, what is the best URL for them to find you? 

SN: Yeah, we’re right there. It’s www.fractal.ai. It’s as simple as that. I’d love to connect with anybody that is interested in thinking collectively about how AI can be more demonstrable as well as friendly to humans. I welcome the opportunity to connect on LinkedIn, and my details are on our website as well. Thank you, Rob, for the insightful questions. It actually made me think quite a lot about some of the issues that the rest of the world is grappling with and how, as a collective, we can help make AI a lot more usable and a lot simpler for everybody to get value out of. 

RM: Great. Well, thanks for joining us today, and thank you all for listening. If you have questions that you would like us to ask or guests that you would like to see on the podcast, please send those to podcast@talla.com. We will see you next week.