The Intelligent Workplace

Episode 7

Should we really care about a Bot’s feelings? An exploration of Artificial Intelligence ethics.

Priya Gore, Microsoft

This is the first of my interviews that I recorded while in attendance at the Microsoft Inspire event in Las Vegas. Priya Gore is a Global Black Belt and Director of AI & Advanced Analytics for Microsoft Americas.

An experienced tech executive who has worked in a number of high-level positions across the industry, Priya now enjoys helping clients solve their toughest and most challenging business problems. She champions diversity in the workplace, supports the Women In Technology movement and was a founder of the International Association of Microsoft Channel Partners in New England.

Aside from all the amazing work she does in the industry, she is also a wealth of knowledge when it comes to Artificial Intelligence, specifically the ethics of implementing AI solutions.

How do we ensure we remain in control of AI? Can systems be trained to think like humans? If so, do they have feelings? And is society’s motivation for AI aligned with the best long-term interests of humanity?

Priya’s insights in this interview are fascinating… enjoy the discussion.

Chris.L:                

Today, we’re going to have a chat about the ethics of AI, a hot topic in the industry right now. So welcome to the Intelligent Workplace podcast, Priya Gore.

Priya Gore:        

Thank you. Thank you so much for having me on.

Chris.L:                

It is an absolute pleasure to have you here today. And I’m expecting to learn a lot from you. But no pressure.

Priya Gore:        

No pressure. I can tell you and assure you I don’t have all the answers, but hopefully I have a few.

Chris.L:                

That’s okay. That’s the beauty of this thing. We can just discuss them and we can learn from each other. And I don’t think we’ll ever have the answers to everything. But that’s the tech world really, isn’t it?

Priya Gore:        

That’s right. We have to get the conversation going and keep it going. That’s-

Chris.L:                

And every time we get an answer, we learn more.

Priya Gore:        

That’s right.

Chris.L:                

And it keeps changing. I like it. Before we get into the nitty gritty around AI and ethics, I think it’d be really great for you just to give us a bit of a background about your career in tech so far.

Priya Gore:        

Absolutely. I’d be happy to.

Chris.L:                

Yeah.

Priya Gore:        

So I was one of those young professionals who worked my way through college. I started my professional career pretty much right out of high school. And so as I got into technology, I literally started from the ground up and worked my way from a marketing role into a sales role. I learned very quickly and early in my career, and it just so happened, I should mention, to be around the time of the Y2K happenings.

Chris.L:                

I remember well, yeah.

Priya Gore:        

Yeah, so being in tech and being a young professional at that time was really exciting, because it was much like, in some ways, the way people sometimes perceive AI: the world is going to end, oh my goodness, what’s going to happen. And so at any rate, that was a really exciting time to enter into technology, and I’ve just never looked back.

So I started in tech, like I said, working my way through college. Once I got into sales roles, I started to really love and find a lot of appreciation in working directly with customers, and thinking about, frankly, change management, right? When you’re in technology sales, a lot of times it’s really almost taking the technology off the table, which I’ve learned over the years, and really talking to customers about the business that they’re in and what it is that they’re looking to do better. You know, smarter versus harder. How do we make it smarter, how do we overcome all that?

Chris.L:                

So you are saying, you listen to your customers?

Priya Gore:        

This is what we do.

Chris.L:                

Oh my goodness.

Priya Gore:        

Like I said, you know, the best lesson I’ve learned through this career of mine is taking that technology conversation and almost putting it aside, because, you know what, that’s going to be there. And it’s going to be an enabler, and it’s going to help us. But it won’t be any help to us if we’re not looking at the right problems that we want to solve. And so, really, that’s something I’ve learned since the beginning as I’ve grown up in this world. I also want to say that I started my career working as a Microsoft partner. So prior to coming to Microsoft, I was actually with a couple of different Microsoft partners for the first 18 years of my career. And that was a really exciting adventure as well, because I really got to learn the technology business and the business of being in the realm of innovation, such an exciting time, right alongside Microsoft, before joining the company.

Chris.L:                

Yeah. Awesome. And tell us a little bit about the black belt. It sounds awesome. What is it?

Priya Gore:        

Yes. At Microsoft, we have an organization called the Global Black Belt organization. And what it is, is a group of us, a worldwide team, but we are organized regionally, by timezone. So for me, I’m part of the Americas timezone region. That translates to covering Canada, the United States and Latin America. And I focus on artificial intelligence, machine learning and cognitive services; that’s the area that I specialize in from an Azure perspective, our cloud. And I work with customers, really helping our field sales organization when they’re meeting with customers, and our clients are curious about finding their path to innovation, or looking at the next way they might be able to engage technology to advance their business, new paths to revenue, things like that. We’re helping them really identify those high priority use cases. And then in the Global Black Belt organization we have many folks that are deeply, deeply technical.

So we have a team of really expert data science resources, as well as cognitive services resources and cloud native app developer resources, people that can go really, really deep into the technology and can help our clients actually envision these solutions as they come to life. So that’s what we do in the GBB org.

Chris.L:                

And so just to wrap that all up, it just means you’re pioneers in what you do? Is that right?

Priya Gore:        

Well, you know, I find this to be such a great space to work in, because I think the domain of AI is still very new to people. And it’s something we’re all learning together. And so as long as you can sort of approach this whole concept with a growth mindset and really think about what’s possible, and really try not to come in with all the answers but come in with more questions, and really think about taking an agile approach to what it is you’re looking to do, so that you almost build in and make space for failure, for failing fast and remediating quickly. I think that’s one of the best ways that you can really take on something like artificial intelligence, or really any sort of emerging technology.

Chris.L:                

Is that what intrigues you? The fact that you don’t have all the answers, that you’re sort of building this, not on the fly, but it just continues to change every day?

Priya Gore:        

Every day. Every day we talk to a client. Every day I talk to someone, or get on a whiteboard with a customer and work through something they’re thinking about. I’m listening, and I’m learning, right? I’m helping guide and facilitate more than anything, right? And that is so important to be able to do, I think, as someone in this business: to be that person that your client can trust. And I hope that I’m working towards earning that trust every time I speak to someone. And also, us as one Microsoft team and our partners really doing that together for our clients is something I think we all really strive to do as a common goal.

Chris.L:                

Yeah, it’s that element of trust that sort of leads me on to my next question, because the reason why I wanted to speak to you today is I want to talk about the ethics of artificial intelligence. Can you tell me why, at this moment, it’s such a hot topic in the industry?

Priya Gore:        

Well, it needs to be. So this is a very important topic. The ethics and, I’ll call it, governance around AI is really loose; it’s not really that firm globally. And so it really is incumbent on us to take that on, right, as individuals. Each of us as individual consumers has to really care a lot about the ethical impact of AI. And then you think about a company like Microsoft, that is building really amazing capability, really what I call the building blocks for customers to build their own IP and their own innovation, and imagine their own ways of engaging and infusing AI into their ecosystem of customer engagement and workforce enablement.

Then, you know, we have a responsibility. That responsibility is on us at Microsoft to ensure that things like fairness, transparency, model explainability, accessibility, all of these things are super important. And without sort of that governing entity and rules around it, it can very easily become AI for bad. So we all have to take it upon ourselves to take that personal responsibility, and also that organizational responsibility. And that’s something we work on equally closely with our clients, alongside helping them understand what the technology itself is capable of.

Chris.L:                

That’s a fantastic summary. It sort of feels like the right point now to deep dive into a few bits and pieces around AI and ethics. I spoke to a guest on this podcast recently about unintelligent organizations, and it kind of got me wondering. We know that intelligence comes from learning, and we are the teachers of the systems. We as humans, in our own lives, build up experience; we have like a gut feel over time that sort of stops us from being fooled or making mistakes. But machines don’t have that. So my question is, how do we protect against creating foolish or maybe gullible AI systems?

Priya Gore:        

Right. I think that we have to sort of level-set and get grounded on the expectations that we really have of these intelligent AI systems. And I think we need to really make them purposeful, and we need to sort of engineer, build and apply them in the right frame. In other words, we shouldn’t overcast, like, forecast what it is that this thing is going to do for us. So for example, I like to think of AI right now as something that really could be more like decision support, or really augmenting, you know, the human factor, right? It’s not really taking over, and it shouldn’t, right? And so that is my personal view as well. You know, I think that these systems can be super powerful.

I think we’ve seen many examples of machine learning driven algorithms being able to really help humans make better decisions, see things that we couldn’t see ourselves with our own two eyes, or that we just couldn’t find without the support of AI. And so we use it alongside our own human ingenuity and our own sort of, as you said, gut feeling. And don’t forget that when you actually go and start to build a model, the first thing you do is frame out what it is you’re going to do, the question you want to answer. First of all, you have to have really strong data sets. None of this is really going to be intelligent without excellent quality data coming in on the front end. And so without that, you really do have kind of dumb AI. So that’s the first step.

The second thing, too: once you understand sort of that data state, and you really understand and wrangle what it is you want to use for data, so that you can then ask questions of that data and use these tools to do that more intelligently, then, when you build out the use case, a lot of times what we do is actually interview subject matter experts in the business. So all of this intelligence that we’re getting from the subject matter experts is feeding into our approach to the model. So as you can see, it’s data dependent, but it’s also human intelligence dependent. We can’t even ask the right questions without getting that subject matter expertise built into the model and framing it in the model. So, to me, it’s just a great powerhouse at the center that we can use to help us make better decisions.

Chris.L:                

The human involvement leads me on to another similar element, and that’s obviously bias.

Priya Gore:        

Yes.

Chris.L:                

Potentially the systems we create can take on the bias of their creators, you know, whether it be racism, sexism, ageism. How do we ensure that the systems we build aren’t built in such a way that they might become judgmental in their decision making?

Priya Gore:        

So first of all, my answer is it starts with mindfulness. I think just knowing up front that these models are subject to the bias you described is in itself a huge step. I think that in a lot of ways, a lot of the models that are probably out there that maybe have that bias, they may not have been intentionally biased. It just was unintentional bias, right? It’s almost like people just not thinking that through. And I think the more we talk about this topic, the more we really explore what that means and how we can actually start to mitigate bias in model creation itself. And to answer the direct question, I think it also involves needing to have a really diverse team of people kind of weighing in, working on the project, working on the model, and continuing to contribute to the refinement of that model as we move forward.

It’s also about explainability. So anytime that we deploy a model, we should be able to explain it. As it’s giving us answers to the questions we’re asking of the data, we should be able to explain, you know, how and why it’s giving us that answer. So again, humans should always be able to, and should have to be able to, explain how we got to an answer. And if the model is doing its own thing, and people aren’t really part of that equation and part of that workflow, we’re in trouble.
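The explainability Priya describes, that a human should be able to say how and why a model reached an answer, can be sketched in code. This is a minimal, hypothetical illustration using a hand-weighted linear score, not any Microsoft product or method; the feature names and weights are invented for the example.

```python
# Hypothetical explainability sketch: for a simple linear scoring model,
# report how much each input feature contributed to the final answer,
# so a human can justify the decision. All names/weights are invented.

WEIGHTS = {"income": 0.5, "years_employed": 0.3, "debt_ratio": -0.8}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return a score plus a per-feature breakdown of how it was reached."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 4.0, "years_employed": 2.0, "debt_ratio": 1.5}
)
print(round(score, 2))  # overall decision score
for feature, amount in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {amount:+.2f}")  # largest drivers first
```

Real systems would use richer techniques (feature importance, counterfactuals), but the principle is the same: the answer ships with its reasons.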

Chris.L:                 

That was really interesting. Can we talk about one of the major fears that sort of comes up when we talk about AI? It gets a lot of airtime in the media, and I don’t think you’ll be surprised by this: the fact that a lot of people think that the robots are taking over the world and are going to steal their jobs. Can you once and for all sort of put that to bed and give us the real story?

Priya Gore:        

I don’t know that I could put it to bed. But here’s what I’ll say. And again, this is a domain where we learn something new every day. So I think that, you know, the robots taking over, I mean, we have movies like Terminator, right, that have instilled this perception of AI. The other thing that we see a lot is, you know, you say AI, artificial intelligence, and sometimes people just roll their eyes. You know, it’s just so, like you said, over-hyped, overrated. But I think that really, again, it’s about augmenting humans. It’s really about helping us up-level, you know, the tasks that people are doing in their jobs.

I saw a documentary very recently, a really, really great documentary. I didn’t actually agree with everything in it, but that’s what I loved about it: it gave me another point of view. One of the things in it showed a burger joint in California that basically has a robotic burger flipper. And what was so interesting, again, is seeing it. You can read about it, you can see it in a movie, but to see it really happening and in operation, in production I should say, in a fast food chain. They actually have a robot flipping the burgers perfectly and cooking them perfectly.

And the upside in that is, you know, the quality that you need to have going across multiple restaurants. The burger has to taste the same; it has to be that same quality. What could potentially be perceived as the downside is, wow, what about the people that used to do that? But interestingly, in this case, they showed the worker responsible for operating the robot and putting the meat on the grill before it starts to cook. And they interviewed her and asked her, like, who is managing who here? You know, is the burger flipper robot managing you? Are you managing it? It was a funny question. But then they asked her, you know, do you like this job? How do you like it? And she said, “I really like it. This is a really interesting way to work.” And it is about, sort of, just the paradigm. Times are changing.

Chris.L:                

She’s like the robot’s boss now.

Priya Gore:        

She’s the robot’s boss. And we have to embrace it, I think, to just sum it up in terms of the jobs, because the impact is much bigger than just, you know, the burger joint, for example. It’s happening everywhere. And I think that one of the things we really have to do is, as I said before, continue to have a growth mindset, continue to be open to change, continue to be open to innovation and the evolution of what careers in certain industries look like, and be willing as a workforce, and as a society, to really take that on. To really understand what that means, embrace it, and then make sure we have the right training and skilling in the ecosystem to continue to ensure that our employees and workers of the future have the skills they need to work alongside these great technologies and tools and systems.

Chris.L:                

So along those lines, in terms of ethics, should we be setting humans up to be the most successful versions of themselves? Or should we just be ensuring that they have a job, no matter how mundane?

Priya Gore:        

Oh, certainly the former. The best version of themselves, absolutely. I mean, I think that this is what it is: the technology and innovation, the capability of technology being able to enable intelligent systems like this, is here. It’s happening. And so for any organization that is sort of reluctant, that’s like, you know, I just don’t want to change, I really worry for them, because I feel like that’s just going to probably set them back even further. There again, this is much more of a change management themed conversation than anything. And if you think about having grown up in the era of things moving from being on-premises to the cloud, and really watching that sort of transformational journey that’s still happening with our customers today, there was that same element of hesitation, people not sure about the cloud: is it safe, and what really is it? It’s kind of like that with AI too. And I think that we have to be patient with ourselves; we have to be patient with our employees, with our leadership teams.

And by the way, I think one of the important things about taking on AI and using it to help us be the best version of ourselves is that, you know, we have to be culturally ready for it. So even before, or while, you’re thinking about the aptitude and the appetite you have for the technology itself, and whether you have the right people and partners aligned with your business to take that on, you also need to be thinking about the culture of your company, and making sure that every employee is ready to embrace these new systems. Because we can build the best AI technology and the best AI systems, but if people don’t use them, because they don’t get adopted, because people aren’t ready, then what’s the use? We’ve just built something, and spent the money as a company to build something, that people won’t use.

Chris.L:                

I said before that people were telling me it was key to get you onto this podcast. And so I had a few people asking me to ask things, questions and all that sort of stuff. And one of the questions that one of those people gave me was this: is our motivation for AI aligned with the best long-term interest of humanity?

Priya Gore:        

Well, it depends on whose motivation you’re talking about. I think that is in itself quite subjective. I do think that, you know, there is definitely, again, going back to ethics and, you know, kind of fairness and transparency, and all of the things that we need to be thinking about. If those things are aligned first, then yeah, I think there’s a real strong promise for AI to be a huge game changer for making us the best version of who we are, the best company that we could ever be. But if we’re not putting that as a priority and as a first step, and again, sometimes it’s unintentional, there could be systems out there that are doing really bad things and actually are infringing on people’s livelihood, infringing on people’s privacy, infringing on people’s freedom. And I think that is a very dangerous and a very slippery slope.

And so I think it’s so important for anyone listening to really think long and hard. And you don’t have to have all the answers, and you don’t have to know everything about it, to sort of take this topic on. It’s about researching, it’s about talking to people, it’s about getting advice from your partners. And companies like Microsoft are here to help you work through that. Because it is complex, and there are a lot of variables, but if we don’t address it, we run the risk of even unintentional, really negative consequences of these systems.

Chris.L:                

So in that regard, you mentioned transparency before. Do you think there’s a need to find a balance between protecting the intellectual property of the companies who create the systems and being transparent about the processes for how decisions are being made?

Priya Gore:        

Yeah. So I think you should always understand how the technology you’re using as a sort of foundational building block operates. So if you’re going to bring a building block from a platform into an AI system that you want to build, and then put your brand and name on it and call it your IP, you darn well better know what those base bits are all about. You should understand the terms and conditions of how that data is stored, you know, where it goes, who has access to it, how that all works. Again, none of these systems are anything without data. The data should be yours. If you’re building a system, it should be your data. You should feel it’s secure, that it’s in a place where you can protect the users of your system, and protect the rights of the people using it. And you should, as I said before, be able to explain that. This is a huge piece of, I think, innovation in general. Anytime you’re a creator of IP, and you’re building something that you’re then going to license and sell for people to use, you need to make sure that you really have a solid product that you can support, and also that you can explain sort of how it was built. People need to know. I mean, not the recipe-

Chris.L:                

Without giving away all the secrets.

Priya Gore:        

Without giving away the recipe and the secrets, exactly. I mean, that’s the competitive differentiator. But I think that if you’re a product manager, and you’re building something out, just… I would investigate and make sure you really understand those core bits, because, you know, it is a big part of what your brand will be. That’s the one thing I will say. Taking this in a little bit different direction, in the sense that when we think about AI systems from a user experience perspective, they’re extremely powerful.

So if you think about something even as simple as conversational AI and bots, they’re everywhere. Many of us as consumers are interacting with bots on a regular basis; most mobile applications that companies are deploying for us to use as consumers have an aspect of a bot associated with them. This is an excellent way to extend the company’s brand to meet people where they’re at, wherever it is they are. But at the same time, it can be a detriment if you don’t do it right. If you have a bad bot, something that’s not really interacting the right way with humans, you know, it could really damage your brand. So again, it’s about being super thoughtful, and being really thorough and methodical, about how you actually put these things out there. And what’s cool about it is, you know, using the cloud and using some of the building blocks, for example like we have in Azure, just by example, you can actually get these systems up and running in a matter of weeks, many times.

It’s pretty cool because you can sort of prototype things, you can try them out, you can even A/B test them with a pilot group, kind of get some reactions from consumers and users. And it will tell you a lot. And that’s the other thing: you have to be open to that feedback loop. If you’re not, then the system is probably not going to survive.

Chris.L:                

You talked about us interacting with bots. And I guess in some ways they almost take on a human form; sometimes you don’t realize it’s a bot. Hollywood’s all over that; they’ve got robots taking on the human form as well. So, in that sort of way, does that mean that these human-form robots also have rights?

Priya Gore:        

I don’t know. That’s a tough question. Do bots have rights? I don’t think so. Look, I mean, to me… again, this is the Priya Gore answer. I think that bots… no, they don’t have rights. They have responsibility. There’s a governance that needs to be there on behalf of the person, the IP owner of that bot solution, to deploy that thing responsibly, and it has to be, in my opinion, human-monitored. There has to be a feedback loop; there has to be a way for humans to constantly engage in that loop between the bot and the user of the bot, so that there’s a way to calibrate what the bot can handle and when it needs an off-ramp to a live human person. So, I mean, I don’t know about calling them rights. But to me, it’s about, again, going back to governance and responsible deployment. And it’s not a set-it-and-forget-it thing. And it shouldn’t be, in my opinion.

Chris.L:                

I like it. In my head, I’m thinking, you know, there’d be a movie. What was the one with Will Smith? Was it actually called AI?

Priya Gore:        

I think it was.

Chris.L:                

The way Hollywood just depicts them, you almost feel like they would have rights. And in doing some of the research and reading around this, there’s a lot of people that are, like, pro robot rights and all that sort of stuff.

Priya Gore:        

Yeah, and it’s interesting that you asked about the human rights, because, you know, what is confusing, I think, to people, especially if you don’t realize you’re engaging with a bot: a lot of times, obviously, bots are really smartly branded. They’re given names. And they’re sort of human-personified. And so, again, as I said many times, they become an extension of the IP owner’s brand. And so it becomes kind of a persona of its own. But I think that you really need to, in my opinion, again, be transparent, and be really authentic about the fact that, you know, this is a bot. And let’s be clear, when things like bots are deployed, most times, for many consumers… now, I’m sure there’ll be many of you out there listening that will be like, I can’t stand it when I’m engaging with a bot. I don’t ever want to; I wish I could just turn them all off.

But for many people, self service is expected. And so this becomes something where, if I have a couple of simple things I want to do with a vendor or with, you know, a company that I interact with as a consumer, whatever, I just don’t want to talk to people. Like, I just want to go do what I need to do, check on availability, or book a reservation, or do whatever I need to do, and not have to make a phone call or talk to anybody or wait on hold.

Chris.L:                

Yes, I was gonna say that-

Priya Gore:        

… serve that, and using bots is an excellent way to do that, and that’s amazing. But when it’s more complex, where we can maybe start with a few simple Q&A things back and forth, or let me do this, let me do that, and then you have a more complex concern or question that requires more human interaction, then that bot needs to equally be smart enough to off-ramp that to a human and get me help quickly. And also, by the way, with the really intelligent ones, and this is pretty mainstream, when those bots transfer to a human, all the transcript of what I’ve already done with the bot should be live in front of that person in real time, so that I don’t have to repeat myself. That’s a better customer experience. And that’s what we’re really after: meeting clients where they’re at and giving them what they need.
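The hand-off pattern Priya describes, a bot that keeps a transcript and escalates to a human with that context attached, can be sketched roughly like this. It is a toy illustration, not any vendor’s API; the intents, replies, and confidence threshold are all invented for the example.

```python
# Hypothetical sketch of bot-to-human off-ramping: the bot keeps a
# transcript, and when it can't handle a turn it escalates, passing
# the full history so the customer never has to repeat themselves.

SIMPLE_INTENTS = {"hours": "We're open 9-5.", "reservation": "Booked!"}

class SupportBot:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.transcript = []  # list of (speaker, text) pairs

    def classify(self, message: str):
        # Stand-in for a real intent model: keyword match or give up.
        for intent in SIMPLE_INTENTS:
            if intent in message.lower():
                return intent, 0.9
        return "unknown", 0.1

    def handle(self, message: str) -> str:
        self.transcript.append(("user", message))
        intent, confidence = self.classify(message)
        if confidence < self.threshold:
            return self.escalate()  # off-ramp to a live person
        reply = SIMPLE_INTENTS[intent]
        self.transcript.append(("bot", reply))
        return reply

    def escalate(self) -> str:
        # The agent receives the whole conversation so far, live.
        history = "; ".join(f"{who}: {text}" for who, text in self.transcript)
        return f"Transferring you to a person (context passed: {history})"

bot = SupportBot()
print(bot.handle("What are your hours?"))        # simple Q&A stays with the bot
print(bot.handle("I was double-charged, help"))  # complex issue off-ramps
```

A production bot would use a trained intent model and a real agent console, but the calibration knob Priya mentions is visible here as the confidence threshold that decides what the bot handles versus hands off.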

Chris.L:               

Yeah, absolutely, absolutely.

Priya Gore:        

AI for good.

Chris.L:                

Yeah. So we’re heading sort of down that human path as well. We spoke earlier about creating a gullible system, which is a very human characteristic. Could AI be developed to experience human emotion? Could AI maybe feel anxious, scared, lonely, or maybe even feel pain? And if it could, should we as humans feel bad for making it feel that way?

Priya Gore:        

Oh, my goodness. That’s a good question.

Chris.L:                

I’m loving this. This is great.

Priya Gore:        

This is an interesting one. I don’t know. I mean, here’s what I would say. With AI systems that are interacting with humans, when we want to personify these systems, what we’re seeing, just from a design perspective as well, is the aesthetics and sort of a sentiment, if you will, of these systems being taken into account. So for example, if I’m imagining a robot with maybe a touchscreen on the front of it greeting me at the front of a store, I want that to be a happy bot. I want that to be a bot that’s smiling at me and welcoming me with a nice tone of voice, giving me excitement to be in that store. So it’s taking on a persona. It’s being programmed with a persona, sure, with an intent of actually greeting people with a happy smile and maybe a tone to its voice that is uplifting, that welcomes me to the store. That can be very positive for consumers walking into the store.

Imagine if that bot actually helps me triage what I need done in that store more quickly. If I’m there for support, or I’m there to engage with a customer service rep that specializes in a certain thing, it queues me to that person so that I can get to them more quickly. That’s just a better experience. So I think we’re seeing that. The other thing… I’m going to reverse this a little bit.

Chris.L:                

Sure.

Priya Gore:        

So one of the other things I think AI is doing really well today, right now, with the developments and the technical capability we have, is being able to interpret sentiment. So when humans are interacting with these intelligent systems, being able to interpret the mood and the sentiment of the user at the interface, that is hugely powerful. So being able to really understand happy versus sad. Or even, when you think about some of the video analytics and the things we can do with vision, being able to keep people safe, right, with something like crowd management. If an area that’s dense with people we want to keep safe becomes really crowded in a certain spot, maybe we need more security over there. Or even simpler things like stadium logistics. Let’s take a positive thing, like going out to a baseball stadium, or any sports stadium, and actually watching a game.

When you go to a game, typically, when you go through the gates you may need to go get something to eat or, you know, grab a drink or hit a restroom. What if we could have AI engaging with me through a mobile app that can tell me where the closest restroom is with the shortest line? Or, as a regular, it knows my favorite thing to eat at the ballpark and tells me where the closest concession is with that particular thing I want. Being able to just get me where I want to go. So these are just examples, [inaudible 00:31:47] things off the top of my head, where I think sentiment analysis and computer vision, used in ways that engage these intelligent systems to really improve the experience, can be a really positive thing.
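The sentiment interpretation Priya mentions can be illustrated with a deliberately tiny sketch. Real systems use trained models or a cloud sentiment API; the word lists and scoring rule below are purely hypothetical, just to show the shape of classifying a user’s mood from text:

```python
# Illustrative keyword-lexicon sentiment scorer (hypothetical word lists).
POSITIVE = {"great", "happy", "love", "excellent", "thanks"}
NEGATIVE = {"angry", "sad", "terrible", "broken", "waiting"}

def sentiment(text: str) -> str:
    """Label a message positive/negative/neutral by counting lexicon hits."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A store-greeter bot or support queue could use a signal like this to route frustrated customers to a human faster.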

Chris.L:                

Yeah, nice. In the process of developing AI, does that give us a better understanding of who we are as a human race? I mean, by passing on our thoughts, our ideas, our biases, our lessons in life, are we also learning about ourselves and what makes us tick?

Priya Gore:        

We most definitely are. And the reason for this is that with most AI systems, what happens on the other end is they actually are an insight engine. So remember, it starts with data. You bring data through the intelligent system, and you’re surfacing insights, surfacing information that maybe we didn’t have access to before that now we can see, now we can look into. But then you have to close the loop, so you continue to train the model. How people react to that information, and what we do with it, actually helps us better inform the model, and then it better informs us as we continue to look at those insights. So having that closed loop is super important.

But at the end of the day, think about it almost like a channel. At the beginning, you’ve got really good data sources. These can be structured and unstructured data sources; it depends on the system you’re developing and what you’re looking to do with it. And by the way, I should say, and we all know this, but just a reminder: we have so much data at our fingertips, and it’s only going to continue to proliferate. So it’s really important that companies look at these systems and ask, how can we actually use the data we have to create better outcomes, better experiences, better customer service, and better, faster time to market for something we’re trying to do or engage with a customer on? I mean, it’s all there for us to leverage, but getting those insights, being able to learn from them, and then bringing that back through the system is really where the magic happens.
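The closed loop Priya describes, where what users actually do flows back in as new training signal, can be sketched in a few lines. The class name and the stand-in "model" (a simple majority-label predictor) are hypothetical, just to make the feedback loop concrete:

```python
# Hypothetical closed-loop sketch: feedback on served insights
# becomes training data for the next round of predictions.
class ClosedLoopModel:
    def __init__(self):
        self.training_data = []  # (features, label) pairs

    def predict(self, features):
        # Stand-in for a real model: return the most common label seen so far.
        if not self.training_data:
            return None
        labels = [label for _, label in self.training_data]
        return max(set(labels), key=labels.count)

    def record_feedback(self, features, true_label):
        # Closing the loop: what users actually did is fed back in,
        # so the model improves with every interaction.
        self.training_data.append((features, true_label))
```

In a production system the retraining step would be a real learning algorithm, but the loop structure — serve an insight, observe the reaction, fold it back in — is the same.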

Chris.L:                

So if we think of bots as being part of the workforce, I’ve seen diagrams where you draw the old org chart and next to some of the managers they’ll have a bot as part of the workforce. So what responsibilities do organizational leaders have to the human and artificial workforce? Like, do they need to, I don’t know, do a performance review on a bot every now and then?

Priya Gore:        

I think you should be doing performance reviews on bots every time you can. These are systems that need to be controlled, and they need to be maintained.

Chris.L:                

And if you talk about that feedback loop and feeding things back in, they’ll always be improving.

Priya Gore:        

Absolutely, absolutely. And I think the tools we have available for people to engage, not just with bots but with AI systems in general, to really understand what I’ll call the performance of that intelligent system, are really fantastic. So it makes managing through that easier than it was even a few years ago. But the other thing I’ll say, and I’m going to speak for a moment about robotic process automation and digital agents in that sense: RPA is not new. Maybe RPA in the cloud is a little newer and more progressive. But when you think about intelligent automation, RPA digital agents are a great way to take on what are almost menial tasks, and I hate to use that word, but honestly, that type of work is mundane, it’s so tiresome. People are hand-keying the same thing over and over, and there’s real potential for error, like having a number transposed or something.

So engaging something like digital agents and RPA into an intelligent system to take that work on and up-level people to more important stuff, that is huge ROI. That is a high-value opportunity for any of our customers looking at this type of ensemble of technology and innovation. And so I think that’s something people need to look at. And also just remember, machine learning and AI are not new; they’ve been around for decades. It’s the capability we have now, the compute, the ability to actually run models at the edge, both online and offline, and being able to engage with massive quantities of data like we’ve never had before. That’s the innovation. That’s what’s really exciting about this, and I think that’s what’s making it become something big and creating all the buzz.
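The hand-keying scenario Priya uses, where transposed digits creep in as people retype figures between systems, is exactly the kind of task RPA scripts take on. A minimal sketch, with hypothetical function and field names (real RPA platforms drive actual application UIs or APIs):

```python
# Hypothetical RPA-style task: instead of a person hand-keying invoice
# totals from one system into another, a script reads, validates, and
# writes them, flagging anything suspicious for human review.
def transfer_invoices(source: dict, destination: dict) -> list:
    """Copy invoice amounts; return the IDs that need a human to look at them."""
    errors = []
    for invoice_id, amount in source.items():
        if not isinstance(amount, (int, float)) or amount < 0:
            errors.append(invoice_id)  # don't guess; escalate to a person
            continue
        destination[invoice_id] = amount
    return errors
```

Note the design choice: the script never silently "fixes" bad data, it routes the exceptions to a human, which is the same human-in-the-loop pattern discussed later in the interview.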

Chris.L:                

I wish people could see your facial expression right now and the way your eyes are lighting up when you’re talking about this, because obviously you are very, very passionate about this. It’s fantastic to see. I think we probably need to wrap up shortly. But as we take into account all of the ethics, is there some kind of moral framework that should be guiding the creation of new AI systems that we need to keep in mind?

Priya Gore:        

I think there should be. Again, I think this is not something that has matured yet or really been brought out to the mainstream market, per se. I do think that several of the companies building a lot of this technology, like Microsoft, are providing customers with really good guidance on how to frame this for yourselves as a company. And I think that for any company looking at building or leveraging AI systems to engage their workforce and their employees, their own policies, ethics, and cultural choices need to be brought into that framework as well.

So I don’t think it will ever be something that sits on a shelf where you just grab it down and say, okay, we’re going to apply this. It will always have to have that company’s fingerprint and stamp on it. But I do think there are some really good guiding principles out there, and they continue to be evangelized and improved. And I think we have to analyze use cases. Before you build an AI system, really do a 360-degree pivot around that use case and think about it from all the different angles: the users of the system, the people who interpret the data from the system, everything that surrounds it. All of that needs to be looked at as part of making sure this really is a system that can be deployed ethically and meet the principles and standards that we believe in, and that hopefully our customers believe in, to ensure that we are deploying AI for good in production, AI systems that are really going to amplify human ingenuity rather than be damaging to society.

Chris.L:                

I said when I was coming to speak to you today that I thought I would be learning a lot, and I was right. Absolutely right. This has been fantastic. But I’ve got one final question for you. AI systems are made to help us with decision making. Having them make the final decision is, potentially, the next step in the process. But when will you be comfortable handing over the reins to AI 100%?

Priya Gore:        

Oh my goodness. I think about this a lot. I really do.

Chris.L:                

Do you think you’ve got a responsibility to your children to make sure you get this right?

Priya Gore:        

Yes.

Chris.L:                

They’ll be growing up with your decisions.

Priya Gore:        

I think it’s already happening a little bit. It depends on the severity of the intent of the system. So for example, if I’m looking at a video subscription like Netflix, and it’s making decisions for me based on what I’ve watched in the past, recommending shows to me, that’s going to do a much better job than me just combing through 8,000 options. So systems like that, I think, are fine. I mean, I think that’s harmless.

It’s pretty low risk in terms of, you know, it recommending me the right or wrong thing. But if we’re talking about a system in healthcare and life sciences, or something where there’s an outcome that, you know-

Chris.L:                

A life or death sort of thing.

Priya Gore:        

… a life or death type of thing for a human, I think that will forever, in my mind, need to be something that a human expert is also helping validate. Now, that’s the point, though. I think the systems will get, and in some cases already are, to a place, and I’m going more general for a minute, beyond health and life sciences to industries broadly, where they really, really do help us make better decisions as experts in the specific domain we’re looking at. But it still requires a human for those life or death or really important calls. Sometimes it’s a decision between A and B with a really big financial impact, or it could be anything where the risks are high.

But even when the risks are low, it almost comes down to consumer tolerance. Like, do we really want to comb through 8,000 choices? Or do we want a system to help curate a better version of something for me? Personalization comes into play too in that realm: the price I’m willing to pay as a consumer for personalization, and maybe to see stuff I would never have found on my own with the help of AI and an algorithm, versus having to spend hours combing through all this stuff and never getting to that cool thing I wouldn’t have found without it. I think that’s a great place for AI to be helping us.

But again, those life and death decisions are more critical. In my view, I really believe there should always be human intervention and human validation associated with that, and multiple humans. I don’t think it’s about one person just glancing and saying, yeah, that looks great.

Again, going back to even just model bias: AI in general is a domain that should be very open and have a lot of space for multiple points of view. Because the more inclusive you can make these systems in helping guide those decisions, the better outcomes I think you’re going to have.

Chris.L:                

I’m pretty glad to hear you say that as a man in his 40s who, probably sometime in the next 20 years, is going to need some sort of an operation on my dodgy hip or something like that. I’m glad that there’s going to be a human behind those decisions.

Priya Gore:        

I hope so. I really hope so.

Chris.L:                

I like it. Priya, thank you so much for joining me today. This has been absolutely wonderful. I can just see you are so passionate about everything you’re doing with Microsoft, and you are an absolute wealth of information. And I’ll say it again, I love your title of being a Black Belt. That’s awesome.

Priya Gore:        

Thank you so much for having me. I really appreciate it. And I’m really excited to have had the chance to talk with you today.

Chris.L:                

I appreciate your support. Thank you very much.
