As part of our AI For Growth executive education series, we interview top executives at leading global companies who have successfully applied AI to grow their enterprises. Today, we sit down with Dr. Rob F. Walker, VP of Decisioning & Analytics at Pegasystems, a technology leader in cloud software for customer engagement and operational excellence.
Dr. Rob F. Walker is one of the world’s foremost experts in omni-channel customer engagement and front-office decisioning. He’s responsible for the strategic direction and market share of the decision technology and related applications that power Pega’s enterprise solutions for customer relationship management (CRM) and business process management (BPM). In this interview, he shares how to deliver value to customers while preserving the safety and privacy of their data in the challenging climate of regulations like GDPR and scandals like Cambridge Analytica.
Read the interview to learn:
- How Pega identifies the “next best action” to use for every customer engagement across any channel.
- Challenges that GDPR poses to enterprises using AI & machine learning.
- Methods for increasing transparency and control of algorithmic processes while reducing bias.
Mariya Yao: Hi everyone, thank you for tuning in to our AI for Growth series. In this interview series, we talk with leading executives at global companies to find out how they’ve applied artificial intelligence at their own companies, and what their biggest lessons and takeaways are. So today, I am with Dr. Rob Walker, who is the VP of Decisioning and Analytics at Pegasystems. Dr. Rob, tell us a bit about yourself and how you got into AI.
Rob Walker: Thank you for having me. I got into AI, I think, really early on. Call me psychic, but…
MY: Before it was cool, right?
RW: …But I did my PhD in the 90s, so that clearly dates me a lot, but even then, AI was incredibly attractive. At the time, I think AI was still trying to beat Garry Kasparov, maybe.
MY: I think it was 1997.
RW: Yes, yes, so I wasn’t before that. But AI was very different then. We were playing around with neural networks and expert systems, and it was really fascinating to me, and I’ve basically not done anything else since except make it more practical for business.
MY: Speaking of being practical for businesses, can you give us an overview of what you do at Pega, and how you’re using AI to drive business ROI for the company and your customers?
RW: At Pega, I’m the VP for Decisioning and Analytics, and AI falls into that space. What we do around AI, in summary, is try to optimize customer moments. I think that’s probably the most concise way of putting it. Independent of channel, we try to optimize any customer moment, and we do that in two ways at Pega. We use a lot of AI to make what we call “next best action” decisions—that’s where all the AI comes in—and at the same time we also have automation, process automation, to make good on all of those decisions. But where the AI comes in mostly is in relevance for customers.
MY: Going back a moment, what exactly is a customer moment? Is it just any customer engagement or is it something quite specific? And what kind of data do you look at in order to predict and prioritize what you believe is the “next best action”? What are the kinds of “next best actions” that you do suggest?
RW: The “moment” is any engagement in any channel. The reason we call it “moments” is because talking about “channels” is very inside-out kind of thinking. It’s really about all the moments a customer has with the company they’re working with, and then trying to create the optimal customer experience, to be relevant and to basically take the next best action.
The data that we use for that is the whole historic profile, so typically the companies we work with have a lot of data on record. And then there is the context, the immediate context, which is important. What is this customer doing right now on the website, what did she click, browse, or star, or what did she do in the contact center? Any channel, really, so that’s why we call it “moments.” And we’re trying to decide on the next best action, so we basically use a lot of AI to get us to the evidence and the insights, and then we use economic decisions, business decisions, to finally prioritize what’s being done.
MY: That makes it a lot clearer. So let’s make it even clearer. Do you have any specific customer case studies where you can walk through “here’s the type of data that came in,” “here was the customer’s goal,” “here’s how they worked with Pega,” the type of decisions you were able to suggest, and by suggesting them, the kind of business ROI you were able to achieve for your customer?
RW: Sure. Let me give you a few examples from a couple of industries. One of the things that we would typically do is, first of all, use AI to “follow the money,” not our money but our customers’ money. For instance, for a very large telecommunications company in the US, one of the top three, we would be particularly interested in reducing churn. They were bleeding a lot of customers, and there’s a lot of competition. In their case, initially, AI and business rules and decisioning were used to look at a particular customer, decide on the risk of that customer leaving in the near future, but also to calculate the budget for retaining that customer, to make sure that we right-sized the effort. So with all of those sorts of insights, AI was being used, and then, when a customer was likely to churn, we would proactively decide on the next best action and try to convince them—again, with AI—to stay with that company.
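To make the “right-sizing” idea above concrete, here is a minimal illustrative sketch (not Pega’s actual implementation) of how a predicted churn risk and a predicted customer value might be combined into a per-customer retention budget. The function name, the spend fraction, and the cap are assumptions for illustration only.

```python
# Illustrative sketch only -- not Pega's implementation.
# Assumes churn probability and customer value come from upstream models.

def retention_budget(churn_probability: float,
                     predicted_lifetime_value: float,
                     spend_fraction: float = 0.1,
                     max_budget: float = 500.0) -> float:
    """Right-size the retention offer: spend in proportion to the
    expected value at risk, capped at a business-defined maximum."""
    expected_value_at_risk = churn_probability * predicted_lifetime_value
    return min(spend_fraction * expected_value_at_risk, max_budget)

# Example: a 60% churn risk on a customer worth $3,000 yields a $180 budget.
print(retention_budget(0.6, 3000.0))
```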
MY: In this particular example, you probably had many next best actions that could be used for a customer. From this particular case study, what did you find was the most useful or the most effective in terms of retaining some of these customers who were about to churn?
RW: The thing is, and this is actually the trick, this is where AI comes in. There are no generic statements about that, because it’s completely one-to-one. It completely depends. Maybe the AI has figured out that it’s because you’re just not always connected…you have dropped calls and it’s very annoying, especially because you have them at home or at work, places where you frequently make calls. Or maybe it’s price in your particular case. Or there’s a phone model that this company doesn’t support, or doesn’t support immediately. It can be all sorts of things. We’re trying to figure that out and then make you a completely personalized offer, also based on the actual value we predict you will have to the company in the future. So it’s very, very personalized, both from a cost perspective and from a relevance perspective.
MY: How does the extreme personalization affect the business result? Before, this telecommunications company…who knows what they were using? After using Pega, what has been the change in terms of the effectiveness of these anti-churn measures?
RW: Yes, they are pretty extreme, I would say. For this particular telecommunications company, it really changed their whole business outcome, and even part of the value of that company, because churn in telecommunications is a really big thing. If you’re bleeding customers, that’s an issue. If you can retain them, and retain them at the right cost, that’s really a big deal. We were seeing something like an 18% increase in NPS, so that’s really cool stuff. Equally, we would look at things like 40-50% churn reduction, which is huge, so this is not small change. That’s a material change to the business process.
MY: So other than customer churn, are there other business use cases that you tackle with AI, and can you go into a specific use case for that?
RW: So in communications, we typically follow the money, and churn is a big thing, but then, after that sort of initial success, AI improves the confidence factor, not just for the company, but also for the people who are touched by the AI. For instance, agents, or people in the retail shops, have to trust these sorts of AI recommendations. But once that’s established and the company is confident about it, you will then typically add other business issues, like cross-sell, upsell, or risk mitigation. Sometimes a company might have trouble getting their money back, right, people are not paying the bills. Maybe in banking, you’re missing payments on a loan or your mortgage, whatever it is. Now, first of all, we want to be proactive and see that coming, and make sure we act on it, so we do that for the banks, but if it’s already too late, what’s actually the best way to get your money back? How can we negotiate, supported by AI, how can we negotiate a promise to pay with the customer, and how can we make that effective? So that’s another business case. Typically we see that 80% of customer engagement use cases are retention, cross-sell, upsell, or risk mitigation in some form or other.
MY: A lot of the power of the AI comes from the really rich customer data that you have and that your customers have. As we’ve seen from Facebook and the Cambridge Analytica scandal, some of this customer data can be extremely valuable, and it can be very risky if you don’t protect it well. What are some of the things that worry you, or that keep you up at night, when it comes to customer data: privacy, security, and transparency?
RW: So I think that’s almost an underestimated area…it’s hard to believe given all the information we’re getting about that kind of hack. But I think it’s really important for companies to understand that, first of all, data is only part of the equation. Even if you have very limited data, with the sort of advanced AI that we have now, it’s still possible to infer a lot of things. The Cambridge Analytica example is one of them. People answer a few questions…now obviously Facebook does have a lot of data on you, but it’s basically only looking at likes and then predicting all sorts of things. Your sexual orientation, your political affiliation…obviously things like gender and age. So it may not even be…and I think this is the warning that I would put out there…you don’t have to have the data in your customer records for AI to infer it and use it as if it’s true, and it probably will be true. That is the sort of thing that I would worry about a little bit. There’s also another side to that, which we may want to talk about a little later, but the challenge here is that AI is very capable of inferring stuff that may not be politically correct, and then you can use it in your marketing or in your risk management strategies without even being aware of it. Especially as AI gets more opaque, like all the stuff we’ve been seeing about deep learning and those kinds of algorithms that are really fancy—I love them, right, that’s why I did a PhD in AI—but it can make decisions that are very intractable, and if they are not politically correct, the company that’s using it may have a big issue.
MY: I love that you brought up the fact that sometimes the customer isn’t even giving you their data, but from the small amount that you know about them, you’re inferring other features about them. Now, GDPR is a really big deal now, and I’m really curious: how has this new regulation affected how you build AI systems, especially because these AI systems can infer a lot of information about customers that they did not voluntarily give you?
RW: Yes, well, or even if they did give it to you voluntarily, it may infer things that you never thought about. You’re thinking, oh, this is about getting me better service, but, you know, who knows. So there are all sorts of things.
I think GDPR is making a lot of difference, especially in the conception around what AI can do with data, so it’s asking a lot of good questions. Where is your customer data? It’s not that obvious; it’s in a million systems, well, thousands of systems. So that’s one part of it. Then there’s the immediate effect for companies that have to be compliant with GDPR, which is not all companies but a lot of companies, and I’m sure GDPR will come to a continent near everyone, so in the end this is probably the standard on customer data and customer privacy; it will be a global thing in one form or another.

It’s not just about the data, it’s also about the algorithm that you’re using. One of the clauses in GDPR concerns any decision that carries legal significance. Who knows exactly what legal significance is; lawsuits will make that clear, but it’s definitely a loan decision or a mortgage decision. If a decision carries legal significance, there is a requirement for companies to be able to explain it, and that puts a burden on AI that I think is a double-edged sword. On the good side, it means that companies need to control the transparency of their algorithms and be aware that some of these algorithms are intractable and potentially opaque, so they have to make sure that opaque algorithms are used only where that’s acceptable, say in marketing or for retention, but not in risk management or lending or those types of decisions. The other thing, and this is an argument that I don’t hear a lot, is that GDPR with these constraints also does a disservice to customers, because the actual decisions that companies are making will become less accurate. Here’s an example: if you try to get a loan for $50,000 and I insist, as GDPR would, that all my decisioning around it, the rules and the algorithms, is completely transparent, I’ll actually make more mistakes. So I will give loans to people who will not be able to repay them, and as we saw in 2007 and 2008, that’s not a great story, and it will happen. So it’s an interesting debate about how much transparency you should insist on, and where to insist on it.
MY: I would like to know how this has affected Pega specifically, because you do touch so much customer data, and a lot of this customer data does touch something that you could call a legal decision. Can you speak specifically to some of the changes that you’ve had to make, or maybe even some of the compromises that you’ve had to make, in order to ensure transparency and compliance with GDPR?
RW: We had to do two main things. One is the easy GDPR requirements that everyone knows about: the right to be forgotten, the right to query the profile. That meant we had to make changes in our software to make all of that possible. If a customer says, “I don’t want you to keep my data, especially the data that is not completely relevant to the way you do business,” then they want to see it, and if they don’t like it, they want it erased. So our own products that touch customer data had to be compliant, so that part of their APIs makes that possible. The other thing is about the compromise that you mention. Especially because we’re pretty hot on AI, we think it’s a really important thing, and it drives a lot of the return on these kinds of engagements, so we wanted to make sure that companies could be more in control of it. Pretty much everybody is trying to explain the decision process, but if you’re using opaque algorithms like deep learning, that’s inherently impossible, and it will get a lot worse…or genetic algorithms, all of this fancy stuff that is really cool but may have a problem explaining how it got to a decision. So we implemented something we call the T-Switch, and the T is for Transparency, but it’s also for Trust. It’s not actually a switch, because a switch is binary.
MY: You should call it the T-Squared Switch to make sure you have the Trust and Transparency both covered.
RW: Yes, yes, it is about transparency, and we believe that is what engenders trust, but it is actually a dial, so you can dial it up and down. A company may want to be completely transparent if it’s a bank or anything to do with risk, whereas in marketing, they may want to allow a little more opacity, but not too much, because they may want to make sure that there’s no weird advertising, external websites, or things like that.
MY: So when I dial this Trust Dial, what actually happens? Are you switching over, for example, from…if you’re trying to use it for a legal use case or a risk management use case, do you stop using neural networks and start using more transparent algorithms like decision trees, or…what is actually happening when I, as one of your customers, decide I have to be more transparent in order to be compliant, or in order to deliver better value?
RW: Assume you have this customer strategy sitting in the middle of the organization, basically trying to optimize all these moments that we talked about, all the interactions with all of the customers; it’s a centralized type of thing. Now, that strategy, or parts of that strategy, are all tagged according to a business taxonomy, so we know what part of the strategy is for marketing, what part is for the collections process, or for lending. So we know exactly how that works. And then if you change the dial, you will get compliance warnings. It would say, hey, you’re using a neural network here to optimize your lending process, and that’s not really transparent, so it will give you a warning. It will also actually block it from execution, so the decision engine itself will not execute that strategy; it will just fail to execute. And then it can get very clever in mixing and matching, because, not to get too technical on anyone here, if you have a transparent model and an opaque model, the only challenge you have is where they disagree. If they agree on something, you might as well use the transparent model. And then for the 5% of customers where they disagree, you have to choose which algorithm to use for that particular use case.
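As a rough illustration of the mixing-and-matching idea described above, here is a minimal sketch (not Pega’s implementation) of scoring with a transparent and an opaque model, keeping the explainable score where the two agree and flagging disagreements for an explicit policy choice. The model classes and the agreement tolerance are assumptions.

```python
# Illustrative sketch of "use the transparent model where the models agree";
# model choices and the agreement tolerance are assumptions, not Pega's design.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

def route_scores(X_train, y_train, X_new, tolerance=0.1):
    transparent = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    opaque = GradientBoostingClassifier().fit(X_train, y_train)

    p_transparent = transparent.predict_proba(X_new)[:, 1]
    p_opaque = opaque.predict_proba(X_new)[:, 1]

    # Where the two models roughly agree, keep the explainable score.
    agree = np.abs(p_transparent - p_opaque) <= tolerance
    scores = np.where(agree, p_transparent, np.nan)  # NaN = needs a policy decision

    # Disagreements are surfaced so the business can decide, per use case,
    # whether the opaque score is acceptable at the chosen transparency level.
    return scores, ~agree
```

Under this sketch, a risk or lending use case would presumably fall back to the transparent score on disagreements, while a marketing use case might be allowed to use the opaque one.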
MY: I’m really glad you brought this up, because what you actually do is not easy. Everyone wants to reduce customer churn and increase customer retention, everyone loves to upsell and cross-sell, and everyone wants to make sure they’re compliant. That’s easy to say but not easy to do. So my question would be: what are the top three pieces of technical advice you would give to executives who are handling sensitive customer data and trying to stay compliant while delivering value? For example, you just offered one, which is that if the transparent and the opaque algorithms agree, you don’t have a problem. So then, how do you handle cases where they disagree? I’m looking for that kind of technical advice that you think is really important for executives to know about and to master before they can successfully do what you’re doing.
RW: There are a few things. One of them is, as you said, that switch, or a similar mechanism, will have to be in place. You will need a way to assess all the algorithms that are used, and you don’t want to be naïve about it; you can’t just say, corporate-wide we’re only going to use decision trees. That would hurt your competitiveness and would not really be sustainable, so it has to be really granular, so you can change the dial or the switch per use case. I think it’s important to have that kind of mechanism in your organization. The other thing you should institutionalize is bias tests. Even if you have the switch, even if it is transparent, but especially if you are allowing opaque algorithms—as more and more companies will do—you need to be able to test, almost like another quality test, where you do your unit testing on how strategies work and what business outcomes they are achieving. You will also need to look at this and say, hey, this is actually a weird bias for a particular gender or race. That is also not completely trivial to do, but it is something I would really institutionalize anywhere. Even if you don’t use particularly advanced AI but just simpler predictive models, I think that is a very good practice to follow. And then the third thing is about data: don’t look at just the data. I think I’ve made this point before, but the data may be completely innocent, and forget about that—the AI can look at completely innocent data and infer all sorts of stuff you don’t actually want to infer and use during customer engagement. Data does not tell the whole story. Years ago, doing consultancy around this, people would say, “Oh well, we never actually have that in our database, and we never feed it to our algorithms.” That’s not enough anymore.
MY: Going back to this idea. You mentioned all of these things, and they’re not easy to do. Detecting bias in systems is not easy to do; making sure your inferences are accurate is not easy to do. What have you found to be some of the most difficult aspects, and how have you solved those problems?
RW: We believe that, first of all, for AI to really proliferate, it needs to be accessible, not just to data scientists and people with a PhD in AI and that kind of profile. It needs to be accessible to the business, and that makes it even harder, because the business is not going to inspect a model and say, “Oh, this is a weird ensemble model” or “a weird random forest model.” They will look at the business result. So we put a lot of effort into making that simpler, to make sure that the business can be in control, marketers can be in control, risk managers can be in control, and they still have control over their AI algorithms. That’s not a trivial thing to do. For instance, we try to make it easy by having these bias tests as part of our quality assurance methodology. You just put in your bias test for the stuff that you’re worried about, and it will be automatically tested before you put your strategies into action.
MY: How does a company create a bias test? Because there’s a lot of controversy around what bias even is. For example, what’s statistical bias versus social bias? What counts as unfair or fair? It’s a loaded question. So I’m curious, when you’re doing these bias tests, how do you create a good or valid bias test?
RW: From our perspective, we don’t have to be the bias police. We are providing the tools to set up the bias test in whatever way is acceptable and is part of the compliance regime you need to operate under. In general, there are two different ways of doing it. One is the trivial way where, for instance, let’s use gender or age: you have your customer profile, you’re making your decisions, and the decision may be around retention or risk or whatever it is, and you look to see if there is an outcome that is particularly skewed versus your normal population. Then you can see if that skew is significant, and you look at the algorithm to see why it’s doing that. If the algorithm can explain itself, there may be actual reasons that are valid. So that’s your first kind of test. Where it becomes more difficult is if you are looking for bias on data or an outcome that you do not actually have on record. For instance, maybe you don’t have race in your data, but for some reason your opaque AI is inferring it from all sorts of other stuff and has produced an unsavory algorithm or strategy. I think the only way out of that is to have a panel of your customers who opt in to this, whom you may even pay for their services, and ask them these questions, so you actually have the sensitive data on record for just this sample. A small sample is enough, and that’s what you would use in your ethical or bias tests. I don’t think there’s another way of doing that kind of thing.
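As a rough sketch of the first, “trivial” kind of bias test described above (comparing how a decision’s outcomes are distributed across a protected attribute and flagging significant skew), here is a minimal illustrative example. The data layout, the chi-squared test, and the significance level are assumptions, not Pega’s methodology.

```python
# Illustrative bias test: flag a decision whose outcomes are significantly
# skewed across a protected attribute. Layout and alpha are assumptions.
import pandas as pd
from scipy.stats import chi2_contingency

def outcome_skew_detected(df: pd.DataFrame, group_col: str,
                          outcome_col: str, alpha: float = 0.01) -> bool:
    """True if the outcome distribution differs significantly across groups."""
    contingency = pd.crosstab(df[group_col], df[outcome_col])
    _, p_value, _, _ = chi2_contingency(contingency)
    return p_value < alpha

# Toy example: a retention offer that is only ever made to one gender.
decisions = pd.DataFrame({
    "gender": ["F", "M"] * 200,
    "offer_made": [1, 0] * 200,  # deliberately skewed toy data
})
print(outcome_skew_detected(decisions, "gender", "offer_made"))  # True -> investigate
```

A failing test like this would then prompt the second step Dr. Walker mentions: inspecting the algorithm to see whether the skew has a valid, explainable cause.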
MY: Just as a side question: we’ve already talked about how AI can make these systems more difficult to build, because it infers so many features about your customers that they may not explicitly tell you. So the data may be innocent, but the inferences may not be. Are there ways that AI can actually make achieving privacy, security, and transparency easier?
RW: Mostly it makes it harder, although there’s obviously a big upside, so I’m not saying that it’s a bad thing. In most of these aspects, AI is a bit of a challenge, but it’s a challenge we can control, and we’ve explained how. Where it can help, though, is to actually look at the data and not insist on keeping as much of it. Companies really keep way more data than they probably need to or should. It’s very popular now to have big data and potentially listen to every post and tweet and all of these things. Some of that may be important, but I think AI can really help in determining what you actually shouldn’t be looking at, what doesn’t matter enough to make the investment or to risk being out of compliance.
MY: All right, makes sense. Final question, what’s your vision of the future when it comes to customer data management and the kind of customer engagement optimization that you do at Pega? What are you really excited about?
RW: Those are two different questions. On the optimization front, I think where this is going, and where AI will become a big part of it, is that instead of determining relevance and optimizing individual decisions, the name of the game will become optimizing the actual business model. You can only do that if a lot of it is centralized and made explicit, but then you can have AI that would say things like, “Oh, if you did a little more of that and a little less of this, and if you put it on that stream and spent more budget here, these results would actually go up.” That is an interesting aspect of where AI is going.
I think on the customer data side, I’m also expecting some pretty radical advances. I think GDPR makes it easier here, because another GDPR requirement is that everyone can go to a company and insist that it sends them their data. That means a lot of tech companies could basically bootstrap off of that data; they can just say, “Oh, let’s have a look at your banking data and do stuff with it.” I think eventually this may lead to a world where customers themselves will actually own a lot of their data and essentially rent it back to the companies that need access. There will probably be some kind of blockchain-type mechanism to make that feasible. I don’t see that happening immediately, but that’s probably where it’s going. And actually, that would solve a lot of compliance issues, so companies themselves may not mind that much, or at all.
MY: That’s definitely the dream, customers being able to own their data, rather than their data being owned and monetized by large corporations. How far out do you think that might be? I know you said it’s not immediate, but what’s your dream goal in terms of timeline?
RW: I don’t think it is that far out. And I think GDPR really helps, because one of the impediments was that companies would say, no, this is my data, and nobody would opt in to do this. Now that they don’t have a choice, it becomes a lot more feasible. The other challenge is obviously scale. If you want to do this on a blockchain, and I’m not necessarily saying that’s the case, but I can see it going that way, it needs to really scale, because this would be every single customer, it would keep all of their records, and it would go way beyond marketing or risk. It may be your genetic data, it may be your medical data, it could be all sorts of things. But I think we will see the contours of that probably within the next two to five years. I don’t see it happening before that.
MY: All right, so five years from now, I’m going to check in with you, Dr. Rob. We’ll see how we are progressing on this customer-owned data utopia. We’ll see how it goes.
RW: Exactly.
MY: Thank you so much for coming on the AI for Growth series today. We learned so much from you. Thank you so much for your time!
RW: Thank you for having me!