Agile Podcast: AI Live and Unbiased

Ep. 6

AI Podcast Ep. 6: Causality and Artificial Intelligence with Arni Steingrimsson

AI Podcast

Episode Description

Dr. Jerry Smith welcomes you to another episode of AI Live and Unbiased to explore the breadth and depth of Artificial Intelligence and to encourage you to change the world, not just observe it!

Dr. Jerry is joined today by Arni Steingrimsson, a senior-level data scientist and machine learning and artificial intelligence expert who leads data science in the U.S. and Mexico and comes from the biomedical field. Arni and Dr. Jerry discuss causality and the crucial role it plays in the AI space.

Key Takeaways

  • What is Causality? Why is it important to Artificial Intelligence? 

    • Causality is what is causing the outcome. From a data perspective, certain features will be causal to the outcome, but if you change features that are merely correlated with it, there is no guarantee that you change the outcome

    • Defining causality is less important than knowing what it is capable of

    • Granger causality is defined as a statistical dependence: X Granger-causes Y if Y can be explained better with X than without X

    • Judea Pearl proposes three levels of causality: Association, Intervention, and Counterfactual

  • Why it is important to actually know the cause of something? 

    • Business leaders and people who want to be ahead need to know how they can influence their decisions and make change; that is why knowing causality is crucial

  • X can statistically explain Y even though changing X does not really change whether Y occurs or not; counterfactual causality captures that distinction

    • What are counterfactuals? They are a comparison of different states in the same world. But how do you quantitatively compute these two states? In statistics, it is done by controlling for variables

  • Simpson’s paradox: something observed at the aggregate level runs counter to what is observed at the subgroup level

    • Simpson’s paradox is often overlooked in industry

    • The study of data is an important part of the causality world

  • Using machine learning in the world of causality:

    • Some data scientists who never studied causality think that classical machine learning, isolating features and feature reduction, amounts to causality, but that is not the way of “changing the world”; you need to know why certain inputs changed and what caused the change

    • A reported driver is different from a causal driver

  • The application of Evolutionary principles in the AI world:

    • The predictors are the blocks that take the inputs that are causal; knowing the causal inputs, we create a machine learning model that tells us what will happen as a result of given inputs, but it does not tell us what we should set those inputs to

    • First, we figure out what is causal and build a model for that; then, once we have this model of the world, we tell people what conditions need to be set to get the best chance of achieving the outcome

    • What kinds of tools are used for evolutionary computing? Python and its DEAP library

  • What can be done after simulation? What is next?

    • After simulation, we need to take the inputs that represent causal drivers and put them into action in the field to monitor the change

    • If you want to improve your product, you need to put programs (such as marketing and sales efforts) out in the field and collect data on them: how are they improving, and what are the changes?

Transcript [This transcript is auto-generated and may not be completely accurate in its depiction of the English language or rules of grammar.]

Intro: [00:04] You’re listening to AI Live and Unbiased, the podcast where we explore how to apply artificial intelligence and the impact it has on enterprises and society. Now, here’s your host, Dr. Jerry Smith.

Dr. Jerry Smith: [00:20] Hello everyone. I’m pleased to have with us today as a guest Arni. Arni runs our data science, machine learning, and artificial intelligence here in the US and Mexico. He’s a very senior-level data scientist, a pretty bright guy. He comes from the biomedical field, like I do, the healthcare area, as I do. He’s a real pleasure to work with. He and I are going to talk today about causality and the role that it plays in this space. Welcome to the podcast, Arni.

Arni: [00:52] Thank you. Excited to be here.

Dr. Jerry Smith: [00:54] So tell me a little bit about, first of all, what is causality and why is it important to artificial intelligence?

Arni: [01:00] Well, causality is, well, we’ve talked about this several times in the past. It’s hard to define it. There are many ways. One is kind of defined by Granger causality, the classical statistical method. But when you and I talk about it as it relates to AI and machine learning, we usually go with Judea Pearl’s methods, the mathematical representation of causality he has come up with: if you have an outcome, what is actually causing that outcome. So if you look from a data perspective, in machine learning you usually have features and you maybe try to fit a model to them. And usually you’d bring in all the features, but if certain features are causal to that outcome, and you try to change features that are just correlated with it, there’s no guarantee that you’re actually changing that outcome by changing those features. That’s, I guess, the simplest way I could think of it, but it’s much more complicated when we go into the theory behind it.

Dr. Jerry Smith: [02:26] Yeah, you’re actually right. So if you think about Judea Pearl, he actually struggles with this too, right? When people ask Judea, they say, you know, could you define causality? And he actually punts a lot. He’ll say, well, defining it is less important than understanding what it’s capable of. Right? And we don’t ever want to get into one of these ideological wars around the meaning of something, because it loses the value. But you actually bring up two really important characteristics. You brought up Granger and you brought up Judea’s interpretation of causality. Granger does interpret it as a sort of statistical dependence, right? Granger says that X is statistically causal to Y if Y can be explained better with X than without X. I mean, that’s fundamentally a Granger approach, and in Judea Pearl’s world that would be considered within his three levels of causality. Remember those rungs that we talk about? That would be the association level. Totally interesting. Very important. We can get that. Granger is actually on that level one side, right? The association side. What are the other two levels? If you believe in a Judea Pearl point of view around causality, what are his other two levels?

Arni: [03:50] So we’ve got intervention as one, but then at the top level is the counterfactual.

Dr. Jerry Smith: [03:58] Exactly right. So, entering into all this stuff is this sort of associative level, the Granger level, you know. X explains Y if Y can be explained better with X versus without X, right? The probability of Y given X is better than just the probability of Y. So there’s this fundamental architecture, and then you get into intervention, which is what doing X tells us about Y. That’s really important. And then the counterfactual says, did X actually cause Y? Did it cause Y? And why is that important to you when you think about playing with causality in the world of developing enterprise applications? Why is it important to know the actual cause of something? Can I just report on things? Isn’t that good enough?

Arni: [04:51] Well, I guess, well, I think in industry today, a lot of companies are still there. They still believe that, or they probably just don’t know better. But to you and I, and people that want to be ahead, and for business leaders, it’s not enough. Well, if they don’t know better, then they don’t know better. But if they knew that they could actually influence their decisions, that they could make a change, they would absolutely want that.

Dr. Jerry Smith: [05:28] That’s where I think I’ve heard you talk about this a lot, especially with some of our clients, in that the thing you’ve told me is that, you know, reporting on the world is okay. You know, if you want to know how many beans you’re going to have tomorrow in your bean distribution system, that’s great. But you just pointed out the biggest thing there. If you want to change something, if you want to control your future, change your future, you need to know the causal pieces of it. Right? And that’s the piece that we talk a lot about. Granger’s causality is a great classical measure; it tells us that Y can be explained better if X is included, but just because X was included doesn’t mean that we can control Y, and that’s the counterfactual world.
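The Granger criterion the hosts keep returning to, Y is explained better with X than without X, can be sketched numerically. The snippet below is an illustrative toy check on synthetic data, not a full Granger test (which would use F-statistics and multiple lags); all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: y depends on the previous value of x (plus noise),
# so including lagged x should shrink the prediction error for y.
n = 500
x = rng.normal(size=n)
y = np.empty(n)
y[0] = rng.normal()
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def residual_var(target, predictors):
    """Least-squares fit; return the variance of the residuals."""
    A = np.column_stack(predictors + [np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

# Restricted model: y explained only by its own past.
v_without = residual_var(y[1:], [y[:-1]])
# Full model: y explained by its own past AND lagged x.
v_with = residual_var(y[1:], [y[:-1], x[:-1]])

# x "Granger-causes" y here: the error drops when lagged x is included.
print(v_with < v_without)
```

As the shoe example later in the conversation shows, passing this kind of check only puts X at Pearl's association rung; it says nothing about whether intervening on X would change Y.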

Dr. Jerry Smith: [06:16] And here’s my example. I don’t know if you have an example; think about your example. My example in this space is I can explain the weather by looking at the color of my shoes, right? But the color of my shoes didn’t cause the weather, right? When I wear black shoes, galoshes on the outside of my leather shoes, I know that’s old school, people don’t do that anymore. But when I wear galoshes on the outside of my shoes, I can guarantee you it’s raining. So X can better explain Y, the rain; my black shoes can better explain the rain, and rain can be better predicted by taking X into account. But you can’t sit there and say, hey, Dr. Jerry, can you make it rain tomorrow? Can you wear your black shoes? It just doesn’t work that way. Do you have any examples in your life that you use when you think of causality like that? Let me lead the witness here. Fishing. A lot of folks don’t know this about Arni. Arni actually worked on one of the largest fishing ships out there, out of Iceland. When you think about harvesting fish, right, and stuff like that.

Arni: [07:26] Well, let me just give you some background on this. So these commercial fish and ships are,

Dr. Jerry Smith: [07:32] Wait, I got it. We can do it really easier. Does your clothes cause the ship, does your clothes close? Because you wear certain clothes when you’re cleaning fish, right?

Arni: [07:46] Sure. Yeah.

Dr. Jerry Smith: [07:48] So if I were going to predict that you are cleaning fish, right? Your clothes, your clothes being X, would be a, now let’s just leave that. I’m not going to go down that route. And that’s a sort of a, I want you to think about that though, because you need to have your own story. I always use the shoe one all the time.

Arni: [08:08] Well, I was going to say, how can you predict that you’re going to get, I don’t know, a 10-ton catch that day? And then you can have it based on, I don’t know, the weather, you know. But it’s really not the weather; it’s the tension in the ocean, it’s the current, their feeding area that’s going to control it.

Dr. Jerry Smith: [08:34] Okay. Now you’ve hit on something really interesting. Right. Do you harvest fish in extreme weather conditions?

Arni: [08:41] Yep.

Dr. Jerry Smith: [08:42] You do? Like really bad seas. Oh wow

Arni: [08:45] One time it was close to a hundred-mile-per-hour wind speeds.

Dr. Jerry Smith: [08:52] Oh, wow. Okay. Well anyways, we won’t go down that route. Let’s get back to the causality train. So, back on track, that’s important. The difference between Granger’s and Judea’s understanding of causality is, you know, Granger is statistically defined, right? Probabilistically defined. Judea is more narratively defined in terms of its capability. So let’s just jump right up to the top of the rung and talk about the most important part of Judea Pearl’s causality, which is counterfactuals. Well, first, what do you think counterfactuals are and why do you think they’re important?

Arni: [09:40] Judea Pearl talks about medicine a lot, and I think that’s well fitted. I think the healthcare world is probably one of the areas where counterfactuals fit really well. And so if we think of a preventive kind of plan, you’re trying to predict if a medicine is going to work for that patient or not.

Dr. Jerry Smith: [10:09] So Judea talks about counterfactuals being a comparison of states in the same world, right? Comparison of different states in the same world. So I’m comparing a world where, and let’s use your medicine example, let’s use aspirin, right? Because that’s going to be Judea Pearl’s example. Did the aspirin prevent my headache, right? That’s his example. In Judea’s world, we would sit there and say, I have to look at you in the world where you didn’t take the aspirin. And then I have to look at you in the world where you did take the aspirin. Now, unfortunately that’s called a counterfactual world because those two worlds don’t exist unless you’re in the multiverse. Right. So once you take the aspirin, you’ve taken the aspirin, I can’t untake the aspirin and then look at you. So that’s the important, that’s a counterfactual world, which is a comparison of different states in the same world, which is counterfactual. What is the fact? The fact is Arni took the medicine. The counterfact is let’s go back into that world where Arni didn’t take the medicine. Well, it doesn’t happen. So that’s the process that we’re trying to get out of counterfactuals, because that now tells us. And that is, and we represent that as the do operator, right? The do operator, we do X in this particular case, which is really hard. So let’s go back to.

Arni: [11:35] On that point, you raised a great point: you can’t undo it if you already gave that person aspirin. But we also have numerous examples where you can’t treat a certain patient with a certain medicine, or, in a different field, you can’t throw an atomic bomb somewhere. So there are scenarios where you couldn’t just try things out. And the counterfactual is actually well fitted for scenarios where you can’t just undo it, or you can’t actually physically do it.

Dr. Jerry Smith: [12:18] Right. And I’m going to put something onto our brains, the Simpson paradox, for us to talk about in a few minutes here, because I think it’s important. I actually came across a real-life Simpson paradox in talking with people in a financial situation. But going back to that, that’s the hardest part of Judea Pearl’s approach to causality: how do you quantitatively compute these two states? And in statistics, we do it through holdouts, right? We hold out, not hold out, I’m sorry, we hold for, or we solve for, a variable, right? We’ll come in here and we’ll, for example, say I’m going to partition my data between people who did and did not take an aspirin. Now that isn’t Arni taking and not taking an aspirin; that’s a group of people. So now, the group of people who took an aspirin and the group who didn’t, is that representative? Could we get causality out of that? In the group that took it were men and women, and in the group that didn’t take it were men and women. But then we would ask the question, is gender a factor in whether or not the aspirin can reduce a headache? Well then we’d say, okay. Oh, I got you. Don’t worry, Dr. Jerry, don’t worry, Arni. Here’s what we’re going to do. We’re now going to take a look at the people who did and didn’t, and we’re also going to control for sex, men and women. Right? So now we’re going to have men who took it and men who didn’t, women who took it, women who didn’t. You say, okay, all right, now Arni at least fits into those camps. Right. We can ask, well, Arni is a male, and he took it. So men who didn’t take it, what does that look like? And then we would sit there and say, well, how old is Arni? You know, Arni’s a young guy, he’s in his thirties. Right. And what about the other guys who didn’t take it? Well, you’ve got the older guys like Dr. Jerry, he’s in his sixties. Okay. All right. Well, let’s control for age.
So now we’ve got men who didn’t take it and men who did take it, and we’re going to put those in the thirties and those in the not-thirties. For all those who want to know, I’ve reached, in the game of life, the level of 61; I have a lot of restarts in that game. But we have those two levels, level thirties and level sixties, in this world. All right. Now, can we do it? Well, then you would sit there and say, well, what about the fact that Arni is from Iceland and Jerry’s from the United States? Okay. All right. I get it. So all of a sudden we’re controlling for every possible influencer in this, and that’s where Judea Pearl goes. That’s the problem with Granger’s causality: you have to identify the probability of an aspirin reducing a headache given all of these control variables, and pretty soon you have to control for everything in life. And does that actually answer the question? It doesn’t; it still doesn’t answer the question. So that’s where we need to get into these alternative universes. That’s a big problem. We should actually get into some of that later on, how you actually get into do operations in complex systems of causal maps. But I want to hold on. I want to talk about the Simpson paradox a little bit, just as a little sidebar here for the folks on here. Simpson’s paradox, right? The Simpson paradox says that something observed at a high level is counter to the thing observed at the low level. Is this an area that you like to talk about, Arni? I mean, are you a big Simpson paradox guy?
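The stratify-by-everything spiral described above is what Pearl's adjustment (backdoor) formula tames: once a sufficient set of confounders Z is identified from the causal graph, P(Y | do(X)) is computed by averaging the stratified rates over the distribution of Z, with no further controls needed. A minimal sketch with invented aspirin/relief counts and sex as the single assumed confounder (all numbers are made up for illustration):

```python
# Hypothetical counts: X = aspirin vs. none, Y = headache relief, Z = sex.
# Each entry is (number of people, number relieved); all figures invented.
data = {
    ("men",   "aspirin"): (87, 70),
    ("men",   "none"):    (270, 180),
    ("women", "aspirin"): (263, 150),
    ("women", "none"):    (80, 20),
}

def p_relief(z, x):
    """Observed relief rate within one (stratum, treatment) cell."""
    n, relieved = data[(z, x)]
    return relieved / n

# P(Z): each stratum's share of the whole population.
strata = ("men", "women")
totals = {z: sum(data[(z, x)][0] for x in ("aspirin", "none")) for z in strata}
n_all = sum(totals.values())
p_z = {z: totals[z] / n_all for z in strata}

# Backdoor adjustment: P(Y | do(X=x)) = sum_z P(Y | X=x, Z=z) * P(Z=z)
def p_do(x):
    return sum(p_relief(z, x) * p_z[z] for z in strata)

print(p_do("aspirin"), p_do("none"))
```

The averaging weights come from the whole population's stratum shares, not from who happened to take the treatment; that is what distinguishes the interventional quantity from the plain conditional P(Y | X).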

Arni: [15:54] Yeah, I would think so. I think this is a problem that gets overlooked a lot in the industry.

Dr. Jerry Smith: [16:06] Do you have any favorite examples?

Arni: [16:09] Well, there have been COVID reports out there, and media has run with them without testing for the Simpson paradox. So you have subgroups where, if you sum up the results from the subgroups, they don’t equal what you get when you look at the data as a whole instead of accumulated in the subgroups.

Dr. Jerry Smith: [16:43] Yeah. And that’s a classic example. Right. For those that are listening, the Simpson paradox would say, building on Arni’s conversation here, that the vaccine didn’t work for anybody, but yet it works for men and women. And you’re like, wait a second. The vaccine doesn’t work for anybody, but it works for men and women. How can that be? Hence the paradox, right. You know, Simpson back in the day identified these kinds of interesting statistical trends. And if you think about it, think about taking a population of stuff, and I’ll post a graphic here maybe later on, but see if I can visualize it. If you take an X, Y graph and you take a bunch of dots that start in the zero portion of X and the high portion of Y, and you make a linearly diminishing set of dots going all the way down to the far X and the low Y, right? You see that sort of linear line of dots going down, and, you know, Y is effectiveness and X is population. And you would say, well, it’s diminishing performance over the population, so this thing must not work for people. Then if you were to deconstruct those dots into the two populations, men and women, you would see a cluster of dots that would go from a low Y to a high Y, and then another cluster of dots, maybe below it, that would go from a little bit lower Y to a higher Y. But you’d see these two parallel clusters that are offset from each other, and you would draw lines through them for the men and the women and say, well, for the men it’s increasing performance and for the women it’s increasing, but if you take the overall performance through men and women, it’s decreasing. And that is Simpson’s paradox, right. That’s an important piece of Simpson’s paradox. And so when we think about our world, that can be an important play, right?
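The dot-cluster picture just described can be reproduced in a few lines: two subgroups that each trend upward, while the pooled fit trends downward. The numbers are invented purely to make the offset-clusters geometry concrete:

```python
import numpy as np

# Two subgroups where effectiveness (y) rises with x inside each group,
# but the groups are offset: women sit at higher x and lower baseline y.
x_men = np.linspace(0, 4, 20)
y_men = 8 + 0.5 * x_men        # slope +0.5 within the men's cluster
x_women = np.linspace(6, 10, 20)
y_women = 2 + 0.5 * x_women    # slope +0.5 within the women's cluster

def slope(x, y):
    """Slope of the least-squares line through (x, y)."""
    return np.polyfit(x, y, 1)[0]

s_men = slope(x_men, y_men)
s_women = slope(x_women, y_women)
s_pooled = slope(np.concatenate([x_men, x_women]),
                 np.concatenate([y_men, y_women]))

# Both subgroup trends are positive, yet the pooled trend is negative:
# Simpson's paradox in miniature.
print(s_men > 0, s_women > 0, s_pooled < 0)
```

The reversal happens because the between-group offset (high-x group has a lower baseline) dominates the within-group trend once the clusters are pooled.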

Arni: [18:38] Yeah.

Dr. Jerry Smith: [18:39] And that’s why we have data science, right. That’s the study of data, which is an important part of the causality world. You know, one of the cool things about the Simpson paradox: it was an observation about this general counter-trend of the overall thing being good or bad and the underlying things being bad or good. How do we figure those kinds of things out? Is there a way for us to figure that out?

Arni: [19:04] Yeah. I mean, the lengthy way would be to just take every subgroup and accumulate them up, and then compare them to the totals as a population as a whole. That’s one way, but the better way is to do what Judea Pearl talked about, using causality. So do we want to bring in a sample on this?

Dr. Jerry Smith: [19:34] I don’t know. I mean, it’s a conversation, right? I mean, as we’re talking through it, I agree with you, by the way. I mean, there’s the data science approach, right, which says, this is good or bad. Are there categories? Are our observations categorical? Can we group them into clusters of things? And then for those clusters, are our observations still true? Judea Pearl would say, okay, that’s pretty good. You’re back to Granger’s world. Congratulations. You made a lot of advancements. How about we look at it from a causality perspective? What about using machine learning, Arni, in the world of causality? What are some options in that area?

Arni: [20:18] Well, the classical way, when you’re going through feature reduction, for example, is to measure how effective each feature is. And then you start isolating individual features and you change the values, and then you see how the outcome changes. And then classical machine learning engineers might say, well, I don’t need to know this causality; I can just do it with my classical machine learning approach: isolate the features, test the values, change the values, and see how effective that feature is at changing the outcome. The problem is we’ve already fit a model using all the features, and we’re really just testing the machine learning model and how the features interact with it. We are not taking into account mutual information, testing each feature individually prior to building a model. And I think that’s what needs to be emphasized or taught to the classical, well, to the data scientists out there that did not study causality and just went into machine learning, thinking that classical machine learning, isolating features and feature reduction, is some sort of representation of causality.

Dr. Jerry Smith: [21:45] That is an extremely important point, because, you know, the folks that are doing the machine learning, they create a box, they put stuff in, they get stuff out, right. That is the classical Granger approach right there. And what the machine is learning is, is Y better explained through the variable X or without the variable X? And if it’s explained better with the variable X, guess what we do? We increase the weight of its representation in the matrix, right? And that’s classical machine learning models, whether it’s linear regression or complex deep neural networks; on the deep neural network side, we allow it to learn features. But you’re absolutely right on that. And that goes back to our point before, which is if you want to change the world, you’ve got to know which X, Y, and Z inputs are things that you can actually change that will cause those changes, not just represent those changes. Reporting’s one thing, right. Here’s my example. I had a lab company that did lab results, right. They actually took your blood and they figured out what’s inside it. And one of the challenges they had was every time they’d submit their procedures to the payer, a lot of the time they’d get rejected. They’d submit procedures A, B, and C, and okay, great, it’s rejected. But if they submitted procedures C, B, and A, it was accepted. Same procedures, different order, right. So they said, can you create a model for us so that we can be better educated, you know, know when to put C, B, and A? So we did that, and all it did was look at the reporting side; they tried to use it as a causal model. The challenge with that is they were getting hit-and-miss results. We’re really good at predicting that this is going to be rejected and this will be accepted, but it wasn’t a causal driver; it was a reporting driver.
That’s a really important part of it, but I want to go to another part on the machine learning aspects of it. Let’s talk a little bit about evolution, right? So as we begin to think about the use of genetics and evolutionary theory in this particular space, you know, what are your thoughts around the application of evolutionary principles in the AI world, in addition to all the causality stuff that we’re talking about, right? So this is maybe a little off topic, but I think we’ll eventually bring it back. Do you have any thoughts around where evolution falls into all this?

Arni: [24:20] Yeah, I mean, our goal is always to try to help companies make better decisions. How do they change the world? And we want to bring that ability to companies, to give them the option to say, I want to increase my revenue, or I want to find the right spot to build my next restaurant that is going to bring in this much revenue. These are optimization problems given some sort of data. And we have a proven methodology at AgileThought where not only can we give you the data, we turn it into a digital twin. It becomes like a simulation. And that’s what we do with our predictors.

Dr. Jerry Smith: [25:25] So your predictors are those blocks that took those inputs which were causal, that we talked about earlier. So we know the causal inputs. So we get rid of all the correlation, and then we create a machine learning model out of that. But that model only predicts; it only tells us what’s going to happen to our revenue given these inputs. It doesn’t tell us what we should set those inputs to.

Arni: [25:48] Right. And that’s where we put an optimization model on top of it. And we set a specific goal. And then we apply evolutionary computing algorithms on top of it to achieve the most optimal solution. And we’re guaranteed to get an optimal solution, not a local optimum, which can sometimes happen with other algorithms outside of evolutionary computing.

Dr. Jerry Smith: [26:20] Right? Like in your classic hill climbing model. You know, your hill climbing, your gradient descent. You can get into those sorts of suboptimal lows or highs, depending upon how you define it. But let’s just think of a hill: suboptimal valleys that don’t really represent, or a suboptimal peak that doesn’t represent, the ultimate peak. And there are ways that we can solve that, or solve for that. We can do simulated annealing; we can add some randomness to that process that will jerk you out of it and see if you climb the same hill again. But to your point, evolutionary computing is a little different, right? It’s combining different characteristics, population characteristics, out of that. So here’s what I heard you say: we figure out what’s causal, right. We build a model on that. And then that’s the model of the world. Now, our job is to tell the people, what conditions do you need to set to get the best chances of achieving your output? And that’s where you’re using evolutionary computing. And what kind of tools do you use for evolutionary computing?

Arni: [27:23] Well, we’ve used a really good tool in the Python world, since I’m a big Python fan.

Dr. Jerry Smith: [27:33] I’ve heard you’re a Python fan. You’re a fanboy of Python.

Arni: [27:38] And they have a really good library called DEAP. It already has numerous algorithms that you can take advantage of, so you don’t have to start from scratch. So that’s one example, but there’s more out there.

Dr. Jerry Smith: [27:56] And that’s an open-source solution, right? So you can get into that literally now. If you’re a Python person, or even an R person, like I am, I’m one of the enlightened people in the world. And DEAP is on both sides of the house. You bring it in, you know, you define your goals and you put in your inputs, and then you can organically use genetic algorithms to manipulate that space.
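DEAP packages the pieces mentioned here (selection, crossover, mutation) as ready-made operators. As a rough illustration of what such an evolutionary loop does, not DEAP's actual API, here is a self-contained toy genetic algorithm that maximizes a bumpy function whose local peak could trap plain hill climbing; the objective and all parameters are invented for the sketch:

```python
import math
import random

random.seed(42)

def fitness(x):
    # Bumpy objective on [0, 10]: a local peak near x ~ 1.7 and a higher
    # global peak near x ~ 8. Gradient ascent from a bad start can stall
    # on the local peak; a population-based search is less likely to.
    return math.sin(x) + 0.1 * x

def evolve(pop_size=40, generations=60, bounds=(0.0, 10.0)):
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: keep the fitter of randomly drawn pairs.
        parents = [max(random.sample(pop, 2), key=fitness)
                   for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            w = random.random()
            child = w * a + (1 - w) * b        # blend crossover
            child += random.gauss(0, 0.3)      # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        # Next generation: surviving parents plus the new children.
        pop = parents[: pop_size - len(children)] + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

In DEAP the same loop would be assembled from its toolbox of registered operators rather than written by hand; the point of the sketch is only the population/selection/crossover/mutation cycle the conversation describes.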

Arni: [28:17] So I think it might not be clear to a lot of people, when they think of a machine learning model, that it’s a mathematical representation. Like when you say, oh, you have these data points, fit a curve. Okay, in a way, that’s a machine learning model; you fit a regression model. But what does that mean to you? From a physical engineering view, it’s a tool, but sometimes people don’t really think about the uses of having that tool. And they think, oh, I can predict something, I can do this. But to us, when we have our methodology as a whole, of putting an optimization on top of it, it becomes a digital twin. So if we think outside of machine learning and go into physical engineering, like in my previous background, I was working with Siemens designing wind turbine blades. And in that world, in a computer-aided program, you would design the blade, the airfoil, and build up a blade. And then you test it in a simulation, and the simulation represents the flow going over the wing, and you can calculate all this stuff. You also need to think of our machine learning models as the equivalent, but instead of generating data from scratch, synthetic data, we’re actually taking real-world data, and then we fit the curve, which then becomes a simulation model. And I think it’s not clear to a lot of people that they are actually using machine learning models to build a simulation world that can fill in the data gaps for all the variables needed, so you can find an optimal solution. And I think that’s a good point, too.

Dr. Jerry Smith: [30:20] I don’t think that’s understood by a lot of people either. I think you’re right about that. And we might as well just go ahead and hit on it while we’re here. We’re still not done with it. Right. So, okay. We took data. We didn’t talk about the whole cognitive services piece, which I talked about in a different podcast, but we talked about causality, the difference between Granger’s and Judea Pearl’s approaches, which is fundamental. And we really didn’t get into the nitty-gritty of counterfactuals and the importance of them. Then we got into building a model that represents this world. Again, it’s not a causal model; it’s only a digital surrogate if it’s built on the causal stuff. And then we optimized it, right. We used DEAP; there are other tools out there, to figure out what input conditions achieve the best result on the output. Are we done? Is that it? Can you and I go home for the day and say, congratulations, here’s your PowerPoint presentation?

Arni: [31:14] Well, I mean for some if they want to like turn off the business or sell the business after that.

Dr. Jerry Smith: [31:21] But yeah, you’re right. So that’s simulation. What do we do with simulation? We go to the what? What do we do next?

Arni: [31:27] Well, we need to take these results and put them into action in the field. And then we monitor the change. Imagine when Steve Jobs came out with his first iPad or something. He sold it, but then he wants to monitor how people react to it. We, as humans, change. Look at our life before cell phones and after cell phones; it’s completely different, our behavior and stuff. So in a similar way, that’s what is needed. We need to take those results, we need to put them into the field, and collect more data and monitor the change.

Dr. Jerry Smith: [32:10] I want to unpack “put it into the field”, because that’s the thing. Right? So, put it into the field is we have these inputs that represent causal drivers, you know, and let’s just take growing revenue. It could be everything from the products to the time they’re sold to the places they’re sold to the conditions they’re sold under, all these things, right? These are all inputs. And we’ve optimized that model that says, sell this at this price, in this location, at this time, right? So we do that, but it’s in the put-it-into-the-field world that we have to go to sales and say, sales, see this price? You need to start selling at that price now. Oh, by the way, in this region. So now we have an operational deployment that says this region gets this price and that region gets that price. That’s operational marketing: you go put marketing material out there now that is regionalized like that. Right? And so there is sales, marketing, operations, and product work that goes into putting it into the field. That’s the real work. So the first part of our journey starts off in the real world, collecting data. We quickly move into that digital world you’re talking about, Arni, where we’re collecting data, looking at causality, and building digital surrogates. And then we’re back into the real world again, with real human beings doing things. And I think that’s where companies are missing it. They don’t connect sales, operations, marketing, and product back into the data science, machine learning, and AI world we’re talking about. That’s the gap I think is missing as well. What do you think about that?

Arni: [33:57] Well, I think if you want to continue to improve your models, or rather your product, you need to put those measures out and collect data on them. I mean, it’s not just measuring; you put these programs out, like you said, marketing and sales efforts, and you start collecting data on them. How is it improving? What are the changes? And then we run through our cycle again of collecting this data. Again, we find the causal reasons for why people are buying your product, we find the causal drivers, build predictive models again, and we optimize again to further improve the product, sales, for example.

Dr. Jerry Smith: [34:50] Yep. AI is too important to be left to IT. It’s a business construct, so it should start with business and end with business. Business is the bookends of AI. Right. So I think that’s where we’re going to leave it today. I mean, this is always fun. It’s always fun to sit down and just sort of talk through items with you, and hopefully the folks on the other side of the speakers will appreciate this. Before we go, Arni, any last words? Of course, we’re going to talk later on a bunch of other subjects, but any last words today on either causality or machine learning or optimization?

Arni: [35:30] No, I think we covered a wide range of topics, but I think I put out my biggest points.

Dr. Jerry Smith: [35:40] All right. Great. Well, thanks a lot, Arni. That’s it for the show. For all those listening, how’d we do? Please send us a note or add it into the show notes on how we can make things better. So this is Dr. Jerry, your host, and Arni, saying goodbye. I’m your Uber AI driver in life, asking you to change the world, not just observe it.

Outro: [36:03] This has been the AI Live and Unbiased podcast brought to you by AgileThought. The views, opinions and information expressed in this podcast are solely those of the hosts and the guests, and do not necessarily represent those of AgileThought. Get the show notes and other helpful tips for this episode and other episodes at agilethought.com/podcast.

Speakers

Dr. Jerry Smith

(Panelist), Managing Director, Global Analytics & Data Insights, AgileThought

Dr. Smith is a practicing AI & Data Scientist, Thought Leader, Innovator, Speaker, Author, and Philanthropist dedicated to advancing and transforming businesses through evolutionary computing, enterprise AI and data sciences, machine learning, and causal AI.

He’s presented at Gartner Conferences, CIO, SalesForce, DreamForce, and the World Pharma Congress, and is often the one leaders turn to for help developing practical methods for unifying insights and data to solve real problems facing today’s businesses.
