
Webinar | How AI Can Modernize Your Business Processes

On-demand Webinars

Learn how we created an AI strategy to streamline a real-world business process, resulting in saved time and higher productivity. Using IntelAgree as an example, we’ll discuss everything from how to identify a machine learning use case to how to create a successful, results-driven AI solution. You’ll learn how our data scientists applied machine learning and deep learning techniques to help the IntelAgree team eliminate an everyday business problem.

You'll learn:

  • How AI can streamline manual processes while enhancing the customer experience

  • How to identify and frame a machine learning use case

  • Best practices to evaluate AI methods, tools and techniques

Transcript of presentation: JASMINE: [00:00:05] Hi, welcome everyone, and thanks for joining us today. My name is Jasmine Rustogi and I’ll be moderating today’s AI discussion. Today I’m joined by James Parks, chief data scientist. Thanks for being here, James. Glad to be here. And John Wagner, the co-founder and EVP of products and engineering for IntelAgree. Thanks for being here, John. Thanks for the invitation. So before we start, a little bit about AgileThought: we’re a custom software development consulting company, and we work primarily with Fortune 1000 clients to help them envision, deliver and maintain software programs that are critical to their businesses and provide value to their customers. A couple of housekeeping items before we begin. One, if you have any questions during the webcast, please type them into the questions tab, which can be found on the right-hand side of your webcast window. Secondly, today we’ll be announcing three lucky winners who will receive a complimentary 30-minute consultation with James Parks, our chief data scientist. This is a fantastic opportunity for you to dive deeper into your questions about AI. During today’s discussion, James and John will be sharing strategies and key lessons learned from building a custom solution for IntelAgree. John, can you tell us a little bit more about what IntelAgree is?

JOHN: [00:01:14] Sure. So IntelAgree is a contract lifecycle management platform, and our goal is to make it ridiculously easy for companies to create, negotiate, sign, manage and ultimately fully understand the contracts that they have. And to that point about understanding, machine learning, which we’ll be talking a lot about today, is kind of key to being able to take a contract, rip through it and understand what’s in there, so that a human doesn’t have to spend hours and hours doing that.

JASMINE: [00:01:41] Thanks, John. So now that we know a little bit more about IntelAgree, during this discussion you can expect answers to key questions like: What is AI? How do I identify the right use case for AI? What processes should I use when implementing AI? And what are some of the challenges I can expect along the way? So to level set, I’d like to start off by talking about how we define AI at AgileThought.

JAMES: [00:02:03] Sure. So AI is a really broad field and there are a lot of buzzwords out there about what it is exactly, so just to clear up the confusion: AI generally is associated with automating activities that we think of as human decision making, and two of the specific techniques that are really successful in doing that are machine learning and deep learning. Both are involved with mapping inputs to outputs. The idea is that with machine learning, I as a developer have to kind of tell the algorithm what’s important, whereas with deep learning the algorithm itself figures out what’s important, so I just give it the raw inputs. But both are used for the same purpose: basically you have inputs and outputs, and the algorithm is learning relationships to map between them in the middle.
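As a rough illustration of the inputs-to-outputs idea James describes, here is a minimal, hypothetical sketch; the features, toy data and library choice (scikit-learn) are assumptions for illustration, not anything named in the webinar.

```python
# Hypothetical sketch: machine learning maps inputs (features) to outputs (labels).
# With classical ML, the developer decides which features matter (here: word count and
# how often "terminate" appears); with deep learning, the model would learn its own
# features from the raw text instead.
from sklearn.linear_model import LogisticRegression

def hand_crafted_features(text: str) -> list[float]:
    return [len(text.split()), text.lower().count("terminate")]

documents = ["This agreement may terminate at any time upon notice.", "Invoice attached, thanks!"]
labels = [1, 0]  # made-up outputs: 1 = contract language, 0 = not

X = [hand_crafted_features(d) for d in documents]
model = LogisticRegression().fit(X, labels)  # learns the mapping between inputs and outputs
print(model.predict([hand_crafted_features("Either party may terminate this contract.")]))
```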

JASMINE: [00:02:53] So what are some examples of business processes that AI can be used for?

JAMES: [00:02:58] So there are a few broad categories that I put all of the use cases in, the first being where the problem is a long list of rules. You can think about ad targeting: if I want to place an ad on a web page, for example, and I have some information about who the users going to this page are, I might use some of that information to decide, well, if this is a teenage girl, I should show this ad, or if this is a person I know to be into the San Francisco Giants, maybe I show them an ad about that. But obviously, the more you know about them and the more users you serve, the longer your list of rules becomes, and after a while it just doesn’t make sense anymore to try to maintain that list of rules. The next group is problems that are just too complex for traditional software. You can think about image recognition: as a traditional programmer, how would you even go about identifying things in an image? How do you even write code for that? So obviously machine learning applies there. The next group is fluctuating environments. A good example here is spam detection, where as an email client you want to be able to take spam emails and not show them to the user. Classically, some analyst or data scientist looks at the problem, looks at some spam emails, tries to figure out some patterns themselves, and says, OK, I’ll write this rule: maybe I look for all caps, count the number of words that are in all caps, and at a certain threshold I say this is spam. But if you think about it from the spammer’s perspective, if I see that all my attempts at spamming people are going to spam folders and I’m not getting any money, then I just stop using all caps and it works again. From the analyst’s side, now I’ve got to go look at the problem again and write new rules. So any time an environment on day two is different from what it was on day one is a situation where machine learning is useful, so you don’t have to constantly be doing this analysis to write new rules; you just retrain the algorithm. And the last use case is facilitating human learning. There are lots of problems, fraud detection being one example, where traditionally there are teams of experts that look at some data and decide that, OK, this pattern of behavior is typically fraudulent, and then a software team goes out and writes rules to look for those specific types of behaviors and mark them as fraudulent. The problem with this technique is that data is getting really big, problems are getting really complex, and people who are trying to commit fraud are getting a lot more sophisticated. So how do we adapt? One way is to apply machine learning to this type of problem, where you train the algorithm to look at some raw data and find patterns, and then the human learning part of it is you say, hey, human expert, here are the patterns that we traditionally think of, and here’s one you may not have thought of that the algorithm has identified for you.
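To make the spam example above concrete, here is a minimal, hypothetical sketch (using scikit-learn, which the speakers don’t mention) contrasting a hand-written all-caps rule with a classifier you can simply retrain when spammers change tactics. The toy emails and threshold are made up for illustration.

```python
# Hypothetical sketch: a brittle hand-written rule vs. a retrainable classifier for spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def rule_based_is_spam(email: str, threshold: int = 3) -> bool:
    """The rules approach: count all-caps words and compare to a threshold."""
    caps_words = [w for w in email.split() if w.isupper() and len(w) > 1]
    return len(caps_words) >= threshold

print(rule_based_is_spam("WIN A FREE PRIZE NOW"))  # True, until spammers stop shouting

# The machine learning approach: learn the mapping from emails (inputs) to
# spam/not-spam labels (outputs), and simply retrain as new examples arrive.
emails = ["WIN A FREE PRIZE NOW", "Meeting moved to 3pm", "free prize, click here", "Lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(emails, labels)                        # initial training
print(model.predict(["claim your free prize"]))  # later: retrain on fresh emails, no new rules to write
```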

JOHN: [00:06:14] And James, you talked about the rules aspect, and that’s exactly what we ran into when we started to create IntelAgree. We thought that we were just going to code this the traditional way. We thought we would understand all the situations we might encounter in a contract, what patterns we were looking for, when we see a number what usually follows or precedes it. And what we quickly realized is there’s just no way that that’s scalable. We’re dealing with contracts. Contracts are written by humans; they can be written in an infinite number of ways. And so really early on in our discovery process at IntelAgree, we realized we had to do this differently. We could not just sit here and try to code this and think we understood all the potential test cases that we needed to test for. And so it seemed like a good fit to go down the machine learning and ultimately the deep learning path. I think the fit was pretty evident to us early on, just from our own experimentation with traditional RegEx patterns and things we’d maybe done over the last 20 years: they would work for a few documents, and work really well, and then you’d get a new one we’d never seen before and it wouldn’t find the things that we as humans thought were obvious in there. That’s where we stepped back from the problem and said there has to be a different way. We looked really hard at whether a machine learning technique would be useful to us, and that’s when we engaged AgileThought to take us forward in that direction.

JASMINE: [00:07:37] So knowing that traditional methods weren’t going to work and you were going to embark on this AI journey, James, can you tell us a little bit more about how you went about defining an AI strategy for IntelAgree?

JAMES: [00:07:48] Sure. So broadly speaking, the process model we followed is based on CRISP-DM, which is really similar to Agile, so it will feel familiar to those who know Agile. It emphasizes iteration and learning quickly throughout the process. Where it differs from Agile is that it’s really geared towards experimentation. The nature of data science and machine learning is that you have to accept that I don’t know the answer to this problem when I start trying to solve it; I figure it out in the process, which is why you want to have fast cycles through this process.

JOHN: [00:08:23] Yeah, I would add to that experimentation point. Our team had done a lot of agile development over the years, so we were very comfortable working that way. But to your point about experimentation, in traditional agile we don’t love the idea of going completely into the unknown, these big black holes that could eat up an entire sprint. So that was a bit of a transition for us, to embrace that idea of experimentation, that we could do it in a time-boxed period that fit within our sprints. We had to accept that there were a lot of unknowns, and what we hoped to get out of each experiment were more knowns and fewer unknowns, which would refocus us for the next one. So it does dovetail really well with agile methodologies. I think if companies are doing pure waterfall, embracing some of that agile first before they take the leap into CRISP-DM would probably be a piece of advice I would give. But it did feel very agile and consistent with what we’ve been doing in agile development for many years now.

JAMES: [00:09:20] And as far as the process, I’d say an unwritten step zero of CRISP-DM is just getting excited about the project. I’m specifically thinking about some early meetings with IntelAgree, John and team, where they were talking about their vision for the product and just feeling really excited about it. In terms of the actual end result, it’s something that’s practical and useful for their users, but also, from a data science side, something where we’re really pushing the limits of what’s possible, and it’s exciting to be on that edge. And the nature of data science is that, as one of the architects at IntelAgree put it, it’s not all rainbows and unicorns; sometimes things are tough. If you don’t have that excitement going into it, it can be hard to push through those times.

JOHN: [00:10:08] I think what’s great, though, is that the excitement kind of comes naturally, because people are seeing this magic. We have a lot of moments where we look at it and think, how did it do that? And here we are, people who traditionally write this stuff and know how it’s going to work, and we’re still excited about that. The aha moment in this deep learning world is when you see the software do something and you’re thinking to yourself, “I’m not exactly sure how it came up with that insight, but that is what I wanted it to do.” So I think, James, you’re right that early on it’s actually pretty easy to get everybody excited, and I’m sure we’ll touch on this later: you have to manage that excitement, because there are good days and bad days along the journey. Exactly right.

JAMES: [00:10:46] So the first step in CRISP-DM sort of ties into AgileThought’s predictive analytics discovery solution, where we want to go in and do the business understanding and data understanding phases of the project and work from there. Specifically, the business understanding phase is where both sides, the business side and the data science side, need to come to a common understanding of: this is the problem we’re looking to solve.

[00:11:10] And, how do I translate that into something I can optimize mathematically from the data science side? A lot of what I’m talking about is coming to a common vocabulary. I want to understand what they’re talking about when they talk about a contract. What does that mean? What types of contracts are we talking about? What is the content of these contracts? What is the end user looking to get out of it? And as John explains that, what I’m doing internally is mapping those problems, as I understand them, to some sort of data science solution, some pattern that I can use to actually optimize and solve that problem. And it doesn’t have to go to a level where I’m asking John, “So can you talk to me about stochastic gradient descent?” You don’t have to go that deep. It’s more things like precision and recall, being able to understand how we measure the performance of these models. And John just has to understand, OK, when I talk about precision and recall scores, what does that mean for the end user?
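For readers who want to see that precision and recall vocabulary in code, here is a minimal, hypothetical sketch (using scikit-learn, not something named in the webinar) that scores a model’s predictions against human labels; the labels below are made up.

```python
# Hypothetical sketch: computing precision and recall for a clause-extraction model.
# 1 means "this sentence contains a payment term", 0 means it doesn't.
from sklearn.metrics import precision_score, recall_score

human_labels = [1, 0, 1, 1, 0, 0, 1, 0]  # what the human reviewers marked
model_preds  = [1, 0, 0, 1, 0, 1, 1, 0]  # what the model predicted

# Precision: of everything the model flagged, how much was actually a payment term?
# Recall: of all the real payment terms, how many did the model find?
print("precision:", precision_score(human_labels, model_preds))  # 3 of 4 flagged were right -> 0.75
print("recall:   ", recall_score(human_labels, model_preds))     # 3 of 4 real terms found   -> 0.75
```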

JOHN: [00:12:03] And I think that was important early on in the project, that we got on the same page with that vocabulary, so that when you were talking to us about how these models were evolving and whether they were getting better or worse, even in somewhat technical terminology, we at least understood it and could give you feedback as to where we wanted to go. I guess the other piece of key understanding, and I think you’re going to touch on this next, is that data was important, but there is definitely a maturity we had to develop about how much data, what quality of data, and what variety of data. I think that was a definite learning process for us on this deep learning journey with your team.

JAMES: [00:12:45] Yes, for sure. So the next stage in the process is data understanding, where we want to take inventory of the resources we have available to solve this problem. We’ve talked about what the problem is; now, in terms of IntelAgree, what contracts do we have available, and what contracts do we need to solve these types of problems? And then look out in the world and say, beyond the things I have, what can I reasonably get, what else can I add to my repertoire here? And there were some interesting challenges we found with IntelAgree.

JOHN: [00:13:15] Yeah. So we knew we were building a platform that’s going to read contracts, so we needed a lot of contracts, and I think we didn’t realize what “a lot” meant. Early on, we thought if we had 20 or 50 or 100 fairly unique contracts, that would be enough to get moving, and in some ways it was: it was enough to prove that this was a solvable problem with the technology we’re talking about. But to really provide value to our customers and find the things they truly care about, we needed to get our arms around more data, and more data, and even more data. So I think what James and team did that was pretty innovative in this process was helping us understand what data sets are out there in the world that others have already curated, so we don’t have to do it ourselves from the ground up. And then in our case, where do we find contracts? Well, you came across the SEC EDGAR website and found that a lot of public companies have filed contracts out there, because they have an obligation to do that, and those are in the public domain. We can read those contracts, and while that doesn’t represent every contract that we are ultimately going to care about, it was a good representation of what a contract is, what’s typically in there, and what we can prove we can pull out using this deep learning.

JAMES: [00:14:28] Yeah, and from a technical perspective what John is talking about is transfer learning, where we want to learn from a problem that’s very similar to the actual problem, because early on IntelAgree didn’t have the scale of data they needed. We knew it would come; it’s just that in order to jumpstart the project and start building some models and infrastructure today, we asked what data is out there, and to John’s point, we found the SEC EDGAR database, grabbed those contracts and labeled them. And that leads into the next step of the process: the data preparation phase, where we want to take those raw resources we’ve acquired, whether the ones we started with or the ones we went out and got from SEC EDGAR, and prepare them for the modeling process. Specifically, what I mean here is that in this use case you can’t push raw text into a model. With machine learning algorithms, and even deep learning algorithms, the inputs have to be numeric, and that’s a non-trivial thing: how do you take this contract, 100 pages of text, and turn it into numbers that are useful for the algorithm to say, “OK, I know what’s interesting in this contract”? And also, probably just as important, is labeling data. Once we have a contract and we have some method for turning it into numbers we can feed in, we have our inputs; now we need our outputs.
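As a rough illustration of the “text into numbers” step James describes, here is a minimal, hypothetical sketch. The webinar doesn’t say which vectorization technique was used, so TF-IDF and the sample sentences below are assumptions for illustration only; the real pipeline may use learned embeddings instead.

```python
# Hypothetical sketch: turning raw contract text into numeric features a model can consume.
from sklearn.feature_extraction.text import TfidfVectorizer

contract_sentences = [
    "Payment is due within 30 days of the invoice date.",
    "This Agreement shall be governed by the laws of the State of Florida.",
    "Either party may terminate this Agreement with 60 days written notice.",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), lowercase=True)
X = vectorizer.fit_transform(contract_sentences)  # each sentence becomes a numeric vector

print(X.shape)  # (3, number_of_unique_terms) -- these vectors are the model's inputs
```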

JOHN: [00:15:56] And that means the process of going through and labeling what exactly is important, and this is really where the business team and the technical team have to come together. James is talking about how they do these embeddings and turn the words into numbers so the models can start to evolve; well, we on the business side have an obligation to start with the text, with a human reading that text and indicating what’s important in there. So doing this labeling… one of the early challenges we found is that everybody labels differently. The way you read a contract and what you mark as important, even if we’re looking at it through the same business lens: you’re looking for payment terms and I’m looking for payment terms, but you highlight just the 30 in “30 days” and I highlight “30 days,” and that can send misleading signals to the data science team. So I think that was one of our early challenges and one of the things we had to address, to make sure we were doing it in a consistent way and that we had a process. This is not just the wild west where everybody labels however they want; we have to do it the same way together to provide a data set the team can actually do true learning from.
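To show why inconsistent highlighting matters, here is a small, hypothetical example of span labels recorded as character offsets; the record format and field names are assumptions for illustration, not the schema IntelAgree actually used.

```python
# Hypothetical sketch: two annotators labeling the same payment term differently.
# Offsets are character positions in the sentence; the dict format is illustrative only.
sentence = "Payment is due within 30 days of the invoice date."

label_a = {"label": "payment_term", "start": 22, "end": 24, "text": sentence[22:24]}  # just "30"
label_b = {"label": "payment_term", "start": 22, "end": 29, "text": sentence[22:29]}  # "30 days"

# To the model these are two different target spans for the same concept,
# which is exactly the mixed signal a labeling guideline is meant to prevent.
print(label_a["text"], "vs", label_b["text"])
```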

JASMINE: [00:17:03] So what level of effort did that labeling process take, and how did you figure out what a good label was versus a bad one?

JOHN: [00:17:08] Yeah, I mean, it’s significant, and I don’t want to undersell how much effort has to go into that.

JOHN: [00:17:16] From a tool standpoint, I guess the good news is you don’t have to go out and buy some robust labeling tool. I think we’ll talk about how that can evolve over time, but one of the great pieces of advice we got early on is that you already have tools today, Word and Excel, which are good enough to go in and mark up data and keep track of it, and that’s where we started. We took contracts, we marked them up in Word using comments, and again, as long as we were consistent, that was a valuable dataset for James and team to use. Then we evolved that: our platform actually got better at doing that data labeling in the tool and then exporting those data sets over to the machine learning side.

JAMES: [00:17:58] And that’s a good point you bring up. You’ll see that theme throughout the CRISP-DM process: the emphasis on solving problems as they become problems. You want to build quickly, build the minimum viable thing and then move on to the next step. The idea is you want to cycle through as many times as possible, not do one cycle where you take every step to completion.

JOHN: [00:18:19] And that is what felt right to us as an agile development team, as we weren’t looking to solve everything on day one. So again, there is alignment between the methodology James is talking about and our traditional agile software methodology; they fit well together.

JAMES: [00:18:33] And so at this point, we have an understanding of the problem, we’ve taken inventory of the data we have available, and we’ve gone through the trouble of figuring out how we’re going to take that text and turn it into the numbers, the features, that the algorithm can learn from. John has gone through, with his team of experts and paralegals, and labelled what’s important in the contracts. So at this point we have inputs and we have outputs. Now, from a data science perspective, this is the exciting part, where we get to train the thing in the middle that actually learns from the experience curated by John and team. The important part here that I want to emphasize, going back to experimentation, is that we wanted to start with simpler models and get something working. On day one, honestly, it didn’t matter how well the model did, so long as we had a model and we were able to use those evaluation metrics that we talked about in business understanding. So I’m able to get a model, I’m able to evaluate that model, and I can start building out the infrastructure around it, knowing that in a week’s time I’ll be back at this modeling stage again and I can make it better. Some of the specific techniques we used were more traditional machine learning techniques like logistic regression, and there’s a certain point where you’ve gotten everything you can out of a technique. You can think about it like a sponge: we take that model as the sponge, and as a data science team we squeeze it and get all the water we can out of that tool, and at some point there’s just not much left. I can keep squeezing, but I’m not really going to make any headway, and those are the points where you hit a wall and say, OK, we need the next big idea, we need to try something a little more ambitious for this problem. It goes back to solving problems as they become problems: use simpler techniques until you reach a point where you have to change and go from traditional machine learning to deep learning techniques.
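As an illustration of the “start with simpler models” idea, here is a minimal, hypothetical baseline in the spirit James describes: a logistic regression over TF-IDF features, evaluated with precision and recall. The task, toy data and library choices are assumptions, not IntelAgree’s actual pipeline.

```python
# Hypothetical sketch: a day-one baseline model for spotting payment-term sentences.
# Start simple (logistic regression), measure it, and reach for deep learning only when this plateaus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.pipeline import make_pipeline

train_sentences = [
    "Payment is due within 30 days of the invoice date.",
    "Invoices shall be paid net 45 from receipt.",
    "This Agreement is governed by the laws of Florida.",
    "Either party may terminate with 60 days notice.",
]
train_labels = [1, 1, 0, 0]  # 1 = payment term, 0 = not

baseline = make_pipeline(TfidfVectorizer(), LogisticRegression())
baseline.fit(train_sentences, train_labels)

test_sentences = ["All fees are payable within 15 days.", "Notices must be sent in writing."]
test_labels = [1, 0]
preds = baseline.predict(test_sentences)

# The same precision/recall vocabulary agreed on in business understanding judges each iteration.
print("precision:", precision_score(test_labels, preds, zero_division=0))
print("recall:   ", recall_score(test_labels, preds, zero_division=0))
```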

JOHN: [00:20:24] And I’d add that this idea that the models get better with more data, and that they can continue to evolve even after we put the first one out there, was super important to us. We want to be able to take our tool into a new industry, where we’ve never seen the kinds of contracts that are prevalent in that industry, and make sure that as our users tell us in the platform what’s important in those contracts, the model is learning from that and we’re constantly getting better. So we’re not just shipping a one-time model that we expect to be done and good forever. We have to appreciate that every time somebody in a new industry implements our tool, there is new learning to be done. So these have to be flexible, evolving models, and that’s really the architecture that the AgileThought team put in place for us.

JAMES: [00:21:10] Yeah, and that highlights our holistic approach. There are some consultants out there, especially in the data science world, who say, OK, I produced a model, now my job’s done, and it’s on John to figure out everything else: how do I monitor the models, are the models doing well in production. At AgileThought, we really focus on end to end: let’s help you from the point where you have an idea to the point where, every two weeks, we’re evaluating our models and improving them over time. So I want to touch a little bit on the next phase, the evaluation step. In IntelAgree’s case we front-loaded this: in the business understanding phase we talked about how we were going to measure success and what level of performance we would require in order to call something deployable. But one thing that definitely happens here is error analysis, where, to John’s point, we train a model, run it on some contracts, and look at what the output actually is: what does the model think is important here? Then we gather ideas, working together with John and team, to say, “hey, it looks like maybe if we had this other type of contract, or this other type of feature, or we labeled a little bit differently, maybe we could improve it.” And it speaks volumes to the partnership we developed that we’re able to get a lot out of that error analysis process.

JOHN: [00:22:35] And it can be frustrating, it can be challenging. You go through this effort to go find a bunch of data, right? In the beginning, we hear data is super important: your models are not going to be very good unless you have a lot of data.

JOHN: [00:22:46] So we scurry off and try to call everybody we know and say, “Can I have your contracts?” And we gather up all this stuff and bring in a pile of contracts, like, that’s got to be good enough, right? Here are three hundred contracts, go for it. And you have to be ready for the feedback: well, OK, of those three hundred, about 200 of them were all based on one template, they’re not really unique, I can’t do a lot of learning off that. So it is a partnership, because you have to appreciate that the quantity, quality and variety of data is super important, and you can’t let that become a personal thing. I did all this work to get this data and my data science team says it’s not good enough? That’s OK. That’s part of the conversation, that’s part of the journey, and I think we worked well together.

JAMES: [00:23:31] And so at this stage in the CRISP-DM process, we’ve got a model, so now let’s actually put it out in the world and let it interact and do things. We call this the deployment phase, and this is, I would say, one of the areas where traditional software really shines. John will talk a little bit more about it, but the main idea here is that we want the data science components and the platform itself to be decoupled as much as possible. We don’t want them to be intertwined such that if I make a change, it means re-engineering the platform, or if the platform makes a change, all my models need to be retrained. And I want to point out the importance of scale here. In deployment, scale is a really important factor, and we’re able to scale using Azure Machine Learning, so we use cloud resources not only to train our models but also to deploy them. We use the cloud hardware to host our models, and the platform is just calling a data science service, which is totally decoupled from the platform: they send contracts and we send back predictions.

JOHN: [00:24:44] Yeah, that loose coupling is super important, and we talked earlier about how these models are always going to evolve; they are always getting smarter. So if we were bound to one particular version of the model and couldn’t touch it, that would stifle the process. We do have to embrace the idea that the black box I’m calling is something I trust. I know it’s going to do the right things to predict back to me what it found in a document, so I know it’s reliable, but I also know that it is changing and growing, in our case literally every day. That thing is reading new data emerging in our database from humans putting things in, and retraining itself, so we have to be ready for those pieces to move independently of each other, and an API that enables that is super important. And on the DevOps side, on our platform side, we’re in Azure, using all the standard Microsoft tools for managing our source code and deployments, so we really wanted to make sure there was a parallel deployment story in Azure for these models. That was, I think, a big win here: not only are they embracing the experimentation, they’re operationalizing it. It would be great for me to say, “James, go into your lab and come up with theories,” but if he can’t operationalize it and can’t reliably put it in production against our platform, that’s no good to me. So on the DevOps side, my advice to people would be: make sure you’re not allowing the data science team to just go off and be in experimentation mode all the time. They have to be thinking about operationalizing it.
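As a rough picture of the loose coupling described above, here is a minimal, hypothetical sketch of a platform calling a separately deployed prediction service over HTTP. The endpoint URL, request shape and response fields are invented for illustration; they are not IntelAgree’s actual API.

```python
# Hypothetical sketch: the platform sends contract text to a decoupled prediction service
# and gets structured predictions back. Endpoint and payload shapes are illustrative only.
import requests

SCORING_URL = "https://example-ml-endpoint.azurewebsites.net/score"  # placeholder, not a real endpoint

def extract_clauses(contract_text: str) -> list[dict]:
    """Call the data science service; the platform never needs to know which model version answered."""
    response = requests.post(
        SCORING_URL,
        json={"document": contract_text},
        headers={"Authorization": "Bearer <api-key>"},  # placeholder credential
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: a list of predicted spans with labels and confidence scores.
    return response.json()["predictions"]

# Because the models sit behind this API, they can be retrained and redeployed
# independently of the platform, as long as the request/response contract stays stable.
```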

JASMINE: [00:26:15] So John, for you, going through this experience, can you share some of the biggest lessons that you’ve learned along the way?

JOHN: [00:26:21] Yeah. So we keep saying experiment, experiment; embracing those experiments is really important. There are going to be days where the highs are real high and we’re seeing the model do incredible things, and then there are days where the lows are very low: we thought we had perfected this particular model, and then we run some new contracts through… and crickets. It doesn’t light up anything. I think what we’ve learned on that journey is don’t get too high on the highs or too low on the lows. You just stay with the process and get more data; more data will ultimately, I think, smooth all of that out. I think we were surprised at how quickly this could come together. In a traditional project, you feel like you’re completely in control: you kind of know what your features are going to be, you have a general idea of how you’re going to build it, and you can plan that out and say, hey, six months from now I predict this is where we’re going to be. When we started this and were talking about experimentation, we were thinking, well, are we going to be at this for six months, six years? You don’t know. In our case, in a year, we went from staring at a whiteboard thinking about what we wanted to do to having a platform.

JOHN: [00:27:31] Having it backed by these deep learning models is achievable if you have a process and a good leader in this area. I wouldn’t advise anybody to go off and just try to do this on their own; find somebody who knows where these data sets are in the public domain and knows which tools are good at helping you shape your models. And then maybe the last one is just back to that common vocabulary. When a business team and a tech team get together to build an app, that’s a pretty proven thing: the business people know what screens and buttons are, and they can quickly talk about what you need and what I can build for you. This is a different vocabulary, and I would not suggest going off on your own and just thinking you’re going to wing it. Ground yourself early, as James said at the beginning of this: make sure both sides know what you’re doing and why, how you’re going to talk to each other, and how you’re going to measure whether it’s working. I think it was a huge win for us to get on the same page early on.

JASMINE: [00:28:29] James, similar question for you: for people who are just starting out with AI in their own organizations, what advice do you have for them?

JAMES: [00:28:37] So, three things. The first thing I’d say, and it sounds kind of cliche, is to build a winning team, and specifically what I mean is a cross-functional team. It’s not like John tells me the problem and then I go off and I’m the lone person who solves it. I really need his expertise in visioning, and I also need some of the developers from IntelAgree to work out, OK, now how do I integrate with them. And particularly important in this case were experts in the actual domain, paralegals on the IntelAgree staff, so that I can train a model, it does something, and I’m looking at the errors thinking, what the model is doing seems reasonable, why is this not OK? Having someone who’s an expert in the field say, you know, the reason we label it this way is because of this, gives me an aha moment: “Oh, that really makes sense.” That can inform what model or what feature I work on next, based on that back and forth of a cross-functional team where I’ve got individuals who cover all the bases of the product from start to finish. The next thing I would say is have a framework for your project. It’s really tempting, especially as a data scientist, to jump right in and just start modeling stuff, but it’s really, really important, and I’d say more projects fail for this reason than any other: not taking time upfront to do the business understanding, to talk about the vocabulary you’re going to use. How are we going to evaluate our models? How are we going to work together? Should I join their Scrum, or should we have our own? How do we actually deploy it? Do I just tell them, “hey, it’s ready”? Figuring these things out ahead of time really pays dividends. And the last thing I’d say is to focus on small wins before big ones. We talked a little bit about this: don’t try to solve the whole problem. It’s not like John said, I want to take any given contract and extract every possible thing out of it; from a data science perspective, that’s just not a tractable problem. So we narrow the scope and say, maybe this is my big idea, but I want to take this part of the big idea and solve that first, and build momentum. You say, “OK, I’ve accomplished a small goal, now can I build on that? Can I do the next step that gets me closer to solving my big problem?”

JOHN: [00:31:04] Actually, an interesting thing that made me think of: I love that idea of focusing on a small problem with a large amount of data. The temptation is to say, let’s focus on this small thing, so we’ll just bring in a narrow set of variables; that’s the traditional way. You have to think about it differently: I’m still trying to solve a small problem, but I have to go get a bunch of data to do it, and that’s not easy. So maybe a lesson learned here is that you have to embrace both sides of that. Yeah, for sure.

JASMINE: [00:31:34] Well James and John I want to thank you both for your time and your insights today. A key topic that came up during our discussion was the importance of data for any AI transformation or AI journey. So for those of you who are wondering how you can get started, we actually have an offering, predictive analytics discovery. And James alluded to it earlier… with this offering, our data scientists will work with your team to help you analyze your data, mathematically validate your data quality, and ultimately help you determine your machine learning readiness. So for those of you who are watching the webcast today, we’ll be following up via email with a special offer for you to unlock predictive analytics discovery. So James and John, for people who want to get in touch with you and learn more about your organizations, where can they go?

JAMES: [00:32:17] So for AgileThought, obviously AgileThought.com, and I’d specifically encourage you to check out our predictive analytics discovery solution on the website.

JOHN: [00:32:25] And for IntelAgree, we’re at IntelAgree.com.

JASMINE: [00:32:29] Well, that concludes the conversation for us today. We’ll be transitioning shortly to the Q&A portion of our webcast, where we will also announce the three lucky winners of the complimentary 30-minute consultation with James.
