While some early adopters of data science—like Amazon, Google and Capital One—have reaped financial rewards by going all in on their data science initiatives, not every company sees that outcome. That’s why most companies opt to mitigate product failures or financial risks by starting with a proof of concept (PoC): a lightweight version of a broader project that tests new techniques or technologies while minimizing the risks involved.

But what happens after a PoC is complete? For the sake of this blog, let’s say a company has just completed a PoC of a machine learning model that predicts, in real time, whether a given transaction is fraudulent. Using this company as an example, we’ll discuss the next steps you should take to bring your PoC to life:

 

Evaluate the PoC

After your PoC is complete, your first order of business is to evaluate the outcomes based on the baselines, goals and expectations that you established at the beginning of the project. You can compare the original goals with the PoC’s actual outcomes to answer some important questions, such as:

  • Were the goals of the PoC met? For our example, let’s assume the goal was a binary averaged F1 score of 65% on a held-out test set of transactions (data the model never saw during training, used to estimate how well it performs in practice). Loosely, this score balances how often the model’s fraud flags are correct (precision) with how many fraudulent transactions it actually catches (recall). Now let’s say the PoC’s actual performance is 70%; since that’s higher than the 65% target, the goal is met.
  • Did the PoC perform better than a baseline? Let’s assume that only 5% of all transactions are fraudulent, so a simple baseline (a starting point against which to measure improvement) would be to predict that no transaction is fraudulent. This baseline has a binary averaged F1 of 0%, so a PoC scoring 70% is significantly better; a short evaluation sketch follows this list.
  • What did you learn in the process? Based on the results, you might learn that fraudulent transactions are quite rare, and that a machine learning model can identify them much better than a simple baseline.
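
To make this concrete, here is a minimal evaluation sketch in Python. It assumes a scikit-learn-style model object and a held-out test set (X_test and y_test, with 1 marking fraudulent transactions); these names and the 65% goal are illustrative, not a prescribed setup.

```python
# A minimal evaluation sketch, assuming a scikit-learn-style fraud model
# (`model`) and a held-out test set (X_test, y_test) where y_test is 1 for
# fraudulent transactions and 0 otherwise. The 65% goal mirrors the example above.
import numpy as np
from sklearn.metrics import f1_score


def evaluate_against_baseline(model, X_test, y_test, goal=0.65):
    """Compare the PoC model's binary F1 against a predict-nothing-is-fraud baseline."""
    model_f1 = f1_score(y_test, model.predict(X_test), average="binary")

    # Baseline: predict that no transaction is fraudulent (all zeros).
    baseline_f1 = f1_score(y_test, np.zeros_like(y_test), average="binary", zero_division=0)

    print(f"Model F1:    {model_f1:.2f}")
    print(f"Baseline F1: {baseline_f1:.2f}")
    print(f"Goal met:    {model_f1 >= goal}")
    return model_f1 >= goal
```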

After evaluating a PoC, there are essentially three options: scale the completed PoC, spin the innovation wheel again on an adjusted or new PoC, or do both in parallel.

 

Scale the Completed PoC

Scaling is often the next step because a PoC simplifies the solution by making certain assumptions:

  • Assumption No. 1: Many production components will be out of PoC scope.
  • Assumption No. 2: The PoC will only serve a small subset of real customers, if any at all.
  • Assumption No. 3: The PoC will very loosely integrate with existing infrastructure, if at all.

If a PoC shows enough promise to build a production-grade version, none of these assumptions hold any longer. To put a machine learning model in production, you’ll need to spend time building components that aren’t necessary for the PoC, such as an elastic, scalable HTTP endpoint (API), monitoring, logging, a process for organizing models, a process for re-training models, and so on. You’ll also need to ensure that the model’s performance on a subset of customers generalizes to the wider population. Finally, you’ll need to integrate your machine learning components with the wider infrastructure: other services will have to collect inputs—like when and where a transaction took place—make an API request, collect the results, and then act on them, all while feeding outcomes back to the machine learning service.
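
As an illustration of just one of these components, here is a minimal sketch of what an HTTP prediction endpoint with basic logging might look like. The framework (FastAPI), the serialized model file, and the feature set are assumptions made for the example, not part of the original PoC.

```python
# A minimal sketch of one production component the PoC skips: an HTTP
# prediction endpoint with basic request logging. The framework (FastAPI),
# the serialized model file, and the feature set are illustrative assumptions.
import logging

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

logger = logging.getLogger("fraud-service")
app = FastAPI()
model = joblib.load("fraud_poc_model.joblib")  # assumed: the PoC model, serialized with joblib


class Transaction(BaseModel):
    amount: float
    merchant_category: int
    hour_of_day: int
    country_code: int


@app.post("/predict")
def predict(txn: Transaction) -> dict:
    features = [[txn.amount, txn.merchant_category, txn.hour_of_day, txn.country_code]]
    is_fraud = bool(model.predict(features)[0])
    logger.info("prediction served: is_fraud=%s", is_fraud)
    return {"is_fraud": is_fraud}
```

In a real deployment, a service like this would also sit behind autoscaling infrastructure and feed its logs into monitoring, which are exactly the production concerns called out above.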

 

Spin the Wheel

At the end of a PoC, you may find that the model itself isn’t performing as well as you had hoped. However, you can still extract value from the PoC in this scenario, especially in terms of customer insights. Using the fraud prediction example, let’s say customers respond very negatively to false positives—legitimate transactions that the model flags as fraudulent. This insight could be used in a couple of ways. First, the company could focus its modeling efforts on driving down the false positive rate by making the model more precise. Second, the company could use the insight to improve its business processes. For example, instead of forcing the system to accept or deny a suspicious transaction outright, the company could implement a text messaging service that asks customers whether they approve it. This would not only soften the negative response to false positives, but also make the problem slightly easier for the machine learning model, since the model now has some leeway; a sketch of this routing logic follows below.
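
Here is a minimal sketch of that leeway, assuming the model outputs a fraud probability. The thresholds and action names are hypothetical and would be tuned to the business’s actual tolerance for false positives.

```python
# A sketch of the "leeway" described above, assuming the model outputs a fraud
# probability. Instead of a hard accept/deny decision, a middle band of scores
# routes the transaction to a customer confirmation by text message. The
# thresholds and action names are hypothetical and would be tuned to the
# business's tolerance for false positives.
def route_transaction(fraud_probability: float,
                      deny_threshold: float = 0.90,
                      verify_threshold: float = 0.50) -> str:
    """Map a fraud score to an action: approve, ask the customer, or deny."""
    if fraud_probability >= deny_threshold:
        return "deny"
    if fraud_probability >= verify_threshold:
        return "send_sms_confirmation"
    return "approve"


# A borderline score triggers a customer confirmation rather than an outright block.
print(route_transaction(0.62))  # send_sms_confirmation
```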

Still unsure what steps to take now that your PoC is complete? As a 2019 Microsoft AI and Machine Learning Partner of the Year Award finalist, we can help you bring your PoC to production. Download our Predictive Analytics Discovery one-pager to learn how we can help you evaluate the viability of your predictive analytics use case.

