While some early investors in data science, like Amazon, Google, and Capital One, have reaped financial rewards by going all in on their data science initiatives, not every company sees the same returns. That's why most companies opt to mitigate product failures and financial risks by starting with a proof of concept (PoC): a lightweight version of a broader project that tests new techniques or technologies while minimizing the risks involved.
But what happens after a PoC is complete? For the purposes of this post, let's say a company has just completed a PoC of a machine learning model that predicts, in real time, whether a given transaction is fraudulent. Using this company as an example, we'll discuss the best next steps to bring your PoC to life.
After your PoC is complete, your first order of business is to evaluate the outcomes against the baselines, goals, and expectations you established at the beginning of the project. Comparing the original goals with the PoC's actual outcomes lets you answer important questions, such as: Did the model meet its performance baselines? Did the project stay within its time and budget constraints? What did you learn that changes the business case?
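In practice, this evaluation can be as simple as checking each measured outcome against the target you set up front. The metric names and thresholds below are hypothetical, a minimal sketch of the idea:

```python
# Hypothetical goals set at the start of the PoC, compared against
# the metrics the PoC actually achieved.
goals = {"precision": 0.90, "recall": 0.80, "latency_ms": 100}
results = {"precision": 0.93, "recall": 0.71, "latency_ms": 45}

def evaluate(goals, results):
    """Return a dict mapping each metric to whether its goal was met.

    For latency, lower is better; for the other metrics, higher is better.
    """
    lower_is_better = {"latency_ms"}
    report = {}
    for metric, target in goals.items():
        achieved = results[metric]
        if metric in lower_is_better:
            report[metric] = achieved <= target
        else:
            report[metric] = achieved >= target
    return report

print(evaluate(goals, results))
# {'precision': True, 'recall': False, 'latency_ms': True}
```

A mixed report like this one (strong precision and latency, recall below target) is exactly what feeds the decision in the next step.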
After evaluating a PoC, there are essentially three options: scale the completed PoC, spin the innovation wheel again on an adjusted or new PoC, or do both in parallel.
Scaling the PoC is often the next step because a PoC typically simplifies the solution by making certain assumptions: for example, that the model only needs to serve a small sample of traffic, that it won't need monitoring or retraining, and that it can run in isolation from the rest of the infrastructure.
If a PoC shows enough promise to justify a production-grade version, all of those assumptions are invalidated at once. To put a machine learning model into production, you'll need to spend time building components that aren't necessary for a PoC: an elastically scalable HTTP endpoint (API), monitoring, logging, a process for organizing models, a process for retraining models, and so on. You'll also need to verify that the model's performance on a subset of customers generalizes to the wider population. Finally, you'll need to integrate your machine learning components with the wider infrastructure: other services will have to collect inputs (like when and where a transaction took place), make an API request, collect the results, and perform an action based on those results, all while feeding outcomes back to the machine learning service.
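None of that plumbing exists in a typical PoC. As a minimal sketch of the gap (the model below is a hypothetical stand-in stub, and a real service would sit behind a proper API framework), wrapping each prediction with the logging and latency tracking a production service needs might look like:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("fraud-service")

def score_transaction(features):
    """Stand-in for the real model: returns a fraud probability.

    Hypothetical rule: unusually large transactions look riskier.
    """
    return 0.9 if features.get("amount", 0) > 10_000 else 0.1

def predict_with_monitoring(features):
    """Wrap the model call with the logging and latency measurement
    a production service needs but a PoC usually skips."""
    start = time.perf_counter()
    score = score_transaction(features)
    elapsed_ms = (time.perf_counter() - start) * 1000
    logger.info("scored transaction: score=%.2f latency=%.2fms",
                score, elapsed_ms)
    return {"fraud_score": score, "flag": score >= 0.5}

result = predict_with_monitoring({"amount": 12_500, "country": "NZ"})
print(result["flag"])  # True for this hypothetical large transaction
```

In a real system this wrapper would also record the prediction for later retraining and emit metrics to a monitoring backend, which is precisely the work the PoC deferred.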
At the end of a PoC, you may find that the model itself isn't performing as well as you had hoped. However, you can still extract value from the PoC in this scenario, especially in terms of customer insights. Using the fraud prediction example, let's say customers respond very negatively to false positives, that is, legitimate transactions the model incorrectly flags as fraudulent. This insight could be used in a couple of ways. First, the company could focus its modeling efforts on driving down the false positive rate. Second, the company could use the insight to improve its business processes. For example, the company could implement a text messaging service that asks customers to approve a suspicious transaction instead of outright accepting or denying it. Implementing this service would not only soften the negative response to false positives, but also make the problem slightly easier for the machine learning model to solve, since the model now has some leeway.
Still unsure what steps to take now that your PoC is complete? As a 2019 Microsoft AI and Machine Learning Partner of the Year Award finalist, we can help you bring your PoC to production. Download our Predictive Analytics Discovery one-pager to learn how we can help you evaluate the viability of your predictive analytics use case.
Contact us to share the challenges you’re facing and learn more about the solutions we offer to help you achieve your goals. Our job is to solve your problems with expertly crafted software solutions and real-world training.