When is your AI system done?



In a very short period, and for limited investment, we can build a Proof of Concept (PoC) for an AI system. With the rise of pre-trained models, this can be done in a few weeks for under $100K. But don’t let this fool you – there is still a lot to do. The edge cases that must be covered make this, truly, a case where the last 20% of the problem takes 80% of the effort.

If the PoC does so much, why can’t we get it deployed?

It is all a matter of risk. The PoC will work very well for more than 80% of the situations the system will encounter. If getting it wrong up to 20% of the time is within your risk tolerance, then go right ahead and deploy the PoC. Most systems, however, need to produce the correct response more than 95% of the time to make it into production.

Why is the last 20% so hard to achieve?

The problem is that the last 20% is not just one issue, but hundreds of edge cases that must be addressed. Each edge case will require one of three things: a change to the model, which is a really big deal; a change to the training data, which is a moderately big deal; or a change in software engineering, which is a pretty manageable deal. Testing is critical at this stage – we must be sure that a fix to one edge case did not cause problems elsewhere in the system.
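One lightweight way to guard against a fix breaking something elsewhere is to keep every resolved edge case in a regression suite that is re-run after each change. The sketch below illustrates the idea; the `predict` function and the example cases are hypothetical placeholders, not any particular system's model.

```python
# Minimal sketch of an edge-case regression suite.
# `predict` stands in for whatever model or pipeline is under test.

def predict(text: str) -> str:
    """Placeholder classifier: routes a support message to a queue."""
    text = text.lower()
    if "refund" in text:
        return "billing"
    if "password" in text:
        return "account"
    return "general"

# Every edge case we have ever fixed is added here, so it is
# re-checked automatically after each model, data, or code change.
EDGE_CASES = [
    ("I want my refund now", "billing"),
    ("Reset my PASSWORD please", "account"),
    ("hello there", "general"),
]

def run_regression(cases):
    """Return the list of (input, expected, actual) failures."""
    return [(inp, exp, predict(inp))
            for inp, exp in cases
            if predict(inp) != exp]

if __name__ == "__main__":
    failures = run_regression(EDGE_CASES)
    passed = len(EDGE_CASES) - len(failures)
    print(f"{passed}/{len(EDGE_CASES)} edge cases pass")
```

The suite grows with the system: each new fix adds a case, so the cost of verifying "did we break anything?" stays a single command rather than a manual review.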

So, how do you know when you are done?

That is the $1M, $10M, or $100M question. How much time and money do you have? Like humans, the AI system will never be perfect – how close to perfect is close enough? We can use a quality threshold: “We need to be 98% correct,” then spend all the time and money needed to achieve that goal. Or we can use a budget: “We will spend $1M over 6 months and then assess the quality.” Having an experienced team, one that knows whether the architecture, data, or software needs to change to address each issue, will increase the chances of success.

The truth is, you are never really done. As data in the operational environment evolves, re-assessing the model’s performance is required – new edge cases will arise and must be addressed. We often deploy a “model watcher” application to alert us to when new training is needed.
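A “model watcher” of this kind can be as simple as tracking correctness over a sliding window of recent predictions and raising an alert when quality drops below the production threshold. This is a minimal sketch of that idea, assuming ground-truth feedback eventually arrives for each prediction; the class name and thresholds are illustrative, not a specific product.

```python
from collections import deque

class ModelWatcher:
    """Tracks recent prediction correctness and flags when quality
    drops below a threshold, signaling that retraining may be needed."""

    def __init__(self, threshold: float = 0.95, window: int = 100):
        self.threshold = threshold
        # Fixed-size window: old results fall off as new ones arrive.
        self.results = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        """Record whether the latest prediction matched ground truth."""
        self.results.append(correct)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_retraining(self) -> bool:
        """Alert only once the window is full, to avoid noisy early alarms."""
        if len(self.results) < self.results.maxlen:
            return False
        return self.accuracy() < self.threshold
```

In practice the watcher would be fed by whatever feedback loop the operational environment provides (human review, downstream corrections, delayed labels), and the alert would page the team rather than just return a boolean.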

Have more questions about this or other topics? Click here to schedule a complimentary Q&A session with our AI Experts.

Scott Pringle

Scott Pringle is an experienced hands-on technology executive trained in applied mathematics and software systems engineering. He is passionate about using first principles to drive innovation and accelerate time to value.