Monitor Models with Continuous Tests
Continuous Testing monitors your model and alerts on issues such as data drift and performance degradation. When an issue is detected, it also offers automated root cause analysis to identify the underlying driver of the performance change.
A production AI service often requires regular maintenance to support model upgrades, new training data, and other improvements. This notebook demonstrates how to update your Continuous Test, configure how tests are run, and update the alert and notification settings.
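A typical session begins by connecting to your Robust Intelligence cluster and retrieving the Continuous Testing instance for a project. The sketch below assumes the Python `rime_sdk` package; the cluster URL, API key, and project ID are placeholders, and the accessor names are illustrative assumptions rather than guaranteed SDK signatures.

```python
# Minimal sketch: connect to a Robust Intelligence cluster and fetch the
# Continuous Testing instance for a project. The URL, API key, and project
# ID are placeholders; the accessors below are illustrative assumptions.
from rime_sdk import Client

client = Client("rime.example.com", api_key="<YOUR_API_KEY>")

# Look up the project that owns the deployed model (hypothetical ID).
project = client.get_project("<PROJECT_ID>")

# Retrieve the project's Continuous Testing instance so that its run
# configuration, alerts, and notification settings can be updated.
ct_instance = project.get_ct_instance()  # illustrative accessor
```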
There are multiple ways to load data into the CT instance: you can either manually load data at a regular cadence or set up scheduling within Robust Intelligence itself. Both patterns are illustrated in the sketch below; the walkthroughs and guides that follow cover loading data into your CT instance for production model monitoring in more detail.
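As a concrete illustration of the two patterns, the sketch below continues the session from the previous example. The method names (`register_dataset`, `activate_scheduled_ct`) and their arguments are hypothetical stand-ins for whichever upload and scheduling calls your SDK version provides.

```python
# Minimal sketch of the two loading patterns. Method names and arguments
# are hypothetical stand-ins, not guaranteed SDK signatures.
from rime_sdk import Client

client = Client("rime.example.com", api_key="<YOUR_API_KEY>")
ct_instance = client.get_project("<PROJECT_ID>").get_ct_instance()

# Option 1: manually register a new batch of production data at your own
# cadence, e.g. from a daily cron job (hypothetical call, placeholder path).
ct_instance.register_dataset(
    data_path="s3://my-bucket/predictions/2024-01-01.parquet",
    start_time="2024-01-01T00:00:00Z",
    end_time="2024-01-02T00:00:00Z",
)

# Option 2: let Robust Intelligence pull new data on a schedule by pointing
# CT at a registered data location (hypothetical call).
ct_instance.activate_scheduled_ct(
    data_location="s3://my-bucket/predictions/",
    cadence="daily",
)
```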
Continuous Testing continually checks whether pre-configured tests are failing by creating a monitor over each test. When a test fails, its monitor creates an event that marks a continuous period of abnormal model behavior. For example, if the data drift test fails, the monitor creates an event indicating that the model's performance has declined, and it explains this decline by linking the performance change to its underlying root cause: the concurrently failing tests. The sketch below illustrates this monitor-and-event abstraction; further detail can be found in the links below.
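The following sketch shows how these abstractions might be traversed programmatically. The accessors `list_monitors` and `list_events` and the event fields are hypothetical; they illustrate the monitor-to-event relationship described above rather than an exact SDK surface.

```python
# Minimal sketch: walk each monitor's events and print the failing tests
# that CT links to as root causes. All accessors and fields here are
# hypothetical illustrations of the monitor -> event abstraction.
from rime_sdk import Client

client = Client("rime.example.com", api_key="<YOUR_API_KEY>")
ct_instance = client.get_project("<PROJECT_ID>").get_ct_instance()

for monitor in ct_instance.list_monitors():    # one monitor per test
    for event in monitor.list_events():        # periods of abnormal behavior
        print(monitor.name, event.start_time, event.end_time)
        # The concurrently failing tests explain the performance change.
        print("  root cause tests:", event.failing_tests)
```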