Verta Model Monitoring lets you monitor drift, outliers, and model performance metrics.
Get started with monitoring:

1. Register a model

Create a Registered Model and add Registered Model Versions (RMVs) to it.
registered_model = client.get_or_create_registered_model(name="census-model")
model_version = registered_model.create_standard_model(
    name="v1",
    model_cls=CensusIncomeClassifier,
    model_api=ModelAPI(X_train, Y_train_with_confidence),
    environment=Python(requirements=["scikit-learn"]),
    artifacts=artifacts_dict,
)
Note: ModelAPI captures your model schema, which helps the monitoring system automatically define monitoring metrics, dashboards, and alerts for feature and prediction data. As of Verta Release 2022.04, confidence scores are recommended for classification models so that ROC and PR curves can be computed accurately.
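Since the note above recommends returning confidence scores alongside class labels, here is a minimal sketch of what such a model class might look like. The class name comes from the snippet above, but the scoring rule, output keys, and constructor signature are hypothetical; the real class would load a trained estimator from its artifacts and follow Verta's model base-class conventions.

```python
class CensusIncomeClassifier:
    """Sketch of a classifier whose predictions include a confidence
    score, as recommended for computing ROC and PR curves."""

    def __init__(self, artifacts=None):
        # In a real model, load the trained estimator from artifacts here.
        self.threshold = 0.5

    def predict(self, rows):
        # Each row is a list of numeric features; return a label plus a
        # confidence score per row (toy scoring rule for illustration).
        results = []
        for row in rows:
            score = min(1.0, max(0.0, sum(row) / (len(row) * 10)))
            label = ">50K" if score >= self.threshold else "<=50K"
            results.append({"output-class": label, "confidence": score})
        return results
```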

2. Log reference data

Upload your reference data as an artifact in your Registered Model Version. This helps facilitate downstream drift monitoring against this reference set.
model_version.log_reference_data(X_train_reference, Y_train_reference)
Note: You do not need to upload your entire training set; a statistically significant sample that mirrors your training data distribution is enough.

3. Deploy the model

Deploy an endpoint with the model version. Once the endpoint is deployed, the monitored model automatically appears in the Monitoring list view in the webapp.
endpoint = client.get_or_create_endpoint("Census")
endpoint.update(model_version, wait=True)

4. Send predictions

Start sending input data for prediction. Once the data has been sent to the system, you can navigate to the webapp to view dashboards.
deployed_model = endpoint.get_deployed_model()
id, _ = deployed_model.predict_with_id(input_feature)
Note: The model makes a prediction and assigns a UUID to it. Ground truth is later registered with the system using this UUID.
Drift dashboard in the webapp.
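In practice, predictions are sent in a loop over incoming rows, keeping each returned UUID so ground truth can be joined back later. A sketch of that bookkeeping, using a stand-in for the deployed model (the real object comes from endpoint.get_deployed_model(); the stub class and its scoring rule below are hypothetical):

```python
import uuid

class StubDeployedModel:
    """Stand-in for endpoint.get_deployed_model(), for illustration only."""
    def predict_with_id(self, features):
        prediction = ">50K" if sum(features) > 50 else "<=50K"
        return str(uuid.uuid4()), prediction

deployed_model = StubDeployedModel()

# Keep the id -> features mapping so ground truth can be logged later.
pending = {}
for features in [[30, 40], [5, 10]]:
    pred_id, prediction = deployed_model.predict_with_id(features)
    pending[pred_id] = features
```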

5. Ingest ground truth

Ingest ground truth, and the system will start computing performance metrics such as accuracy, precision, and confusion matrices.
endpoint.log_ground_truth(id, label, "output-class")  # prediction id, ground-truth label, prediction column name
Performance dashboard in the webapp.
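True labels often arrive hours or days after the prediction, so a common pattern is to collect them keyed by prediction UUID and log them in a batch. A sketch of such a helper, built only on the log_ground_truth call shown above (the function name and labels_by_id mapping are hypothetical):

```python
def log_ground_truths(endpoint, labels_by_id, prediction_col="output-class"):
    """Log each (prediction id, true label) pair against the endpoint.

    labels_by_id maps the UUID returned by predict_with_id to the
    ground-truth label for that prediction. Returns the ids logged.
    """
    logged = []
    for pred_id, label in labels_by_id.items():
        endpoint.log_ground_truth(pred_id, label, prediction_col)
        logged.append(pred_id)
    return logged
```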