Deploying an XGBoost model

As mentioned in the deploying models guide, deploying models via Verta Inference is a two-step process: (1) create an endpoint, and (2) update the endpoint with a model.

This tutorial explains how Verta Inference can be used to deploy an XGBoost model.

1. Create an endpoint

Users can create an endpoint using Client.create_endpoint() as follows:

wine_endpoint = client.create_endpoint(path="/wine")

2. Update the endpoint with an RMV

As discussed in the Registry Overview, there are multiple ways to create a Registered Model Version (RMV) for an XGBoost model.

First, if we are provided with an XGBoost model object, users can use the XGBoost convenience functions to create a Verta Standard Model. Note that XGBoost requires sklearn as a dependency for deployment.

from verta.environment import Python

model_version = registered_model.create_standard_model_from_xgboost(
    model,
    environment=Python(requirements=["xgboost", "sklearn"]),
    name="v1",
)

Alternatively, a serialized XGBoost model can be used as an artifact in a model that extends VertaModelBase, as sketched below.
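For the artifact-based approach, a minimal sketch might look like the following, assuming the booster has already been saved to a local file. The artifact key "serialized_model", the file name "model.json", and the version name "v2" are illustrative and should be adapted to your setup.

import numpy as np
import xgboost as xgb

from verta.environment import Python
from verta.registry import VertaModelBase

class XGBoostWrapper(VertaModelBase):
    def __init__(self, artifacts):
        # artifacts maps the key used below to the file's path at deployment time
        self.model = xgb.Booster()
        self.model.load_model(artifacts["serialized_model"])

    def predict(self, input):
        # convert incoming rows to a DMatrix before scoring
        return self.model.predict(xgb.DMatrix(np.asarray(input))).tolist()

model_version = registered_model.create_standard_model(
    XGBoostWrapper,
    environment=Python(requirements=["xgboost"]),
    artifacts={"serialized_model": "model.json"},  # path to the saved booster
    name="v2",
)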

Regardless of how the Registered Model Version has been created, the endpoint defined above can now be updated, and we can make predictions against it.

wine_endpoint = client.get_or_create_endpoint("wine")
wine_endpoint.update(model_version, wait=True)
deployed_model = wine_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])

The full code for this tutorial can be found here.