Deploying an XGBoost model
As mentioned in the deploying models guide, deploying models via Verta Inference is a two-step process: (1) create an endpoint, and (2) update the endpoint with a model.
This tutorial explains how Verta Inference can be used to deploy an XGBoost model.

1. Create an endpoint

Users can create an endpoint using Client.create_endpoint() as follows:
wine_endpoint = client.create_endpoint(path="/wine")

2. Update the endpoint with an RMV

As discussed in the Registry Overview, there are multiple ways to create an RMV (Registered Model Version) for an XGBoost model.
First, given an XGBoost model object, users can use the XGBoost convenience functions to create a Verta Standard Model. Note: XGBoost requires sklearn as a dependency for deployment.
from verta.environment import Python

model_version = registered_model.create_standard_model_from_xgboost(
    model, environment=Python(requirements=["xgboost", "sklearn"]), name="v1")
Alternatively, an XGBoost serialized model can be used as an artifact in a model that extends VertaModelBase.
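Below is a minimal sketch of that artifact-based approach. The wrapper class name, the artifact key "serialized_model", the file name "model.json", and the use of Booster.save_model()/load_model() for serialization are illustrative assumptions, not prescribed by this tutorial.

import numpy as np
import xgboost as xgb
from verta.environment import Python
from verta.registry import VertaModelBase, verify_io

class XGBoostModel(VertaModelBase):
    def __init__(self, artifacts):
        # at deployment time, `artifacts` maps artifact keys to local file paths
        self.model = xgb.Booster()
        self.model.load_model(artifacts["serialized_model"])

    @verify_io
    def predict(self, data):
        # convert the JSON-compatible input into a DMatrix before predicting
        return self.model.predict(xgb.DMatrix(np.asarray(data))).tolist()

model_version = registered_model.create_standard_model(
    XGBoostModel,
    environment=Python(requirements=["xgboost", "numpy"]),
    artifacts={"serialized_model": "model.json"},  # assumed path to the serialized model
    name="v1-artifact",
)

Either way, the resulting RMV is deployed to the endpoint in exactly the same manner.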
Regardless of how the Registered Model Version has been created, the endpoint defined above can now be updated with it, and we can make predictions against it.
wine_endpoint = client.get_or_create_endpoint("wine")
wine_endpoint.update(model_version, wait=True)
deployed_model = wine_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])
The full code for this tutorial can be found here.