Deploying a multi-framework model
Within Verta, a "Model" can be any arbitrary function: a traditional ML model (e.g., scikit-learn, PyTorch, TensorFlow); a function (e.g., squaring a number, making a DB call); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application).
This tutorial provides an example of how to deploy models using multiple frameworks on Verta. In this case, we will consider a model that uses scikit-learn and XGBoost.
The key concept in Verta for model deployment is an Endpoint. An endpoint is a URL where a deployed model becomes available for use. Deploying a model is therefore a 2-step process:
Create an endpoint
Update the endpoint with a model
We'll look at these in turn.
1. Create an endpoint
Users can create an endpoint using Client.create_endpoint() as follows:
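A minimal sketch is shown below; the host and the endpoint path `/multi-framework-model` are placeholders for illustration, not prescribed values.

```python
from verta import Client

# Connect to Verta; credentials can also be read from environment variables.
client = Client("https://app.verta.ai")

# Create an endpoint at a chosen path (placeholder path for this sketch).
endpoint = client.create_endpoint(path="/multi-framework-model")
```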
2. Update the endpoint with a model
To deploy a model that uses multiple frameworks, we wrap the logic in a class that extends VertaModelBase.
Note that the different frameworks likely expect input and output in different formats, and your class needs to account for that.
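As a rough illustration, such a wrapper class might look like the sketch below. The artifact keys ("sklearn_model", "xgb_model") and the way the two predictions are combined are assumptions made for this example, not a required pattern.

```python
import cloudpickle
from verta.registry import VertaModelBase, verify_io

class MultiFrameworkModel(VertaModelBase):
    def __init__(self, artifacts):
        # Load each serialized model from the artifacts supplied at deployment.
        # The artifact keys here are hypothetical.
        with open(artifacts["sklearn_model"], "rb") as f:
            self.sklearn_model = cloudpickle.load(f)
        with open(artifacts["xgb_model"], "rb") as f:
            self.xgb_model = cloudpickle.load(f)

    @verify_io
    def predict(self, data):
        # This sketch assumes both models accept the same 2-D numeric input
        # (e.g., the XGBoost model was trained via its scikit-learn API).
        sklearn_preds = self.sklearn_model.predict(data)
        xgb_preds = self.xgb_model.predict(data)
        # Combine the two frameworks' outputs (simple averaging, as an example).
        return [
            (s + x) / 2
            for s, x in zip(sklearn_preds.tolist(), xgb_preds.tolist())
        ]
```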
Once the class has been defined, we create a Registered Model Version with it and update the endpoint.
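Putting the pieces together, registration and deployment might look like the following sketch. The registered model name, version name, requirements list, and serialized artifact paths are all assumptions for illustration.

```python
from verta.environment import Python

# Register the model (placeholder name).
registered_model = client.get_or_create_registered_model(
    name="multi-framework-example"
)

# Create a version from the wrapper class, declaring both frameworks as
# requirements and attaching the serialized models as artifacts.
# The artifact file paths below are hypothetical.
model_version = registered_model.create_standard_model(
    model_cls=MultiFrameworkModel,
    environment=Python(requirements=["scikit-learn", "xgboost"]),
    artifacts={
        "sklearn_model": "sklearn_model.pkl",
        "xgb_model": "xgb_model.pkl",
    },
    name="v1",
)

# Deploy the version to the endpoint created earlier.
endpoint.update(model_version, wait=True)

# Once the update completes, the model can be queried through the endpoint.
deployed_model = endpoint.get_deployed_model()
print(deployed_model.predict([[0.0, 1.0, 2.0, 3.0]]))
```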
See the full code for this tutorial here.