Deploying a pure function
Within Verta, a "Model" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF); a function (e.g., squaring a number, making a DB call); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application).
This tutorial provides an example of how to deploy any function on Verta.
As mentioned in the deploying models guide, deploying models via Verta Inference is a two-step process: (1) create an endpoint, and (2) update the endpoint with a model.
This tutorial explains how Verta Inference can be used to deploy an arbitrary function.
1. Create an endpoint
Users can create an endpoint using Client.create_endpoint()
as follows:
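A minimal sketch, assuming the client picks up credentials from the VERTA_HOST, VERTA_EMAIL, and VERTA_DEV_KEY environment variables (the path "/cube" is illustrative, not prescribed):

```python
from verta import Client

# Credentials and host are read from the VERTA_HOST, VERTA_EMAIL,
# and VERTA_DEV_KEY environment variables
client = Client()

# "/cube" is an illustrative path; once a model is deployed,
# the endpoint will serve predictions at this URL
endpoint = client.create_endpoint(path="/cube")
```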
2. Update the endpoint with a Registered Model Version
To deploy an arbitrary function within Verta, the function must be wrapped in a class that extends VertaModelBase (see this guide).
For example, suppose we have a cubic transform that we want to deploy to Verta. Note that this function could be absolutely anything -- make a database query, call a REST endpoint, use the associated artifacts to do further processing, etc.
Regardless of how a Registered Model Version has been created, the endpoint defined above can now be updated, and we can make predictions against it.
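A sketch of registering the class above and updating the endpoint with it (the names "cubic-transform" and "/cube" are assumptions carried over from the earlier snippets; this requires a live Verta backend):

```python
from verta import Client
from verta.environment import Python

client = Client()

# Create a Registered Model Version from the CubicTransform class
registered_model = client.get_or_create_registered_model(name="cubic-transform")
model_version = registered_model.create_standard_model(
    CubicTransform,
    environment=Python(requirements=[]),  # no extra pip dependencies needed
)

# Update the endpoint and wait for the new build to go live
endpoint = client.get_endpoint(path="/cube")
endpoint.update(model_version, wait=True)

# Make predictions against the live endpoint
deployed_model = endpoint.get_deployed_model()
print(deployed_model.predict([2, 3]))
```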
The full code for this tutorial can be found here.