Deploying a TensorFlow model

As mentioned in the deploying models guide, deploying models via Verta Inference is a two-step process: (1) create an endpoint, and (2) update the endpoint with a model.

This tutorial explains how Verta Inference can be used to deploy a TensorFlow model.

1. Create an endpoint

Users can create an endpoint using Client.create_endpoint() as follows:

census_endpoint = client.create_endpoint(path="/census")
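
The snippet above assumes client is an authenticated verta.Client. As a minimal sketch, it can be constructed as follows (assuming your connection details are held in the VERTA_HOST, VERTA_EMAIL, and VERTA_DEV_KEY environment variables):

from verta import Client

# picks up VERTA_HOST, VERTA_EMAIL, and VERTA_DEV_KEY from the environment
client = Client()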

2. Update the endpoint with an RMV

As discussed in the Catalog Overview, there are multiple ways to create an RMV (Registered Model Version) for a TensorFlow model.

First, given a Keras TensorFlow model object, users can use the Keras convenience functions to create a Verta Standard Model.

from verta.environment import Python

model_version_from_obj = registered_model.create_standard_model_from_keras(
    model,
    environment=Python(requirements=["tensorflow"]),
    name="v1")

Alternatively, a TensorFlow SavedModel can be used as an artifact in a model class that extends VertaModelBase.

model.save("mnist.tf_saved_model")
from verta.registry import VertaModelBase

class MNISTModel(VertaModelBase):
    def __init__(self, artifacts):
        import tensorflow as tf
        self.model = tf.keras.models.load_model(
            artifacts["mnist_model"])

    def predict(self, input_data):
        # tensorflow must be imported here too; the import in __init__ is local to that method
        import tensorflow as tf
        output = []
        for input_data_point in input_data:
            reshaped_data = tf.reshape(input_data_point, (1, 28, 28))
            output.append(self.model(reshaped_data).numpy().tolist())
        return output

model_version_from_cls = registered_model.create_standard_model(
    MNISTModel,
    environment=Python(requirements=["tensorflow"]),
    name="v2",
    artifacts={"mnist_model": "mnist.tf_saved_model/"},
)

Note that the inputs and outputs of the predict() method must be JSON serializable. For the full list of acceptable data types for model I/O, refer to the VertaModelBase documentation.
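
In particular, NumPy arrays are not JSON serializable. As a sketch, assuming x_test is a NumPy array of 28x28 images, convert inputs to nested lists before sending them to an endpoint:

# nested Python lists are JSON serializable; NumPy arrays are not
json_ready_input = x_test[0].tolist()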

Prior to deploying, don't forget to test your model class locally, as shown below.

# test locally
mnist_model1 = MNISTModel({"mnist_model": "mnist.tf_saved_model/"})
mnist_model1.predict([x_test[0]])

To ensure that the requirements specified in the model version are in fact adequate, you may build the model container locally or as part of a continuous integration system. You may also deploy the model and make test predictions, as shown below.
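
As a sketch of such a check, a smoke test along these lines could run in CI (the module path model.py and the test below are hypothetical):

# test_mnist_model.py -- hypothetical CI smoke test
import numpy as np

from model import MNISTModel  # assumes the class above lives in model.py

def test_predict_returns_class_probabilities():
    mnist_model = MNISTModel({"mnist_model": "mnist.tf_saved_model/"})
    dummy_image = np.zeros((28, 28), dtype="float32")
    output = mnist_model.predict([dummy_image])
    assert len(output) == 1          # one prediction per input
    assert len(output[0][0]) == 10   # ten class scores, per the sketch above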

Regardless of how the Registered Model Version has been created, an endpoint can now be updated with it and predictions can be made against it.

mnist_endpoint = client.get_or_create_endpoint("mnist")
mnist_endpoint.update(model_version_from_obj, wait=True)
deployed_model = mnist_endpoint.get_deployed_model()
deployed_model.predict([x_test[0]])

The full code for this tutorial can be found here.
