Deploying a PyTorch model
As mentioned in the deploying models guide, deploying models via Verta Inference is a two-step process: (1) create an endpoint, and (2) update the endpoint with a model.
This tutorial explains how Verta Inference can be used to deploy a PyTorch model.

1. Create an endpoint

Users can create an endpoint using Client.create_endpoint() as follows:
census_endpoint = client.create_endpoint(path="/census")

2. Update the endpoint with an RMV

As discussed in the Registry Overview, there are multiple ways to create an RMV for a PyTorch model.
First, if we are provided with a PyTorch model object, users can use the PyTorch convenience functions to create a Verta Standard Model.
from verta.environment import Python

model_version = registered_model.create_standard_model_from_torch(
    model,
    environment=Python(requirements=["torch", "torchvision"]),
    name="v1",
)
Alternatively, a serialized PyTorch saved model can be used as an artifact in a model that extends VertaModelBase.
torch.save(model.state_dict(), "model.pth")

from verta.registry import VertaModelBase

class FashionMNISTClassifier(VertaModelBase):
    def __init__(self, artifacts):
        self.model = NeuralNetwork()
        self.model.load_state_dict(torch.load(artifacts["model.pth"]))

    def predict(self, batch_input):
        results = []
        for one_input in batch_input:
            with torch.no_grad():
                pred = self.model(one_input)
            results.append(pred)
        return results

model_version = registered_model.create_standard_model(
    model_cls=FashionMNISTClassifier,
    environment=Python(requirements=["torch", "torchvision"]),
    artifacts={"model.pth": "model.pth"},
    name="v2",
)
Note that the input and output of the predict function must be JSON serializable. For the full list of acceptable data types for model I/O, refer to the VertaModelBase documentation.
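To see what "JSON serializable" means in practice, here is a minimal sketch using only the standard library: nested lists of numbers (e.g. the result of calling `.tolist()` on a tensor) serialize cleanly, while raw objects such as tensors do not. The `FakeTensor` class below is a hypothetical stand-in for a non-serializable object, used for illustration only.

```python
import json

# Nested lists of floats (e.g. from tensor.tolist()) serialize cleanly:
prediction = [[0.1, 0.7, 0.2], [0.9, 0.05, 0.05]]
print(json.dumps(prediction))

# Raw objects do not; a quick json.dumps check catches them early:
class FakeTensor:  # hypothetical stand-in for a torch.Tensor
    pass

try:
    json.dumps(FakeTensor())
except TypeError as e:
    print("not serializable:", e)
```

A quick `json.dumps` check like this on a sample prediction can catch type errors before the model is deployed.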
Before deploying, don't forget to test your model class locally as follows.
# test locally
mnist_model1 = FashionMNISTClassifier({"model.pth": "model.pth"})
mnist_model1.predict([test_data[0][0]])
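The local-testing pattern above works because the model class takes a plain artifacts dictionary in `__init__` and a batch in `predict`, so it can be instantiated directly without any deployment machinery. The torch-free sketch below illustrates that same contract; `EchoClassifier` and its behavior are hypothetical stand-ins, not part of the Verta API.

```python
# A torch-free stand-in illustrating the __init__(artifacts)/predict(batch_input)
# contract used by VertaModelBase subclasses. EchoClassifier is hypothetical,
# for local-testing illustration only.
class EchoClassifier:
    def __init__(self, artifacts):
        # In a real model, artifacts["model.pth"] is a path to load weights from.
        self.artifact_path = artifacts["model.pth"]

    def predict(self, batch_input):
        # Echo each input back, mimicking a per-sample prediction loop.
        return [one_input for one_input in batch_input]

model = EchoClassifier({"model.pth": "model.pth"})
print(model.predict([[1, 2, 3]]))  # [[1, 2, 3]]
```

Because the class depends only on the dictionary passed in, unit tests can feed it dummy artifact paths and sample batches exactly as the tutorial does with `FashionMNISTClassifier`.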
To ensure that the requirements specified in the model version are in fact adequate, you may build the model container locally or as part of a continuous integration system. You may also deploy the model and make test predictions as shown below.
Regardless of how a Registered Model Version has been created, the endpoint defined above can now be updated, and we can make predictions against it.
fashion_mnist_endpoint = client.get_or_create_endpoint("fashion-mnist")
fashion_mnist_endpoint.update(model_version, wait=True)
deployed_model = fashion_mnist_endpoint.get_deployed_model()
deployed_model.predict([test_data[0][0]])
The full code for this tutorial can be found here.