Quickstart

Models can be deployed using Verta's scalable, configurable endpoints.

Endpoints are easy to create through either the client:

# create an endpoint at the path /classify-income
endpoint = client.create_endpoint("classify-income")
# deploy a registered model version to the endpoint
endpoint.update(model_ver)
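
This snippet assumes a client has already been instantiated and a model version (model_ver) obtained from the Verta model registry. A minimal setup sketch might look like the following; the host, credentials, and registry names are placeholders, and the registry lookup is one assumed way to fetch an existing model version:

from verta import Client

# placeholder host and credentials
client = Client(
    host="app.verta.ai",
    email="you@example.com",
    dev_key="12345678-abcd-1234-abcd-123456789012",
)

# assumed registry lookup; "census-model" and "v1" are hypothetical names
model_ver = client.get_registered_model("census-model").get_version(name="v1")

endpoint = client.create_endpoint("classify-income")
endpoint.update(model_ver)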

or the webapp.

endpoint.update() can also accept an experiment run rather than a model version. In addition, the wait parameter can be set to True to block until the deployed model is ready before the next line of code executes.

endpoint.update(run, wait=True)

Once an endpoint is deployed, predictions can be made against it either through the Python client:

deployed_model = endpoint.get_deployed_model()
# data is an iterable of input rows, e.g. [[1, 2, 3, 4, 5, 6], ...]
for row in data:
    print(deployed_model.predict(row))

or via a REST call:

curl -X POST https://app.verta.ai/api/v1/predict/classify-income \
     -H "Content-Type: application/json" \
     -H "Access-token: 12345678-abcd-1234-abcd-123456789012" \
     -d '[1,2,3,4,5,6]'
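
The same request can be made from any HTTP client. For example, a rough equivalent using Python's requests library, with the same placeholder token and endpoint path as the curl call above:

import requests

# placeholder access token and endpoint path, matching the curl example
response = requests.post(
    "https://app.verta.ai/api/v1/predict/classify-income",
    json=[1, 2, 3, 4, 5, 6],  # json= also sets the Content-Type header
    headers={"Access-token": "12345678-abcd-1234-abcd-123456789012"},
)
print(response.json())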
