To make the catalog-to-deployment cycle faster and safer to iterate on, Verta provides responsive, targeted feedback when a model doesn't behave as expected, along with tools for validating the dependencies included in the model's environment.
VertaModelBase.model_test()
The VertaModelBase interface supports a model_test() method for model verification, which can be implemented with any desired calls and checks.
This method is called automatically while a deployed Verta endpoint is initializing; any exception it raises causes the endpoint update to fail, returning the error and preventing any predictions from being made.
Verification during endpoint initialization requires the 2023_03 release of the Verta platform.
Here is an example model that will fail its model test. Its model_test() method calls predict(), then checks the expected values of the output and the captured data logs:
from verta.registry import VertaModelBase, verify_io
from verta import runtime


class LoudEcho(VertaModelBase):
    """ Takes a string and makes it LOUD!!! """

    def __init__(self, artifacts=None):
        pass

    @verify_io
    def predict(self, input: str) -> str:
        runtime.log('model_input', input)
        echo: str = input + '!!!'
        runtime.log('model_output', echo)
        return echo

    def model_test(self):
        # call predict(), capturing model data logs
        input = 'roar'
        with runtime.context() as ctx:
            output = self.predict(input)
        logs = ctx.logs()

        # check predict() output
        expected_output = 'ROAR!!!'
        if output != expected_output:
            raise ValueError(f"expected output {expected_output}, got {output}")

        # check model data logs
        expected_logs = {'model_input': 'roar', 'model_output': 'ROAR!!!'}
        if logs != expected_logs:
            raise ValueError(f"expected logs {expected_logs}, got {logs}")
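Because model_test() is an ordinary method, the same checks can be run locally before the model is even cataloged; a quick sketch:

model = LoudEcho()
model.model_test()  # raises ValueError locally, before any deployment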
When this model is cataloged and deployed, the update encounters an exception that propagates the full error message from model_test():
endpoint.update(model_ver, wait=True)
raises
RuntimeError: endpoint update failed;
Error in model container: Using predict as predictor method
Traceback (most recent call last):
  File "/root/.pyenv/versions/3.10.9/bin/pythonmodelservice", line 33, in <module>
    sys.exit(load_entry_point('pythonmodelservice==0.1.0', 'console_scripts', 'pythonmodelservice')())
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/__main__.py", line 21, in main
    runtime = _init_runtime()
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/__main__.py", line 13, in _init_runtime
    runtime = new_runtime()
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/runtime/runtime.py", line 22, in new_runtime
    return Runtime()
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/runtime/cherrypy/runtime.py", line 27, in __init__
    self.model_wrapper.model_test()  # test on init
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/wrappers/model/abc_model_wrapper.py", line 109, in model_test
    self.model.model_test()
  File "/Users/verta/Documents/model_test.py", line 29, in model_test
ValueError: expected output ROAR!!!, got roar!!!
As the ValueError indicates, predict() is missing a call to input.upper():
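@verify_io
def predict(self, input: str) -> str:
    runtime.log('model_input', input)
    echo: str = input.upper() + '!!!'  # uppercase the input before echoing
    runtime.log('model_output', echo)
    return echo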
With that change in place, the model passes its test and the endpoint update can complete.
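A quick sanity check against the live endpoint might then look like this, using the client's get_deployed_model() accessor:

endpoint.update(model_ver, wait=True)  # passes model_test() this time
deployed_model = endpoint.get_deployed_model()
print(deployed_model.predict("roar"))  # "ROAR!!!"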
Validate a Model's Dependencies Locally
Verta provides convenience functions that scan a model class for dependencies and identify any packages missing from the supplied environment. This helps catch problems early in development and avoids time wasted debugging deployment issues.
Requires Python client version: verta>=0.22.2
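The snippets below assume a hypothetical MyModel class whose code imports tensorflow, along the lines of:

import tensorflow as tf  # this import is what the dependency scan detects

from verta.registry import VertaModelBase


class MyModel(VertaModelBase):
    def __init__(self, artifacts=None):
        pass

    def predict(self, input):
        return tf.constant(input).numpy().tolist()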
Option 1.
Set the check_model_dependencies argument to True when creating a model version with registered_model.create_standard_model():
model_version = registered_model.create_standard_model(
    model_cls=MyModel,
    environment=Python([]),  # missing model dependency `tensorflow` here
    check_model_dependencies=True,
)
If the tensorflow package is used within the model class, the above code will raise an exception, including a list of missing packages:
RuntimeError: the following packages are required by the model but missing from the environment:
tensorflow (installed via ['tensorflow'])
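Declaring the package in the environment clears the check. Continuing the hypothetical MyModel example:

model_version = registered_model.create_standard_model(
    model_cls=MyModel,
    environment=Python(["tensorflow"]),  # dependency now declared
    check_model_dependencies=True,
)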
Option 2.
Alternatively, validate the model's dependencies directly by passing the model class and the environment to verta.registry.check_model_dependencies():
from verta.registry import check_model_dependencies
from verta.environment import Python

check_model_dependencies(
    model_cls=MyModel,
    environment=Python([]),  # missing model dependency `tensorflow` here
    raise_for_missing=False,  # defaults to False; if True, raises the RuntimeError documented above
)
If the tensorflow package is used within the model class, the above code will issue a warning listing the missing packages:
RuntimeWarning: the following packages are required by the model but missing from the environment:
tensorflow (installed via ['tensorflow'])
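Because the default only issues a RuntimeWarning, a test suite may never notice it. Besides passing raise_for_missing=True, one way to fail fast, assuming the warning goes through Python's standard warnings machinery, is to escalate it to an error:

import warnings

from verta.registry import check_model_dependencies
from verta.environment import Python

with warnings.catch_warnings():
    warnings.simplefilter("error", RuntimeWarning)  # escalate the warning to an exception
    check_model_dependencies(model_cls=MyModel, environment=Python([]))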