Model verification and testing

To make iteration through the catalog-to-deployment cycle faster and safer, Verta provides more responsive, targeted feedback when a model doesn't behave as expected.

VertaModelBase.model_test()

The VertaModelBase interface supports a model_test() method for model verification, which can be implemented with any desired calls and checks.

This method is automatically called

  • when a deployed Verta endpoint is initializing; any raised exceptions will cause the endpoint update to fail, returning the error and preventing any predictions from being made.

Verification during endpoint initialization requires the 2023_03 release of the Verta platform.
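
At its simplest, model_test() is just a method on the model class: raising an exception signals failure, and returning normally signals success. A minimal sketch (the class name and checks here are illustrative only):

from verta.registry import VertaModelBase

class MyModel(VertaModelBase):
    def __init__(self, artifacts=None):
        pass

    def predict(self, input):
        return input  # placeholder prediction logic

    def model_test(self):
        # any calls and checks can go here; raising an exception fails the
        # test, while returning normally means the test passed
        if self.predict("example") != "example":
            raise ValueError("unexpected predict() output")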

Here is an example model that will fail its model test. Its model_test() method calls predict() and checks the expected values of the output and the model data logs:

from verta.registry import VertaModelBase, verify_io
from verta import runtime

class LoudEcho(VertaModelBase):
    """ Takes a string and makes it LOUD!!! """
    def __init__(self, artifacts=None):
        pass
            
    @verify_io
    def predict(self, input: str) -> str:
        runtime.log('model_input', input)
        echo: str = input + '!!!'
        runtime.log('model_output', echo)
        return echo
        
    def model_test(self):
        # call predict(), capturing model data logs
        input = 'roar'
        with runtime.context() as ctx:
            output = self.predict(input)
        logs = ctx.logs()

        # check predict() output
        expected_output = 'ROAR!!!'
        if output != expected_output:
            raise ValueError(f"expected output {expected_output}, got {output}")
        
        # check model data logs
        expected_logs = {'model_input': 'roar', 'model_output': 'ROAR!!!'}
        if logs != expected_logs:
            raise ValueError(f"expected logs {expected_logs}, got {logs}")

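For context, cataloging this model and creating the endpoint referenced below might look roughly like this (a sketch only; the model name, endpoint path, and environment are assumptions, not part of the original example):

from verta import Client
from verta.environment import Python

client = Client()  # reads connection details from the VERTA_* environment variables

# register the model class as a new model version in the catalog
registered_model = client.get_or_create_registered_model(name="loud-echo")
model_ver = registered_model.create_standard_model(
    model_cls=LoudEcho,
    environment=Python(requirements=[]),
)

# create (or fetch) the endpoint the model version will be deployed to
endpoint = client.get_or_create_endpoint("/loud-echo")
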
When this model is cataloged and deployed, the endpoint update will encounter an exception, propagating the full error message from model_test():

endpoint.update(model_ver, wait=True)

raises

RuntimeError: endpoint update failed;
Error in model container: Using predict as predictor method
Traceback (most recent call last):
  File "/root/.pyenv/versions/3.10.9/bin/pythonmodelservice", line 33, in <module>
    sys.exit(load_entry_point('pythonmodelservice==0.1.0', 'console_scripts', 'pythonmodelservice')())
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/__main__.py", line 21, in main
    runtime = _init_runtime()
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/__main__.py", line 13, in _init_runtime
    runtime = new_runtime()
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/runtime/runtime.py", line 22, in new_runtime
    return Runtime()
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/runtime/cherrypy/runtime.py", line 27, in __init__
    self.model_wrapper.model_test()  # test on init
  File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/app/wrappers/model/abc_model_wrapper.py", line 109, in model_test
    self.model.model_test()
  File "/Users/verta/Documents/model_test.py", line 29, in model_test
ValueError: expected output ROAR!!!, got roar!!!

As the error message indicates, predict() is missing a call to input.upper():

    @verify_io
    def predict(self, input: str) -> str:
        runtime.log('model_input', input)
        echo: str = input.upper() + '!!!'
        runtime.log('model_output', echo)
        return echo

With this change, the model passes its test, the endpoint update succeeds, and the endpoint becomes available for predictions.
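
Once a new model version has been created from the corrected class (in the same way as the cataloging sketch above; fixed_model_ver is a hypothetical name for it), redeploying and making a prediction might look like this:

# redeploy with the corrected model version; model_test() now passes during initialization
endpoint.update(fixed_model_ver, wait=True)

# make a prediction against the live endpoint
deployed_model = endpoint.get_deployed_model()
print(deployed_model.predict("roar"))  # "ROAR!!!"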
