Test models locally with Docker
You may want to build a local version of the container used for inference, commonly to debug a model or to comply with operational procedures.
Verta supports fetching the full Docker context needed to build the image locally.
Fetching the Docker context
Any class within the Verta platform that has a download_docker_context() method, such as a RegisteredModelVersion, automatically supports fetching the Docker context that can be used to build an image.
For example, you can run the snippet below, and the client will save a file named context.tgz to your working directory with all the contents needed for the build.
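A minimal sketch, assuming a Verta client pointed at your deployment; the host, model name, and version name are placeholders, and the retrieval calls may vary by client version:

```python
from verta import Client

client = Client("https://app.verta.ai")  # placeholder host

# Placeholder model and version names; substitute your own.
model_version = client.get_registered_model("My Model").get_version("v1")

# Writes the packaged Docker context to ./context.tgz
model_version.download_docker_context("context.tgz")
```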
The Docker context can also be downloaded from an ExperimentRun.
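Again a sketch, reusing the client from above; the run ID is a placeholder:

```python
run = client.get_experiment_run(id="<run-id>")  # placeholder run ID
run.download_docker_context("context.tgz")
```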
Building the Docker image
Docker on its own doesn't let you build directly from a packaged context, so first unpack the archive; the command below saves its contents to context_folder.
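For example, using tar on the context.tgz downloaded above:

```bash
mkdir -p context_folder
tar -xzf context.tgz -C context_folder
```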
That folder contains all the information required to build an image. You can now build the image locally.
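For instance, with local-model as a placeholder image tag:

```bash
docker build -t local-model context_folder
```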
Contact Verta at help@verta.ai for any permissions needed to access the verified base images used in the build.
Running the Docker container
After the image has been built, it can be run locally, which starts serving the model.
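A sketch, reusing the placeholder tag from the build step; the port mapping is an assumption, so adjust it to the port the image actually exposes:

```bash
docker run --rm -p 5000:5000 local-model  # assumed serving port
```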
Making predictions
HTTP requests can then be made against the model at the /predict_json path, similar to those one would make through the Verta platform. Input data for the model must be passed as a JSON value under the key "input".
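A sketch of a request, assuming the container above is listening on port 5000 and a model that accepts a list of numbers:

```bash
curl -X POST http://localhost:5000/predict_json \
    -H "Content-Type: application/json" \
    -d '{"input": [1, 2, 3]}'
```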
The Python model's predict() method will receive the deserialized value of "input" as its argument.
The container will then return a JSON object containing the prediction result under "output" and model data logs under "kv".
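A sketch of the response shape, assuming a hypothetical model that doubles each input; the contents of "kv" depend on what the model logs:

```json
{
  "output": [2, 4, 6],
  "kv": {}
}
```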