Test file locally with Docker

You may want to build the container used for inference locally, for example to debug a model or to satisfy internal operations procedures.

Verta supports downloading the complete Docker context so you can build the image locally.

Fetching the Docker context

Any class in the Verta platform that provides a download_docker_context() method, such as a RegisteredModelVersion, can fetch the Docker context used to build an image.

For example, you can run

model_version.download_docker_context("context.tgz", self_contained=True)

and the client will save a file named context.tgz to your current directory containing everything needed for the build.
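For context, a minimal sketch of obtaining a model version from the registry before downloading its context might look like the following; the host URL, model name, and version name are placeholders, and it assumes the client's registry lookup methods get_registered_model() and get_version().

from verta import Client

# Connect to your Verta instance (host is a placeholder)
client = Client("https://app.verta.ai")

# Look up the registered model and the version to package (names are placeholders)
registered_model = client.get_registered_model("my-model")
model_version = registered_model.get_version("v1")

# Download the Docker context for a local build
model_version.download_docker_context("context.tgz", self_contained=True)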

The Docker context can also be downloaded from an ExperimentRun:

run.download_docker_context("context.tgz", self_contained=True)
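As a sketch, an existing run can be retrieved through the client before downloading its context; this assumes the client's get_experiment_run() lookup, and the run ID is a placeholder.

# client is the verta Client from the sketch above; the run ID is a placeholder
run = client.get_experiment_run(id="<your-run-id>")
run.download_docker_context("context.tgz", self_contained=True)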

Building the Docker image

Docker doesn't use a packaged context archive directly in a build, so unpack the context first by running

mkdir -p context_folder && tar -C context_folder -xvzf context.tgz

which will save the contents to context_folder.
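If you prefer to do this step from Python, for example in a build script, a rough equivalent using only the standard library might be:

import os
import tarfile

# Create the target directory and extract the downloaded context into it
os.makedirs("context_folder", exist_ok=True)
with tarfile.open("context.tgz", "r:gz") as tar:
    tar.extractall("context_folder")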

That folder contains all the information required to build an image. You can now run

docker build -t mymodel:latest context_folder/

to build the image locally.
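The same build can be scripted with the Docker SDK for Python (the third-party docker package), assuming the Docker daemon is running locally:

import docker

# Build the image from the unpacked context using the local Docker daemon
docker_client = docker.from_env()
image, build_logs = docker_client.images.build(path="context_folder", tag="mymodel:latest")
print(image.tags)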

Please contact Verta at help@verta.ai if you need permissions to access the verified base images used in the Docker build.

Running the Docker container

After the image has been built, it can be run locally:

docker run -p 5000:5000 mymodel:latest pythonmodelservice

which starts serving the model:

[31/Aug/2022:18:00:00] ENGINE Bus STARTING
[31/Aug/2022:18:00:00] ENGINE Started monitor thread 'Autoreloader'.
[31/Aug/2022:18:00:00] ENGINE Serving on http://0.0.0.0:5000
[31/Aug/2022:18:00:00] ENGINE Bus STARTED
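A rough equivalent of the docker run command above using the Docker SDK for Python, mapping port 5000 and starting the container in the background, could look like this:

import docker

# Start the model service container in the background with port 5000 exposed
docker_client = docker.from_env()
container = docker_client.containers.run(
    "mymodel:latest",
    "pythonmodelservice",
    ports={"5000/tcp": 5000},
    detach=True,
)
print(container.logs().decode())  # inspect the container's startup logs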

Making predictions

HTTP requests can then be made against the model at the /predict_json path, similar to those made through the Verta platform. Input data for the model must be passed as a JSON value under the key "input":

curl -X POST "http://localhost:5000/predict_json" \
    -H "Content-Type: application/json" \
    -d '{"input": ["my", "input", "data"]}'

The Python model's predict() method will receive

["my", "input", "data"]

and the container will return a JSON object containing the prediction result in "outputs" and model data logs in "kv":

{
  "outputs": ["my", "output", "data"],
  "kv": {}
}
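The same request can be made from Python with the requests library; the payload and port match the curl example above:

import requests

# Send a prediction request to the locally running container
response = requests.post(
    "http://localhost:5000/predict_json",
    json={"input": ["my", "input", "data"]},
)
print(response.json())  # e.g. {"outputs": [...], "kv": {}}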
