Concepts

The Verta Experiment Management system helps organize your work using projects, experiments, experiment runs, tags, descriptions, and anything else you need to navigate multiple development fronts without losing anything. Rich search and filter options let you filter on any characteristic of a run, including metrics and hyperparameters (via the UI or our clients).

Here are a few key concepts that you should be familiar with.

Project

A project consists of a set of experiments that can be logged and compared together. Each project can contain multiple experiments and all the experiment runs associated with those experiments. A project has its own dedicated charts and dashboard views, with support for filtering and comparing specific experiment runs or groups of experiment runs. You can manage a project's user permissions in its settings section.
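
For example, a project is typically fetched or created by name through the Verta Python client. This is a minimal sketch; the host URL, project name, and description below are placeholders:

```python
from verta import Client

# Connect to a Verta instance; credentials are typically supplied via the
# VERTA_EMAIL and VERTA_DEV_KEY environment variables.
client = Client("https://app.verta.ai")  # placeholder host

# set_project() retrieves the project by name if it exists,
# and creates it otherwise.
proj = client.set_project("Text Classification", desc="Spam detection models")
```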

Experiments

Experiments allow you to group different models into the same class to accomplish a specific project goal. For example, within a project you might have two different experiments: "Logistic Regression" and "Convolutional Neural Network with tf-idf". You can compare experiment runs both within and across experiments.
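
Continuing the sketch above, an experiment is set by name within the current project (the name is illustrative):

```python
# `client` is the Client from the previous sketch; set_experiment()
# fetches or creates an experiment within the current project.
expt = client.set_experiment("Logistic Regression")
```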

Experiment Runs

An experiment run corresponds to one execution of a modeling script and represents a particular configuration of an experiment. In most cases, an experiment run produces a single model as its end result. Every experiment run has its own detailed view that tracks the model's code version, dataset, metrics, hyperparameters, and all other metadata that has been logged.
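
A run is usually created at the top of a modeling script and then receives all logging calls for that execution. Here is a minimal end-to-end sketch (all names and values are placeholders):

```python
from verta import Client

client = Client("https://app.verta.ai")        # placeholder host
client.set_project("Text Classification")      # placeholder project
client.set_experiment("Logistic Regression")   # placeholder experiment

# One execution of the modeling script = one experiment run.
run = client.set_experiment_run("run-001")
run.log_hyperparameter("C", 0.1)
# ... train and evaluate the model here ...
run.log_metric("accuracy", 0.92)
```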

Attributes

Attributes let you log and visualize rich, descriptive metadata about your model: its performance, model type, dataset feature distributions, and other information such as the team responsible for the model or the expected training time. Attributes are key-value pairs whose values can be strings, dictionaries, or richer data types such as histograms and confusion matrices.
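
For example, simple attributes can be logged on a run as key-value pairs (a sketch; `run` is the ExperimentRun from the earlier snippet, and the keys and values are illustrative):

```python
# `run` is the ExperimentRun from the earlier sketch.
run.log_attribute("team", "nlp-platform")        # simple key-value pair
run.log_attribute("expected_training_hours", 2)
run.log_attribute(                               # dictionary-valued attribute
    "dataset_stats", {"rows": 100000, "positive_fraction": 0.12}
)
```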

Metrics

Metrics are single-valued performance metadata, such as accuracy or loss on the full training set; you can log one or more performance metrics for each experiment run. You can plot various charts to track your performance metrics across experiment runs and use them to filter runs.
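
For instance (continuing the earlier sketch, with illustrative values):

```python
# `run` is the ExperimentRun from the earlier sketch.
# Each key holds a single value per run and appears in charts and filters.
run.log_metric("accuracy", 0.92)
run.log_metric("train_loss", 0.31)
```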

Hyperparameters

Hyperparameters are model configuration metadata, such as the loss function, batch size, number of epochs, or regularization penalty. Typically you will use a different combination of hyperparameters for each experiment run. You can log the hyperparameter configuration for each run to visualize and compare the impact of different hyperparameters on model performance metrics (e.g., accuracy).
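
For example, a run's full hyperparameter configuration can be logged as a dictionary (a sketch with illustrative values):

```python
# `run` is the ExperimentRun from the earlier sketch.
run.log_hyperparameters({
    "loss_function": "cross_entropy",
    "batch_size": 64,
    "num_epochs": 10,
    "l2_penalty": 1e-4,
})
```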

Observations

Observations are metadata measured repeatedly over time, such as batch losses over an epoch or memory usage. Observation charts are generated for each experiment run.
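
For example, a per-epoch training loss can be logged as an observation inside the training loop (a sketch with placeholder values):

```python
# `run` is the ExperimentRun from the earlier sketch.
losses = [0.90, 0.55, 0.40, 0.32]  # placeholder per-epoch losses
for epoch, loss in enumerate(losses):
    # Unlike a metric, repeated values under one key are appended
    # rather than overwritten, so the full series can be charted.
    run.log_observation("train_loss", loss)
```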

Tags

Tags are short textual labels that help identify a specific entity (e.g., a project, experiment run, or dataset version), such as its purpose, status, or environment. A tag is a free-form text string, so you can use it in any way that helps you group and filter your data. You can filter experiment runs using tags, and tags can easily be added and updated via both the web UI and the client.
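
For instance (a sketch; the tag values are illustrative):

```python
# `run` is the ExperimentRun from the earlier sketch.
run.log_tags(["baseline", "staging"])  # free-form labels for grouping and filtering
```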

Artifacts

An artifact is any binary or blob-like piece of information. This may include the weights of a model, model checkpoints, charts produced during training, etc. In Verta, artifacts can be associated with a variety of entities, most commonly Projects and ExperimentRuns.
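
For example (a sketch; the key names and file path are placeholders):

```python
# `run` is the ExperimentRun from the earlier sketch.
# A filesystem path is uploaded as a file under the given key.
run.log_artifact("weights", "checkpoints/model_final.pth")

# Arbitrary picklable Python objects can also be logged directly.
run.log_artifact("label_encoder", {"spam": 1, "ham": 0})
```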

Metadata

Metadata is extra (or "meta") data about any of the entities in the system, such as Projects, Experiments, ExperimentRuns, and Models. Examples of metadata include:

| Entity | Examples of metadata |
| --- | --- |
| Project | Tags, owner, date created |
| Experiment | Tags, owner, date created |
| ExperimentRun | Metrics, AUC curves, tags, owner |
| Model | Name, tags, lifecycle stage, last updated |