Verta
How to capture training loss over time?
Observations are recurring metadata that are repeatedly measured over time, such as batch losses over an epoch or memory usage. Observation charts are generated for each experiment run.
Here is an example using XGBoost with a grid search that showcases how to capture observations:
```python
from sklearn import datasets
import xgboost as xgb

# Prepare data
data = datasets.load_wine()

X = data['data']
y = data['target']

dtrain = xgb.DMatrix(X, label=y)
```
```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.hstack((X, y.reshape(-1, 1))),
                  columns=data['feature_names'] + ['species'])

df.head()
```
```python
from sklearn import model_selection

# Prepare hyperparameters
grid = model_selection.ParameterGrid({
    'eta': [0.5, 0.7],
    'max_depth': [1, 2, 3],
    'num_class': [10],
})
```
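`ParameterGrid` enumerates the Cartesian product of the hyperparameter lists, so the grid above yields 2 × 3 × 1 = 6 runs. As a minimal sketch, the same expansion can be reproduced with just the standard library:

```python
from itertools import product

# the same hyperparameter lists as above
param_lists = {
    'eta': [0.5, 0.7],
    'max_depth': [1, 2, 3],
    'num_class': [10],
}

# take the Cartesian product of the value lists, one dict per combination
keys = sorted(param_lists)
grid = [dict(zip(keys, values))
        for values in product(*(param_lists[k] for k in keys))]

print(len(grid))  # 6 hyperparameter combinations
```

Each dict in `grid` is then passed to one experiment run, which is why the loop at the end of the example produces six runs.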
```python
# Log experiment runs and observations
# (assumes `client` is an existing Verta Client instance)
def run_experiment(hyperparams):
    run = client.set_experiment_run()

    # log hyperparameters
    run.log_hyperparameters(hyperparams)

    # run cross-validation on hyperparameters
    cv_history = xgb.cv(hyperparams, dtrain,
                        nfold=5,
                        metrics=("merror", "mlogloss"))

    # log observations from each iteration
    for _, iteration in cv_history.iterrows():
        for obs, val in iteration.items():  # .iteritems() was removed in pandas 2.0
            run.log_observation(obs, val)

    # log error from final iteration
    final_val_error = iteration['test-merror-mean']
    run.log_metric("val_error", final_val_error)
    print("{} Mean error: {:.4f}".format(hyperparams, final_val_error))

# NOTE: run_experiment() could also be defined in a module and executed in parallel
for hyperparams in grid:
    run_experiment(hyperparams)
```
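The nested loop above flattens the cross-validation history into per-iteration (name, value) pairs, one `log_observation` call each. A minimal sketch of that flattening pattern, using a plain list of dicts as a stand-in for the DataFrame returned by `xgb.cv` (column names and values are illustrative, no XGBoost or Verta required):

```python
# stand-in for xgb.cv output: one dict per boosting iteration (values illustrative)
cv_history = [
    {"train-merror-mean": 0.30, "test-merror-mean": 0.35},
    {"train-merror-mean": 0.20, "test-merror-mean": 0.28},
    {"train-merror-mean": 0.10, "test-merror-mean": 0.22},
]

observations = []
for iteration in cv_history:
    for obs, val in iteration.items():
        # in the real example this is run.log_observation(obs, val)
        observations.append((obs, val))

# the error from the final iteration becomes a single summary metric
final_val_error = cv_history[-1]["test-merror-mean"]

print(len(observations))  # 3 iterations x 2 series = 6 observations
```

Because each series name is logged repeatedly, one value per iteration, Verta can render each of them as a line over time in the observation charts.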
Observations can be visualized in the web UI on the experiment run detail view.