causallib.evaluation.scoring module
Scoring functions that operate on the evaluation results objects.
These functions depend on the causallib.evaluation results objects and are less reusable than the functions in metrics.py.
- causallib.evaluation.scoring.score_cv(predictions, X, a, y, cv, metrics_to_evaluate='defaults')
Evaluate the prediction against the true data using evaluation score metrics.
- Parameters
predictions (dict[str, list]) – Predictions to evaluate, as produced by predict_cv.
X (pd.DataFrame) – Covariates.
a (pd.Series) – Treatment assignment.
y (pd.Series) – Outcome.
cv (list[tuple]) – list of fold index tuples (train_idx, validation_idx), one tuple per fold.
metrics_to_evaluate (dict | "defaults") – key: metric name; value: callable that receives true labels, predictions, and sample_weights (the latter may be ignored). If “defaults”, a default set of metrics is selected.
- Returns
DataFrame whose columns are the evaluated metrics and whose rows correspond to each phase × fold × stratum combination. PropensityEvaluatorScores also includes a covariate-balance result in a DataFrame.
- Return type
pd.DataFrame | WeightEvaluatorScores
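
A minimal usage sketch, assuming the companion predict_cv function referenced above (its import path and signature are assumptions here, as are the IPW estimator choice, the toy data, and the "auc" metric name; the exact wiring may differ across causallib versions):

```python
# Sketch of score_cv usage. Assumptions (not guaranteed by this page):
# predict_cv's import path and (estimator, X, a, y, cv) signature, plus the
# IPW/LogisticRegression setup and toy data, are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

from causallib.estimation import IPW
from causallib.evaluation.scoring import score_cv

# Toy data: covariates X, binary treatment a, binary outcome y.
X = pd.DataFrame({"x1": range(100), "x2": range(100, 200)}, dtype=float)
a = pd.Series([0, 1] * 50)
y = pd.Series([0, 1, 1, 0] * 25)

# `cv`: a list of (train_idx, validation_idx) tuples, one per fold.
cv = list(KFold(n_splits=3, shuffle=True, random_state=0).split(X))

# Cross-validated predictions; predict_cv is referenced by this module's
# docstring, but its location and signature here are assumptions.
from causallib.evaluation.predictor import predict_cv  # assumed path

ipw = IPW(LogisticRegression(max_iter=1000))
predictions, models = predict_cv(ipw, X, a, y, cv)  # assumed signature

# Custom metrics: name -> callable(y_true, y_pred, sample_weight).
metrics = {
    "auc": lambda y_true, y_pred, sample_weight=None: roc_auc_score(
        y_true, y_pred, sample_weight=sample_weight
    ),
}

scores = score_cv(predictions, X, a, y, cv, metrics_to_evaluate=metrics)
print(scores)  # one row per phase x fold x stratum
```

The metrics dict above illustrates the documented contract for metrics_to_evaluate: each value is a callable that accepts true labels, predictions, and sample weights, and the callable may ignore the weights.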