causallib.evaluation.predictions module

Predictions from single folds.

Predictions are generated by predictors for causal models. They hold the estimates for single folds and are combined into EvaluationResults objects for further analysis.
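
A minimal usage sketch showing where these prediction objects come from (assuming the top-level evaluate helper and the bundled NHEFS dataset; the identifiers below are illustrative, not a verbatim excerpt from the library's examples):

    from sklearn.linear_model import LogisticRegression
    from causallib.datasets import load_nhefs
    from causallib.estimation import IPW
    from causallib.evaluation import evaluate

    data = load_nhefs()
    ipw = IPW(LogisticRegression(max_iter=1000))
    ipw.fit(data.X, data.a)

    # evaluate() generates the per-fold prediction objects defined in this
    # module and aggregates them into an EvaluationResults object.
    results = evaluate(ipw, data.X, data.a, data.y)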

class causallib.evaluation.predictions.OutcomePredictions(prediction, prediction_event_prob=None)[source]

Bases: object

Data structure to hold outcome-model predictions

evaluate_metrics(a, y, metrics_to_evaluate)[source]

Evaluate metrics for this model prediction.

Parameters
  • a (pd.Series) – treatment assignment

  • y (pd.Series) – ground truth outcomes

  • metrics_to_evaluate (Dict[str,Callable]) – key: metric’s name, value: callable that receives true labels, prediction and sample_weights (the latter may be ignored). If not provided, defaults from causallib.evaluation.metrics are used.

Returns

evaluated metrics

Return type

pd.DataFrame
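
A hypothetical sketch of passing custom metrics, where outcome_preds stands in for an OutcomePredictions instance produced during evaluation, and a and y are the fold's treatment assignment and observed outcomes:

    from sklearn.metrics import mean_absolute_error, r2_score

    custom_metrics = {
        # Each callable receives (true labels, predictions, sample_weights);
        # the weights are simply ignored here.
        "mae": lambda y_true, y_pred, sample_weight=None: mean_absolute_error(y_true, y_pred),
        "r2": lambda y_true, y_pred, sample_weight=None: r2_score(y_true, y_pred),
    }

    metric_table = outcome_preds.evaluate_metrics(a, y, metrics_to_evaluate=custom_metrics)
    print(metric_table)  # pd.DataFrame of the evaluated metrics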

get_prediction_by_treatment(a: pandas.core.series.Series)[source]

Get the probability predictions if available, otherwise the plain predictions, by treatment assignment

get_proba_by_treatment(a: pandas.core.series.Series)[source]

Get the probability predictions by treatment assignment
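
A hypothetical illustration of the two accessors, with outcome_preds and a as above:

    by_treatment = outcome_preds.get_prediction_by_treatment(a)  # proba if available, else plain prediction
    probas = outcome_preds.get_proba_by_treatment(a)             # probability predictions only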

class causallib.evaluation.predictions.PropensityEvaluatorScores(prediction_scores, covariate_balance)

Bases: tuple

Create new instance of PropensityEvaluatorScores(prediction_scores, covariate_balance)

covariate_balance

Alias for field number 1

prediction_scores

Alias for field number 0
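
Because PropensityEvaluatorScores is a namedtuple, its two fields can be unpacked positionally or read by attribute; scores below stands for a hypothetical instance returned by a propensity-model evaluation:

    prediction_scores, covariate_balance = scores  # positional: field 0, field 1
    balance = scores.covariate_balance             # equivalent attribute access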

class causallib.evaluation.predictions.PropensityPredictions(weight_by_treatment_assignment, weight_for_being_treated, treatment_assignment_pred, propensity, propensity_by_treatment_assignment)[source]

Bases: causallib.evaluation.predictions.WeightPredictions

Data structure to hold propensity-model predictions

evaluate_metrics(X, a_true, metrics_to_evaluate)[source]

Evaluate metrics on prediction.

Parameters
  • X (pd.DataFrame) – Covariates.

  • a_true (pd.Series) – ground truth treatment assignment

  • metrics_to_evaluate (dict | None) – key: metric’s name, value: callable that receives true labels, prediction and sample_weights (the latter may be ignored).

Returns

Object with two data attributes: “prediction_scores” and “covariate_balance”

Return type

WeightEvaluatorScores
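
A hypothetical sketch, where propensity_preds is a PropensityPredictions instance for one fold and X, a are the matching covariates and true treatment assignment:

    from sklearn.metrics import roc_auc_score

    scores = propensity_preds.evaluate_metrics(
        X, a,
        metrics_to_evaluate={
            # The callable may ignore the sample weights, as done here.
            "roc_auc": lambda y_true, y_pred, sample_weight=None: roc_auc_score(y_true, y_pred),
        },
    )
    scores.prediction_scores   # prediction metrics of the propensity model
    scores.covariate_balance   # covariate balance table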

class causallib.evaluation.predictions.WeightPredictions(weight_by_treatment_assignment, weight_for_being_treated)[source]

Bases: object

Data structure to hold weight-model predictions

evaluate_metrics(X, a_true, metrics_to_evaluate)[source]

Evaluate the covariate balance achieved by the weight model

Parameters
  • X (pd.DataFrame) – Covariates.

  • a_true (pd.Series) – ground truth treatment assignment

  • metrics_to_evaluate (dict | None) – IGNORED.

Returns

A covariate balance DataFrame

Return type

pd.DataFrame
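
A hypothetical sketch, where weight_preds is a WeightPredictions instance and X, a are the covariates and true treatment assignment; because metrics_to_evaluate is ignored, None can be passed:

    balance = weight_preds.evaluate_metrics(X, a, metrics_to_evaluate=None)
    print(balance)  # pd.DataFrame of covariate balance, e.g. standardized mean differences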