causallib.evaluation.results module
Evaluation results objects for plotting and further analysis.
These objects are generated by the evaluate method.
- class causallib.evaluation.results.BinaryOutcomeEvaluationResults(evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores], models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]], predictions: Dict[str, List[Union[causallib.evaluation.predictions.PropensityPredictions, causallib.evaluation.predictions.WeightPredictions, causallib.evaluation.predictions.OutcomePredictions]]], cv: List[Tuple[List[int], List[int]]], X: pandas.core.frame.DataFrame, a: pandas.core.series.Series, y: pandas.core.series.Series)[source]
Bases: causallib.evaluation.results.EvaluationResults, causallib.evaluation.plots.mixins.ClassificationPlotterMixin, causallib.evaluation.plots.mixins.PlotAllMixin
Data structure to hold evaluation results including cross-validation.
- Attrs:
- evaluated_metrics (Union[pd.DataFrame, PropensityEvaluatorScores, None]): evaluated metrics
- models (dict[str, Union[list[WeightEstimator], list[IndividualOutcomeEstimator]]]): models trained during evaluation; may be a dict, a list, or a model directly
- predictions (dict[str, List[SingleFoldPredictions]]): dict with keys “train” and “valid” (if produced through cross-validation) and values of the predictions for the respective fold
- cv (list[tuple[list[int], list[int]]]): the cross-validation indices used to generate the results; used for constructing plots correctly
- X (pd.DataFrame): features data
- a (pd.Series): treatment assignment data
- y (pd.Series): outcome data
- evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores]
- models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]]
- class causallib.evaluation.results.ContinuousOutcomeEvaluationResults(evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores], models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]], predictions: Dict[str, List[Union[causallib.evaluation.predictions.PropensityPredictions, causallib.evaluation.predictions.WeightPredictions, causallib.evaluation.predictions.OutcomePredictions]]], cv: List[Tuple[List[int], List[int]]], X: pandas.core.frame.DataFrame, a: pandas.core.series.Series, y: pandas.core.series.Series)[source]
Bases: causallib.evaluation.results.EvaluationResults, causallib.evaluation.plots.mixins.ContinuousOutcomePlotterMixin, causallib.evaluation.plots.mixins.PlotAllMixin
Data structure to hold evaluation results including cross-validation.
- Attrs:
- evaluated_metrics (Union[pd.DataFrame, PropensityEvaluatorScores, None]): evaluated metrics
- models (dict[str, Union[list[WeightEstimator], list[IndividualOutcomeEstimator]]]): models trained during evaluation; may be a dict, a list, or a model directly
- predictions (dict[str, List[SingleFoldPredictions]]): dict with keys “train” and “valid” (if produced through cross-validation) and values of the predictions for the respective fold
- cv (list[tuple[list[int], list[int]]]): the cross-validation indices used to generate the results; used for constructing plots correctly
- X (pd.DataFrame): features data
- a (pd.Series): treatment assignment data
- y (pd.Series): outcome data
- evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores]
- models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]]
- class causallib.evaluation.results.EvaluationResults(evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores], models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]], predictions: Dict[str, List[Union[causallib.evaluation.predictions.PropensityPredictions, causallib.evaluation.predictions.WeightPredictions, causallib.evaluation.predictions.OutcomePredictions]]], cv: List[Tuple[List[int], List[int]]], X: pandas.core.frame.DataFrame, a: pandas.core.series.Series, y: pandas.core.series.Series)[source]
Bases: abc.ABC
Data structure to hold evaluation results including cross-validation.
- Attrs:
- evaluated_metrics (Union[pd.DataFrame, PropensityEvaluatorScores, None]): evaluated metrics
- models (dict[str, Union[list[WeightEstimator], list[IndividualOutcomeEstimator]]]): models trained during evaluation; may be a dict, a list, or a model directly
- predictions (dict[str, List[SingleFoldPredictions]]): dict with keys “train” and “valid” (if produced through cross-validation) and values of the predictions for the respective fold
- cv (list[tuple[list[int], list[int]]]): the cross-validation indices used to generate the results; used for constructing plots correctly
- X (pd.DataFrame): features data
- a (pd.Series): treatment assignment data
- y (pd.Series): outcome data
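The structure above can be sketched as a plain dataclass. This is a minimal illustration of how the pieces fit together, not causallib's actual implementation; the class name `EvaluationResultsSketch` and the simplified field types (`Any` in place of pandas objects) are stand-ins introduced here.

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple


@dataclass
class EvaluationResultsSketch:
    """Minimal stand-in for EvaluationResults: metrics plus everything
    needed to reconstruct per-fold plots (models, predictions, CV
    splits, and the original data)."""
    evaluated_metrics: Any                 # e.g. a pd.DataFrame of scores
    models: List[Any]                      # fitted estimators, one per fold
    predictions: Dict[str, List[Any]]      # "train"/"valid" -> per-fold predictions
    cv: List[Tuple[List[int], List[int]]]  # (train_idx, valid_idx) per fold
    X: Any                                 # features (pd.DataFrame in causallib)
    a: Any                                 # treatment assignment (pd.Series)
    y: Any                                 # outcome (pd.Series)


res = EvaluationResultsSketch(
    evaluated_metrics=None,
    models=["model_fold0", "model_fold1"],
    predictions={"train": ["p0", "p1"], "valid": ["p0", "p1"]},
    cv=[([0, 1], [2]), ([0, 2], [1])],
    X=None, a=None, y=None,
)
# The "valid" predictions of fold 1 line up with the cv[1][1] indices:
print(res.predictions["valid"][1], res.cv[1][1])
```

The point of carrying `cv` alongside `predictions` is exactly this pairing: each fold's predictions can be mapped back onto the rows of `X`, `a`, and `y` when constructing plots.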
- property all_plot_names
Available plot names.
- evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores]
- static make(evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores], models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]], predictions: Dict[str, List[Union[causallib.evaluation.predictions.PropensityPredictions, causallib.evaluation.predictions.WeightPredictions, causallib.evaluation.predictions.OutcomePredictions]]], cv: List[Tuple[List[int], List[int]]], X: pandas.core.frame.DataFrame, a: pandas.core.series.Series, y: pandas.core.series.Series)[source]
Make an EvaluationResults object of the correct type.
This factory method dispatches the initializing data to the appropriate subclass of EvaluationResults. It is the only supported way to instantiate EvaluationResults objects.
- Parameters
evaluated_metrics (Union[pd.DataFrame, WeightEvaluatorScores]) – evaluated metrics
models (Union[List[WeightEstimator], List[IndividualOutcomeEstimator], List[PropensityEstimator]]) – fitted models
predictions (Dict[str, List[SingleFoldPrediction]]) – predictions by phase and fold
cv (List[Tuple[List[int], List[int]]]) – cross validation indices
X (pd.DataFrame) – features data
a (pd.Series) – treatment assignment data
y (pd.Series) – outcome data
- Raises
ValueError – raised if an invalid estimator is passed
- Returns
object with results of correct type
- Return type
EvaluationResults
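The type-based dispatch that make performs can be illustrated with a stripped-down factory. All class names below are hypothetical stand-ins for this sketch; causallib's real make inspects its own estimator classes (and, per the subclasses documented on this page, also distinguishes binary from continuous outcomes), so this only demonstrates the pattern.

```python
# Stand-in estimator classes; causallib dispatches on its real
# PropensityEstimator / WeightEstimator / IndividualOutcomeEstimator types.
class WeightEstimator: pass
class PropensityEstimator(WeightEstimator): pass  # a specialized weight model
class IndividualOutcomeEstimator: pass


class EvaluationResults:
    def __init__(self, models):
        self.models = models

    @staticmethod
    def make(models):
        """Route to the subclass matching the fitted model type,
        raising ValueError for anything unrecognized."""
        model = models[0]
        # Check PropensityEstimator before WeightEstimator: it subclasses it.
        if isinstance(model, PropensityEstimator):
            return PropensityEvaluationResults(models)
        if isinstance(model, WeightEstimator):
            return WeightEvaluationResults(models)
        if isinstance(model, IndividualOutcomeEstimator):
            return OutcomeEvaluationResults(models)
        raise ValueError(f"Invalid estimator: {type(model).__name__}")


class PropensityEvaluationResults(EvaluationResults): pass
class WeightEvaluationResults(EvaluationResults): pass
class OutcomeEvaluationResults(EvaluationResults): pass


results = EvaluationResults.make([PropensityEstimator()])
print(type(results).__name__)  # PropensityEvaluationResults
```

Keeping the factory on the abstract base class means callers never need to know which concrete results class matches their estimator; an unsupported model fails fast with the documented ValueError.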
- models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]]
- predictions: Dict[str, List[Union[causallib.evaluation.predictions.PropensityPredictions, causallib.evaluation.predictions.WeightPredictions, causallib.evaluation.predictions.OutcomePredictions]]]
- class causallib.evaluation.results.PropensityEvaluationResults(evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores], models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]], predictions: Dict[str, List[Union[causallib.evaluation.predictions.PropensityPredictions, causallib.evaluation.predictions.WeightPredictions, causallib.evaluation.predictions.OutcomePredictions]]], cv: List[Tuple[List[int], List[int]]], X: pandas.core.frame.DataFrame, a: pandas.core.series.Series, y: pandas.core.series.Series)[source]
Bases: causallib.evaluation.results.EvaluationResults, causallib.evaluation.plots.mixins.ClassificationPlotterMixin, causallib.evaluation.plots.mixins.WeightPlotterMixin, causallib.evaluation.plots.mixins.PlotAllMixin
Data structure to hold evaluation results including cross-validation.
- Attrs:
- evaluated_metrics (Union[pd.DataFrame, PropensityEvaluatorScores, None]): evaluated metrics
- models (dict[str, Union[list[WeightEstimator], list[IndividualOutcomeEstimator]]]): models trained during evaluation; may be a dict, a list, or a model directly
- predictions (dict[str, List[SingleFoldPredictions]]): dict with keys “train” and “valid” (if produced through cross-validation) and values of the predictions for the respective fold
- cv (list[tuple[list[int], list[int]]]): the cross-validation indices used to generate the results; used for constructing plots correctly
- X (pd.DataFrame): features data
- a (pd.Series): treatment assignment data
- y (pd.Series): outcome data
- evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores]
- models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]]
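The Bases lists on this page show how each results class assembles its plotting capabilities from mixins; a propensity result, for instance, inherits both classification-style and weight-style plots. The multiple-inheritance pattern can be sketched generically (all names and plot methods below are illustrative stand-ins, not causallib's actual mixin API):

```python
class PlotAllMixin:
    def plot_all(self):
        # Collect every plot_* method the concrete class ended up with,
        # regardless of which mixin contributed it.
        names = [n for n in dir(self) if n.startswith("plot_") and n != "plot_all"]
        return {name: getattr(self, name)() for name in names}


class ClassificationPlotterMixin:
    def plot_roc(self):
        return "roc"


class WeightPlotterMixin:
    def plot_weight_distribution(self):
        return "weights"


# A propensity result mixes in both classification and weight plots.
class PropensityResultsSketch(ClassificationPlotterMixin, WeightPlotterMixin, PlotAllMixin):
    pass


plots = PropensityResultsSketch().plot_all()
print(sorted(plots))  # ['plot_roc', 'plot_weight_distribution']
```

This is why WeightEvaluationResults below lacks the classification plots: it simply does not mix in ClassificationPlotterMixin, and the equivalent of plot_all only ever sees the methods its own bases provide.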
- class causallib.evaluation.results.WeightEvaluationResults(evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores], models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]], predictions: Dict[str, List[Union[causallib.evaluation.predictions.PropensityPredictions, causallib.evaluation.predictions.WeightPredictions, causallib.evaluation.predictions.OutcomePredictions]]], cv: List[Tuple[List[int], List[int]]], X: pandas.core.frame.DataFrame, a: pandas.core.series.Series, y: pandas.core.series.Series)[source]
Bases: causallib.evaluation.results.EvaluationResults, causallib.evaluation.plots.mixins.WeightPlotterMixin, causallib.evaluation.plots.mixins.PlotAllMixin
Data structure to hold evaluation results including cross-validation.
- Attrs:
- evaluated_metrics (Union[pd.DataFrame, PropensityEvaluatorScores, None]): evaluated metrics
- models (dict[str, Union[list[WeightEstimator], list[IndividualOutcomeEstimator]]]): models trained during evaluation; may be a dict, a list, or a model directly
- predictions (dict[str, List[SingleFoldPredictions]]): dict with keys “train” and “valid” (if produced through cross-validation) and values of the predictions for the respective fold
- cv (list[tuple[list[int], list[int]]]): the cross-validation indices used to generate the results; used for constructing plots correctly
- X (pd.DataFrame): features data
- a (pd.Series): treatment assignment data
- y (pd.Series): outcome data
- evaluated_metrics: Union[pandas.core.frame.DataFrame, causallib.evaluation.predictions.PropensityEvaluatorScores]
- models: Union[List[causallib.estimation.base_weight.WeightEstimator], List[causallib.estimation.base_estimator.IndividualOutcomeEstimator], List[causallib.estimation.base_weight.PropensityEstimator]]