causallib.evaluation.metrics module
Apply machine learning metrics to causal models for evaluation.
- causallib.evaluation.metrics.evaluate_metrics(metrics_to_evaluate, y_true, y_pred=None, y_pred_proba=None, sample_weight=None)[source]
Evaluates the metrics against the supplied predictions and labels.
Note that some metrics operate on probability predictions (y_pred_proba) while others operate on direct label predictions (y_pred). The function selects the correct input based on the metric’s name, if the metric is known to it; otherwise it defaults to the direct predictions (y_pred).
- Parameters
metrics_to_evaluate (dict) – key: metric’s name, value: callable that receives true labels, predictions, and sample weights (the latter may be ignored).
y_true (pd.Series) – True labels
y_pred (pd.Series) – label predictions (i.e., categories or decisions).
y_pred_proba (pd.Series) – continuous output of the predictor, as in predict_proba or decision_function.
sample_weight (pd.Series | None) – weight of each sample.
- Returns
Metric names as index and the evaluated scores as values.
- Return type
pd.Series
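For illustration, a minimal usage sketch with standard scikit-learn metric callables on toy data. One assumption here: whether a name such as "roc_auc" is routed to the probability input depends on which metric names the function knows about.

```python
import pandas as pd
from sklearn import metrics as skm

from causallib.evaluation.metrics import evaluate_metrics

# Toy binary-classification outputs, purely illustrative.
y_true = pd.Series([0, 1, 1, 0, 1])
y_pred = pd.Series([0, 1, 0, 0, 1])                  # hard label predictions
y_pred_proba = pd.Series([0.2, 0.9, 0.4, 0.1, 0.8])  # P(class == 1)

# Callables follow the documented contract: true labels, predictions,
# and sample weights (the latter may be ignored). sklearn scorers fit it.
metrics_to_evaluate = {
    "accuracy": skm.accuracy_score,  # evaluated on y_pred
    "roc_auc": skm.roc_auc_score,    # evaluated on y_pred_proba,
                                     # assuming the name is recognized
}

scores = evaluate_metrics(
    metrics_to_evaluate, y_true,
    y_pred=y_pred, y_pred_proba=y_pred_proba,
)
print(scores)  # pd.Series: metric name -> score
```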
- causallib.evaluation.metrics.get_default_binary_metrics(only_numeric_metric=False)[source]
Get default metrics for evaluating binary models.
- Parameters
only_numeric_metric (bool) – Whether to include only numerical metrics, i.e., metrics that return scalar scores. Non-numerical metrics, such as roc_curve, return vectors rather than scalars.
- Returns
- metrics dict with key: metric’s name, value: callable that receives true labels, predictions, and sample weights (the latter may be ignored).
- Return type
dict
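A short sketch combining the two functions above, assuming the returned default dict plugs directly into evaluate_metrics (the shared key/callable contract suggests it does):

```python
import pandas as pd

from causallib.evaluation.metrics import (
    evaluate_metrics,
    get_default_binary_metrics,
)

# Restrict to scalar-valued metrics so the result is a flat pd.Series.
metrics = get_default_binary_metrics(only_numeric_metric=True)

# Toy binary-classification outputs, purely illustrative.
y_true = pd.Series([0, 1, 1, 0])
y_pred = pd.Series([0, 1, 0, 0])
y_pred_proba = pd.Series([0.1, 0.8, 0.4, 0.3])

scores = evaluate_metrics(
    metrics, y_true,
    y_pred=y_pred, y_pred_proba=y_pred_proba,
)
print(scores)
```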