log_mmm_evaluation_metrics
- pymc_marketing.mlflow.log_mmm_evaluation_metrics(y_true, y_pred, metrics_to_calculate=None, hdi_prob=0.94, prefix='')
Log evaluation metrics produced by pymc_marketing.mmm.evaluation.compute_summary_metrics() to MLflow.
Parameters:
- y_true
npt.NDArray | pd.Series
The true values of the target variable.
- y_pred
npt.NDArray | xr.DataArray
The predicted values of the target variable.
- metrics_to_calculate
list of str or None, optional
List of metrics to calculate. If None, all available metrics will be calculated. Options include:
  - r_squared: Bayesian R-squared.
  - rmse: Root Mean Squared Error.
  - nrmse: Normalized Root Mean Squared Error.
  - mae: Mean Absolute Error.
  - nmae: Normalized Mean Absolute Error.
  - mape: Mean Absolute Percentage Error.
- hdi_prob
float, optional
The probability mass of the highest density interval. Defaults to 0.94.
- prefix
str, optional
Prefix to add to the metric names. Defaults to "".
Examples
Log in-sample evaluation metrics for a PyMC-Marketing MMM model:
    import mlflow

    from pymc_marketing.mlflow import log_mmm_evaluation_metrics
    from pymc_marketing.mmm import MMM

    mmm = MMM(...)
    mmm.fit(X, y)

    predictions = mmm.sample_posterior_predictive(X)

    with mlflow.start_run():
        log_mmm_evaluation_metrics(y, predictions["y"])
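Log out-of-sample evaluation metrics with a restricted set of metrics and a prefix on the logged metric names. This is a minimal sketch that assumes the fitted mmm from the example above and a hypothetical X_test / y_test hold-out split; the metric subset, hdi_prob, and prefix values are illustrative choices, not defaults of the function:

    import mlflow

    from pymc_marketing.mlflow import log_mmm_evaluation_metrics

    # Hypothetical hold-out data; X_test and y_test are assumed to exist.
    test_predictions = mmm.sample_posterior_predictive(X_test)

    with mlflow.start_run():
        log_mmm_evaluation_metrics(
            y_test,
            test_predictions["y"],
            metrics_to_calculate=["rmse", "mae", "mape"],  # subset of the options listed above
            hdi_prob=0.89,  # override the 0.94 default
            prefix="out_of_sample_",  # distinguish from in-sample metrics
        )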