SelectByTargetMeanPerformance#
- class feature_engine.selection.SelectByTargetMeanPerformance(variables=None, bins=5, strategy='equal_width', scoring='roc_auc', cv=3, threshold=None, regression=False, confirm_variables=False)[source]#
SelectByTargetMeanPerformance() uses the mean value of the target per category, or per interval if the variable is numerical, as a proxy for the target estimation. With this proxy, the selector determines the performance of each feature based on a metric of choice, and then selects the features based on that performance value.
SelectByTargetMeanPerformance() can evaluate numerical and categorical variables, without much prior manipulation. In other words, you don’t need to encode the categorical variables or transform the numerical variables to assess their importance if you use this transformer.
SelectByTargetMeanPerformance() requires that the dataset is complete, without missing data.
SelectByTargetMeanPerformance() determines the performance of each variable with cross-validation. More specifically:
For each categorical variable:
Determines the mean target value per category in the training folds.
Replaces the categories by the target mean values in the test folds.
Determines the performance of the transformed variables in the test folds.
For each numerical variable:
Discretises the variable into intervals of equal width or equal frequency.
Determines the mean value of the target per interval in the training folds.
Replaces the intervals by the target mean values in the test fold.
Determines the performance of the transformed variable in the test fold.
Finally, it selects the features whose performance is bigger than the indicated threshold. If the threshold is left as None, it selects the features whose performance is bigger than the mean performance of all features.
All the steps are performed with cross-validation. That means that the intervals and the target mean values per interval or category are determined in a certain portion of the data and evaluated in a left-out sample. The performance metric per variable is the average across the cross-validation folds.
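To make the cross-validated procedure concrete, the sketch below reproduces the steps by hand for a single categorical variable. The column name and the data are made up for illustration, and the selector carries out these steps internally, so none of this code is needed to use the class.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Toy data: one categorical feature and a binary target (names are illustrative).
rng = np.random.default_rng(0)
X = pd.DataFrame({"colour": rng.choice(["red", "green", "blue"], size=100)})
y = pd.Series(rng.integers(0, 2, size=100))

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=3).split(X, y):
    # 1. Mean target value per category, learned on the training fold.
    target_mean = y.iloc[train_idx].groupby(X["colour"].iloc[train_idx]).mean()
    # 2. Replace the categories by those means in the test fold.
    proxy = X["colour"].iloc[test_idx].map(target_mean)
    # 3. Score the proxy against the true target in the test fold.
    scores.append(roc_auc_score(y.iloc[test_idx], proxy))

# The performance of the variable is the average across the folds.
print(np.mean(scores))
```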
More details in the User Guide.
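A minimal usage sketch, assuming a dataframe with one categorical and two numerical variables and a binary classification target (all the variable names below are made up for illustration):

```python
import numpy as np
import pandas as pd
from feature_engine.selection import SelectByTargetMeanPerformance

# Toy data: the variable names are illustrative only.
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "city": rng.choice(["London", "Madrid", "Lima"], size=200),
    "age": rng.integers(18, 80, size=200),
    "noise": rng.normal(size=200),
})
y = pd.Series(rng.integers(0, 2, size=200))

sel = SelectByTargetMeanPerformance(
    bins=5,
    strategy="equal_width",
    scoring="roc_auc",
    cv=3,
    threshold=None,    # keep features performing above the mean performance
    regression=False,  # binary classification target
)
Xt = sel.fit_transform(X, y)

print(sel.feature_performance_)  # roc-auc per feature
print(sel.features_to_drop_)     # features below the threshold
print(Xt.head())                 # dataframe with the selected features only
```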
- Parameters
- variables: list, default=None
The list of variables to evaluate. If None, the transformer will evaluate all variables in the dataset (except datetime).
- bins: int, default = 5
If the dataset contains numerical variables, the number of bins into which the values will be sorted.
- strategy: str, default = ‘equal_width’
Whether the bins should be of equal width (‘equal_width’) or equal frequency (‘equal_frequency’). See the binning sketch after this parameter list.
- scoring: str, default=’roc_auc’
Metric to evaluate the performance of the estimator. Comes from sklearn.metrics. See the model evaluation documentation for more options: https://scikit-learn.org/stable/modules/model_evaluation.html
- threshold: float, int, default = None
The value that defines whether a feature will be selected. Note that for metrics like the roc-auc, the r2 and the accuracy, the threshold will be a float between 0 and 1. For metrics like the mean squared error and the root mean squared error, the threshold can take any value. If the threshold is left as None, the features whose performance is bigger than the mean performance of all features will be selected. With bigger thresholds, fewer features will be selected.
- cv: int, cross-validation generator or an iterable, default=3
Determines the cross-validation splitting strategy. Possible inputs for cv are:
None, to use cross_validate’s default 5-fold cross validation
int, to specify the number of folds in a (Stratified)KFold,
CV splitter: (https://scikit-learn.org/stable/glossary.html#term-CV-splitter)
An iterable yielding (train, test) splits as arrays of indices.
For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls. For more details check Scikit-learn’s cross_validate documentation.
- regression: boolean, default=False
Indicates whether the target is for regression or classification.
- confirm_variables: bool, default=False
If set to True, variables that are not present in the input dataframe will be removed from the list of variables. Only used when passing a variable list to the parameter variables. See parameter variables for more details.
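The two strategy options correspond to the two classic ways of binning a numerical variable: intervals of equal width versus intervals containing roughly equal numbers of observations. A rough, illustrative sketch with pandas (the selector performs its own discretisation internally; this only shows the difference between the two options):

```python
import numpy as np
import pandas as pd

# A skewed numerical variable to make the difference between the strategies visible.
values = pd.Series(np.random.default_rng(0).exponential(size=1000))

equal_width = pd.cut(values, bins=5)  # intervals of equal width ('equal_width')
equal_freq = pd.qcut(values, q=5)     # intervals with ~equal numbers of rows ('equal_frequency')

print(equal_width.value_counts().sort_index())
print(equal_freq.value_counts().sort_index())
```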
- Attributes
- variables_:
The variables that will be considered for the feature selection procedure.
- feature_performance_:
Dictionary with the performance of each feature.
- features_to_drop_:
List with the features that will be removed.
- feature_names_in_:
List with the names of the features seen during fit.
- n_features_in_:
The number of features in the train set used in fit.
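A hedged sketch of how these attributes can be inspected after fitting the selector on the toy data from the usage example above (the exact values depend on the data):

```python
import numpy as np
import pandas as pd
from feature_engine.selection import SelectByTargetMeanPerformance

# Same illustrative toy data as in the usage example above.
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "city": rng.choice(["London", "Madrid", "Lima"], size=200),
    "age": rng.integers(18, 80, size=200),
    "noise": rng.normal(size=200),
})
y = pd.Series(rng.integers(0, 2, size=200))

sel = SelectByTargetMeanPerformance(cv=3).fit(X, y)

print(sel.variables_)            # variables evaluated for selection
print(sel.feature_performance_)  # dict with the performance metric per variable
print(sel.features_to_drop_)     # variables whose performance fell below the threshold
print(sel.feature_names_in_)     # names of all features seen during fit
print(sel.n_features_in_)        # number of features seen during fit (here, 3)
```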
Notes
Replacing the categories or intervals by the target mean is equivalent to target mean encoding.
References
- 1
Miller, et al. “Predicting customer behaviour: The University of Melbourne’s KDD Cup report”. JMLR Workshop and Conference Proceeding. KDD 2009 http://proceedings.mlr.press/v7/miller09/miller09.pdf
Methods
fit:
Find the important features.
fit_transform:
Fit to data, then transform it.
get_feature_names_out:
Get output feature names for transformation.
get_params:
Get parameters for this estimator.
set_params:
Set the parameters of this estimator.
transform:
Reduce X to the selected features.
- fit(X, y)[source]#
Find the important features.
- Parameters
- X: pandas dataframe of shape = [n_samples, n_features]
The input dataframe.
- y: array-like of shape (n_samples)
Target variable. Required to train the estimator.
- fit_transform(X, y=None, **fit_params)[source]#
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
- Parameters
- X: array-like of shape (n_samples, n_features)
Input samples.
- y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
- **fit_params: dict
Additional fit parameters.
- Returns
- X_new: ndarray of shape (n_samples, n_features_new)
Transformed array.
- get_feature_names_out(input_features=None)[source]#
Get output feature names for transformation.
- input_features: None
This parameter exists only for compatibility with the Scikit-learn pipeline, but has no functionality. You can pass a list of feature names or None.
- Returns
- feature_names_out: list
The feature names.
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters
- deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
- params: dict
Parameter names mapped to their values.
- set_params(**params)[source]#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
- **params: dict
Estimator parameters.
- Returns
- self: estimator instance
Estimator instance.
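As an illustration of the nested-parameter form described above, a hedged sketch that places the selector inside a Scikit-learn Pipeline (the step names are made up):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from feature_engine.selection import SelectByTargetMeanPerformance

pipe = Pipeline([
    ("selector", SelectByTargetMeanPerformance()),
    ("clf", LogisticRegression()),
])

# Nested parameters use the <component>__<parameter> form.
pipe.set_params(selector__bins=10, selector__strategy="equal_frequency")

# get_params(deep=True) exposes the nested parameters as well.
print(pipe.get_params()["selector__bins"])  # 10
```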
- transform(X)[source]#
Return dataframe with selected features.
- Parameters
- X: pandas dataframe of shape = [n_samples, n_features].
The input dataframe.
- Returns
- X_new: pandas dataframe of shape = [n_samples, n_selected_features]
Pandas dataframe with the selected features.