MRMR

class feature_engine.selection.MRMR(variables=None, method='MIQ', max_features=None, discrete_features='auto', n_neighbors=3, scoring='roc_auc', cv=3, param_grid=None, regression=False, confirm_variables=False, random_state=None, n_jobs=None)

MRMR() selects features using the Minimum Redundancy and Maximum Relevance (MRMR) framework. With MRMR, we select features that have a strong relationship with the target variable (relevance) but a weak relationship with the other predictor variables (redundancy).

Relevance is determined by calculating the mutual information or the F-statistic (from ANOVA or correlation) between each predictor and the target. Relevance can also be determined from the feature importance derived by a random forest.

Redundancy is calculated as the mean correlation or mean mutual information of each feature with the other predictor variables.

An importance score is then calculated as the difference or the ratio between relevance and redundancy.

MRMR is an iterative algorithm. It first determines the relevance of all features and selects the one with the highest value.

In the second round, it determines the redundancy of all remaining features with respect to the selected one, calculates the importance score, and selects the feature with the highest score.

After that, it repeats the procedure from the second step, this time taking the average redundancy of each remaining feature with respect to those already selected.
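To make the iteration concrete, below is a minimal sketch of the greedy loop under the 'MIQ' scheme (mutual information for both relevance and redundancy, combined as a ratio), written with scikit-learn's mutual_info_regression. It illustrates the procedure only; it is not feature-engine's internal implementation:

import numpy as np
import pandas as pd
from sklearn.datasets import fetch_california_housing
from sklearn.feature_selection import mutual_info_regression

X, y = fetch_california_housing(return_X_y=True, as_frame=True)

# Round 1: relevance of every feature with respect to the target.
relevance = pd.Series(
    mutual_info_regression(X, y, random_state=0), index=X.columns
)
selected = [relevance.idxmax()]

# Rounds 2+: pick the feature with the best relevance / redundancy
# ratio, where redundancy is the mean MI of the candidate with the
# features selected so far.
candidates = [c for c in X.columns if c not in selected]
while candidates and len(selected) < 3:  # select 3 features for the demo
    scores = {}
    for col in candidates:
        redundancy = np.mean([
            mutual_info_regression(X[[s]], X[col], random_state=0)[0]
            for s in selected
        ])
        # Guard against a zero redundancy estimate before dividing.
        scores[col] = relevance[col] / max(redundancy, 1e-12)
    best = max(scores, key=scores.get)
    selected.append(best)
    candidates.remove(best)

print(selected)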

Relevance and redundancy values can be combined as follows:


Method   Relevance            Redundancy           Scheme
'MID'    Mutual information   Mutual information   Difference
'MIQ'    Mutual information   Mutual information   Ratio
'FCD'    F-Statistic          Correlation          Difference
'FCQ'    F-Statistic          Correlation          Ratio
'RFCQ'   Random Forests       Correlation          Ratio


More details in the User Guide.

Parameters
variables: list, default=None

The list of variables to evaluate. If None, the transformer will evaluate all numerical variables in the dataset.

method: str, default = ‘MIQ’

How to estimate the relevance, the redundancy, and the relation between the two. Check the table above for more details.

max_features: int, default = None

The number of features to select. If None, it defaults to 20% of the features seen during fit().

discrete_features: bool, str, array, default=’auto’

If bool, then determines whether to consider all features discrete or continuous. If array, then it should be a boolean mask with shape (n_features,). Ensure that the array matches the discrete features passed in variables if not None, or in X.columns otherwise. If ‘auto’, it is assigned to False for dense X and to True for sparse X. Only used when method is 'MIQ' or 'MID'.
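For example, a boolean mask can flag which columns to treat as discrete during the mutual information estimation. The dataframe below is made up for illustration, and the mask is assumed to be accepted as any array-like, as in scikit-learn's mutual information functions:

import pandas as pd
from feature_engine.selection import MRMR

# Toy data: "rooms" is discrete, "area" and "age" are continuous.
X = pd.DataFrame({
    "rooms": [1, 2, 2, 3, 1, 4] * 20,
    "area": [50.1, 70.3, 65.0, 90.2, 48.7, 120.5] * 20,
    "age": [5.0, 12.5, 8.1, 20.3, 3.2, 15.8] * 20,
})
y = pd.Series([100.0, 150.0, 140.0, 200.0, 95.0, 260.0] * 20)

# The mask is aligned with X.columns: True marks a discrete feature.
sel = MRMR(
    method="MID",
    max_features=2,
    discrete_features=[True, False, False],
    regression=True,
    random_state=0,
)
X_t = sel.fit_transform(X, y)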

n_neighbors: int, default=3

Number of neighbors to use for MI estimation for continuous variables. Higher values reduce variance of the estimation, but could introduce a bias. Only used when method is 'MIQ' or 'MID'.

scoring: str, default=’roc_auc’

Metric to evaluate the performance of the estimator. Comes from sklearn.metrics. See the model evaluation documentation for more options: https://scikit-learn.org/stable/modules/model_evaluation.html. Only used when method = 'RFCQ'.

cv: int, cross-validation generator or an iterable, default=3

Determines the cross-validation splitting strategy. Possible inputs for cv are:

  • None, to use the default 5-fold cross-validation,

  • int, to specify the number of folds in a (Stratified)KFold,

  • a CV splitter,

  • an iterable yielding (train, test) splits as arrays of indices.

For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False, so the splits will be the same across calls. For more details check Scikit-learn’s cross_validate documentation. Only used when method = 'RFCQ'.

param_grid: dictionary, default=None

The hyperparameters to optimize for the random forest through a grid search. param_grid can contain any of the permitted hyperparameters for Scikit-learn’s RandomForestRegressor() or RandomForestClassifier(). If None, then param_grid will optimize the ‘max_depth’ over [1, 2, 3, 4]. Only used when method is 'RFCQ'.
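As an illustration, the following sketch combines param_grid with scoring and cv for method='RFCQ' on a classification dataset (the grid values are arbitrary choices):

from sklearn.datasets import load_breast_cancer
from feature_engine.selection import MRMR

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Relevance comes from random forest importance; the grid below is
# searched with 5-fold cross-validation, evaluated with roc_auc.
sel = MRMR(
    method="RFCQ",
    max_features=5,
    scoring="roc_auc",
    cv=5,
    param_grid={"max_depth": [2, 3], "n_estimators": [50, 100]},
    regression=False,
    random_state=0,
)
X_t = sel.fit_transform(X, y)
print(X_t.columns.tolist())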

regression: boolean, default=False

Indicates whether the target variable is for regression (True) or classification (False).

confirm_variables: bool, default=False

If set to True, variables that are not present in the input dataframe will be removed from the list of variables. Only used when passing a variable list to the parameter variables. See parameter variables for more details.

random_state: int, default=None

Seed for reproducibility. Used when method is one of 'RFCQ', 'MIQ', or 'MID' as seed for scikit-learn’s mutual_info_classif, mutual_info_regression or random forest model.

n_jobs: int, default=None

The number of jobs to use for computing the mutual information. The parallelization is done on the columns of X. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. Used when method is one of 'RFCQ', 'MIQ', or 'MID' for scikit-learn’s mutual_info_classif, mutual_info_regression or random forest model.

Attributes
variables_:

The variables that will be considered for the feature selection procedure.

relevance_:

Array with the mutual information, F-statistic, or random forest derived importance of each feature with respect to the target.

features_to_drop_:

List with the features that will be removed.

feature_names_in_:

List with the names of features seen during fit.

n_features_in_:

The number of features in the train set used in fit.

References

[1] Zhao, et al. “Maximum Relevance and Minimum Redundancy Feature Selection Methods for a Marketing Machine Learning Platform”. 2019. https://arxiv.org/abs/1908.05376

Examples

>>> from sklearn.datasets import fetch_california_housing
>>> from feature_engine.selection import MRMR
>>> X, y = fetch_california_housing(return_X_y=True, as_frame=True)
>>> X.drop(labels=["Latitude", "Longitude"], axis=1, inplace=True)
>>> mrmr_sel = MRMR(method="MIQ", regression=True, random_state=3)
>>> X_t = mrmr_sel.fit_transform(X, y)
>>> print(X_t.head())
   MedInc  AveOccup
0  8.3252  2.555556
1  8.3014  2.109842
2  7.2574  2.802260
3  5.6431  2.547945
4  3.8462  2.181467
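The fitted attributes can then be inspected. Given the output above, the dropped features are the four columns missing from X_t (the order shown here is illustrative):

>>> mrmr_sel.features_to_drop_
['HouseAge', 'AveRooms', 'AveBedrms', 'Population']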

Methods

fit:

Find the important features.

fit_transform:

Fit to data, then transform it.

get_feature_names_out:

Get output feature names for transformation.

get_params:

Get parameters for this estimator.

set_params:

Set the parameters of this estimator.

get_support:

Get a mask, or integer index, of the features selected.

transform:

Reduce X to the selected features.

fit(X, y)

Find the important features.

Parameters
X: pandas dataframe of shape = [n_samples, n_features]

The input dataframe.

y: array-like of shape (n_samples)

Target variable. Required to train the estimator.

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
X: array-like of shape (n_samples, n_features)

Input samples.

y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params: dict

Additional fit parameters.

Returns
X_new: ndarray of shape (n_samples, n_features_new)

Transformed array.

get_feature_names_out(input_features=None)

Get output feature names for transformation. In other words, returns the variable names of the transformed dataframe.

Parameters
input_features: array or list, default=None

This parameter exists only for compatibility with the Scikit-learn pipeline.

  • If None, then feature_names_in_ is used as feature names in.

  • If an array or list, then input_features must match feature_names_in_.

Returns
feature_names_out: list

Transformed feature names.

Return type: List[Union[str, int]]

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns
routing: MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters
deep: bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params: dict

Parameter names mapped to their values.

get_support(indices=False)

Get a mask, or integer index, of the features selected.

Parameters
indices: bool, default=False

If True, the return value will be an array of integers, rather than a boolean mask.

Returns
support: array

An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True if its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
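For example, with the selector fitted in the Examples section above, where 2 of the 6 input features were retained, the output would look like this (illustrative):

>>> mrmr_sel.get_support()
array([ True, False, False, False, False,  True])
>>> mrmr_sel.get_support(indices=True)
array([0, 5])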

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters
**params: dict

Estimator parameters.

Returns
self: estimator instance

Estimator instance.

transform(X)

Return dataframe with selected features.

Parameters
X: pandas dataframe of shape = [n_samples, n_features].

The input dataframe.

Returns
X_new: pandas dataframe of shape = [n_samples, n_selected_features]

Pandas dataframe with the selected features.

Return type: DataFrame