class feature_engine.selection.SmartCorrelatedSelection(variables=None, method='pearson', threshold=0.8, missing_values='ignore', selection_method='missing_values', estimator=None, scoring='roc_auc', cv=3, confirm_variables=False)

SmartCorrelatedSelection() finds groups of correlated features and then selects one feature from each group, following certain criteria:

  • Feature with the least missing values.

  • Feature with the highest cardinality (greatest number of unique values).

  • Feature with the highest variance.

  • Feature with the highest importance according to an estimator.

SmartCorrelatedSelection() returns a dataframe containing the selected feature from each group of correlated features, plus all the features that were not correlated to any other.

Correlation is calculated with pandas.corr().
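To illustrate, the pairwise correlations that the transformer examines can be reproduced directly with pandas.corr(). This is only a sketch of the underlying computation, not feature_engine's actual implementation; the data and threshold are made up:

```python
import pandas as pd

X = pd.DataFrame({
    "x1": [1, 2, 1, 1],
    "x2": [2, 4, 3, 1],
    "x3": [1, 0, 0, 0],
})

# Absolute correlation matrix, as computed by pandas.corr()
corr = X.corr(method="pearson").abs()

# Pairs of features whose absolute correlation exceeds the threshold
threshold = 0.7
pairs = [
    (a, b)
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if corr.loc[a, b] > threshold
]
print(pairs)  # x1 and x2 exceed the threshold; x3 is uncorrelated
```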

SmartCorrelatedSelection() works only with numerical variables. Categorical variables need to be encoded as numerical, or they will be excluded from the analysis.

More details in the User Guide.

variables: str or list, default=None

The list of variables to evaluate. If None, the transformer will evaluate all numerical features in the dataset.

method: string or callable, default=’pearson’

Can take ‘pearson’, ‘spearman’, ‘kendall’ or callable. It refers to the correlation method to be used to identify the correlated features.

  • ‘pearson’: standard correlation coefficient

  • ‘kendall’: Kendall Tau correlation coefficient

  • ‘spearman’: Spearman rank correlation

  • callable: a callable taking two 1d ndarrays as input and returning a float.

For more details on this parameter visit the pandas.corr() documentation.

threshold: float, default=0.8

The correlation threshold above which a feature will be deemed correlated with another one and grouped for selection.

missing_values: str, default=’ignore’

Whether missing values should raise an error or be ignored when determining the correlation. Takes values ‘raise’ and ‘ignore’.

selection_method: str, default=“missing_values”

Takes the values “missing_values”, “cardinality”, “variance” and “model_performance”.

“missing_values”: keeps the feature from the correlated group with the least missing observations.

“cardinality”: keeps the feature from the correlated group with the highest cardinality.

“variance”: keeps the feature from the correlated group with the highest variance.

“model_performance”: trains a machine learning model using each of the features in a correlated group and retains the feature with the highest importance.

estimator: object, default=None

A Scikit-learn estimator for regression or classification. Required when selection_method is “model_performance”.

scoring: str, default=’roc_auc’

Metric to evaluate the performance of the estimator. Comes from sklearn.metrics. See Scikit-learn’s model evaluation documentation for more options.

cv: int, cross-validation generator or an iterable, default=3

Determines the cross-validation splitting strategy. Possible inputs for cv are:

  • None, to use the default cross-validation,

  • int, to specify the number of folds in a (Stratified)KFold,

  • a CV splitter,

  • an iterable yielding (train, test) splits as arrays of indices.

For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls. For more details check Scikit-learn’s cross_validate’s documentation.

confirm_variables: bool, default=False

If set to True, variables that are not present in the input dataframe will be removed from the list of variables. Only used when passing a variable list to the parameter variables. See parameter variables for more details.


correlated_feature_sets_: list

Groups of correlated features. Each list is a group of correlated features.

correlated_feature_dict_: dict

Dictionary containing the correlated feature groups. The key is the feature against which all other features were evaluated; the values are the features correlated with the key. Key plus values should be the same as the set found in correlated_feature_sets_. We introduced this attribute in version 1.17.0 because, from the set alone, it is not easy to see which feature will be retained and which ones will be removed: the key is retained, while the values will be dropped.


features_to_drop_: list

The correlated features to remove from the dataset.


variables_: list

The variables that will be considered for the feature selection procedure.


feature_names_in_: list

List with the names of features seen during fit.


n_features_in_: int

The number of features in the train set used in fit.


For brute-force correlation selection, check Feature-engine’s DropCorrelatedFeatures().


>>> import pandas as pd
>>> from feature_engine.selection import SmartCorrelatedSelection
>>> X = pd.DataFrame(dict(x1 = [1,2,1,1],
...                 x2 = [2,4,3,1],
...                 x3 = [1, 0, 0, 0]))
>>> scs = SmartCorrelatedSelection(threshold=0.7)
>>> scs.fit_transform(X)
   x2  x3
0   2   1
1   4   0
2   3   0
3   1   0

It is also possible to use alternative selection methods. Here, we select the feature with the highest variance:

>>> X = pd.DataFrame(dict(x1 = [2,4,3,1],
...                 x2 = [1000,2000,1500,500],
...                 x3 = [1, 0, 0, 0]))
>>> scs = SmartCorrelatedSelection(threshold=0.7, selection_method="variance")
>>> scs.fit_transform(X)
     x2  x3
0  1000   1
1  2000   0
2  1500   0
3   500   0



fit: Find best feature from each correlated group.

fit_transform: Fit to data, then transform it.

get_feature_names_out: Get output feature names for transformation.

get_params: Get parameters for this estimator.

set_params: Set the parameters of this estimator.

get_support: Get a mask, or integer index, of the features selected.

transform: Return selected features.

fit(X, y=None)

Find the correlated feature groups. Determine which feature should be selected from each group.

X: pandas dataframe of shape = [n_samples, n_features]

The training dataset.

y: pandas series, default=None

y is needed if selection_method == ‘model_performance’.

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

X: array-like of shape (n_samples, n_features)

Input samples.

y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).


**fit_params: dict

Additional fit parameters.

X_new: ndarray of shape (n_samples, n_features_new)

Transformed array.


Get output feature names for transformation. In other words, returns the variable names of the transformed dataframe.

input_features: array or list, default=None

This parameter exists only for compatibility with the Scikit-learn pipeline.

  • If None, then feature_names_in_ is used as the feature names.

  • If an array or list, then input_features must match feature_names_in_.

feature_names_out: list

Transformed feature names.


Return type: List[Union[str, int]]


get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.


routing: MetadataRequest

A MetadataRequest encapsulating routing information.


get_params(deep=True)

Get parameters for this estimator.

deep: bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.


params: dict

Parameter names mapped to their values.


get_support(indices=False)

Get a mask, or integer index, of the features selected.

indices: bool, default=False

If True, the return value will be an array of integers, rather than a boolean mask.


support: array

An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True if its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.


set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.


**params: dict

Estimator parameters.

self: estimator instance

Estimator instance.


transform(X)

Return dataframe with selected features.

X: pandas dataframe of shape = [n_samples, n_features].

The input dataframe.

X_new: pandas dataframe of shape = [n_samples, n_selected_features]

Pandas dataframe with the selected features.


Return type: DataFrame