DropHighPSIFeatures#
- class feature_engine.selection.DropHighPSIFeatures(split_col=None, split_frac=0.5, split_distinct=False, cut_off=None, switch=False, threshold=0.25, bins=10, strategy='equal_frequency', min_pct_empty_bins=0.0001, missing_values='raise', variables=None, confirm_variables=False)[source]#
DropHighPSIFeatures drops features whose Population Stability Index (PSI) value is above a given threshold. The PSI of a numerical feature is an indication of the shift in its distribution; a feature with a high PSI could therefore be considered unstable.
A bigger PSI value indicates a bigger shift in the feature distribution.
Different thresholds can be used to assess the magnitude of the distribution shift according to the PSI value. The most commonly used thresholds are:
- Below 10%, the variable has not experienced a significant shift.
- Above 25%, the variable has experienced a major shift.
- Between those two values, the shift is intermediate.
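For reference, the PSI compares, bin by bin, the proportion of observations in the basis and test sets. The following is a rough sketch of that calculation for a single, already discretized feature; it is not the library’s internal implementation, and the bin proportions are made up:

    import numpy as np

    def psi(basis_pct, test_pct, eps=0.0001):
        # basis_pct / test_pct: fraction of observations per bin in each data set;
        # eps plays the role of min_pct_empty_bins and guards against empty bins
        basis_pct = np.where(basis_pct == 0, eps, basis_pct)
        test_pct = np.where(test_pct == 0, eps, test_pct)
        return float(np.sum((test_pct - basis_pct) * np.log(test_pct / basis_pct)))

    # illustrative bin proportions for one feature sorted into 5 bins
    print(psi(np.array([0.2, 0.2, 0.2, 0.2, 0.2]),
              np.array([0.25, 0.22, 0.20, 0.18, 0.15])))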
To compute the PSI, DropHighPSIFeatures splits the dataset in two:
First and foremost, the user should enter one variable which will be used to guide the data split. This variable can be of any data type. If the user does not enter a variable name, DropHighPSIFeatures will use the dataframe index.
Second, the user has the option to specify a proportion of observations to put in each data set, or alternatively, provide a cut-off value.
If the user specifies a proportion through the split_frac parameter, the data will be sorted to accommodate that proportion. If split_frac is 0.5, 50% of the observations will go to each of the basis and test sets. If split_frac is 0.6, 60% of the samples will go to the basis data set and the remaining 40% to the test set.
If split_distinct is True, the data will be sorted considering unique values in the selected variable. Check the parameter below for more details.
If the user defines a numeric cut-off value or a specific date using the cut_off parameter, the observations with values <= cut-off will go to the basis data set and the remaining ones to the test set. For categorical values this means they are sorted alphabetically and cut accordingly.
If the user passes a list of values in cut_off, the observations with those values will go to the basis set, and the remaining ones to the test set.
More details in the User Guide.
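A minimal usage sketch follows; the dataframe, column names and parameter choices are illustrative only. Here the data is split on a date column through split_frac; a cut_off (for example a specific date) could be passed instead and would take precedence:

    import numpy as np
    import pandas as pd
    from feature_engine.selection import DropHighPSIFeatures

    # toy data: two numerical features observed over time (names are made up)
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "date": pd.date_range("2021-01-01", periods=1000, freq="D"),
        "var_a": rng.normal(0, 1, size=1000),                        # stable feature
        "var_b": np.linspace(0, 5, 1000) + rng.normal(0, 1, 1000),   # drifts over time
    })

    # sort by the date column and send the first 60% of observations to the basis set
    transformer = DropHighPSIFeatures(split_col="date", split_frac=0.6)
    df_t = transformer.fit_transform(df)   # the drifting feature would typically be dropped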
- Parameters
- split_col: string or int, default=None.
The variable that will be used to split the dataset into the basis and test sets. If None, the dataframe index will be used. split_col can be a numerical, categorical or datetime variable. If split_col is a categorical variable, and the splitting criterion is given by split_frac, it is assumed that the labels of the variable are sorted alphabetically.
- split_frac: float, default=0.5.
The proportion of observations in each of the basis and test dataframes. If split_frac is 0.6, 60% of the observations will be put in the basis data set. If split_distinct is True, the indicated fraction may not be achieved exactly; see the split_distinct parameter for more details. If cut_off is not None, split_frac will be ignored and the data will be split based on the cut_off value.
- split_distinct: boolean, default=False.
If True, split_frac is applied to the vector of unique values in split_col instead of to the whole vector of values. For example, if the values in split_col are [1, 1, 1, 1, 2, 2, 3, 4] and split_frac is 0.5:
- split_distinct=False splits the vector into two equally sized parts: [1, 1, 1, 1] and [2, 2, 3, 4]. This means that two dataframes with 4 observations each are used for the PSI calculations.
- split_distinct=True computes the vector of unique values in split_col ([1, 2, 3, 4]) and splits that vector into two equal parts: [1, 2] and [3, 4]. The number of observations in the two dataframes used for the PSI calculations is then 6 ([1, 1, 1, 1, 2, 2]) and 2 ([3, 4]), respectively. A short sketch reproducing this split is shown after the parameter list.
- cut_off: int, float, date or list, default=None
Threshold to split the dataset based on the split_col variable. If int, float or date, observations where the split_col values are <= the cut-off will go to the basis data set and the rest to the test set. If cut_off is a list, the observations where the split_col values are within the list will go to the basis data set and the remaining observations to the test set. If cut_off is not None, this parameter will be used to split the data and split_frac will be ignored.
- switch: boolean, default=False.
If True, the order of the two dataframes used to determine the PSI (basis and test) will be switched. This is important because the PSI is not symmetric, i.e., PSI(a, b) != PSI(b, a).
- threshold: float, default = 0.25.
The threshold to drop a feature. If the PSI for a feature is >= threshold, the feature will be dropped. The most common threshold values are 0.25 (large shift) and 0.10 (medium shift).
- bins: int, default = 10
Number of bins or intervals. For continuous features with good value spread, 10 bins is commonly used. For features with lower cardinality or highly skewed distributions, lower values may be required.
- strategy: string, default=’equal_frequency’
Whether the intervals into which the features will be discretized should have equal width or an equal number of observations. Takes values “equal_width” for equally spaced bins or “equal_frequency” for bins based on quantiles, that is, bins with a similar number of observations.
- min_pct_empty_bins: float, default = 0.0001
Value to add to empty bins or intervals. If after sorting the variable values into bins, a bin is empty, the PSI cannot be determined. By adding a small number to empty bins, we can avoid this issue. Note that if the value added is too large, it may disturb the PSI calculation.
- missing_values: str, default=’raise’
Whether to perform the PSI feature selection on a dataframe with missing values. Takes values ‘raise’ or ‘ignore’. If ‘ignore’, missing values will be dropped when determining the PSI for that particular feature. If ‘raise’ the transformer will raise an error and features will not be selected.
- variables: str or list, default=None
The list of variables to evaluate. If None, the transformer will evaluate all numerical features in the dataset.
- confirm_variables: bool, default=False
If set to True, variables that are not present in the input dataframe will be removed from the list of variables. Only used when passing a variable list to the variables parameter. See the variables parameter for more details.
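The following plain pandas sketch reproduces the split_distinct example from the parameter description above; it mirrors the described behaviour but is not the library’s internal code:

    import numpy as np
    import pandas as pd

    # the split_col values from the example above
    values = pd.Series([1, 1, 1, 1, 2, 2, 3, 4])

    # split_distinct=False: cut the sorted vector of values in half (4 vs 4 observations)
    half = int(len(values) * 0.5)
    basis, test = values.iloc[:half], values.iloc[half:]

    # split_distinct=True: cut the vector of unique values in half instead,
    # giving 6 observations ([1, 1, 1, 1, 2, 2]) and 2 observations ([3, 4])
    uniques = np.sort(values.unique())
    basis_values = uniques[: int(len(uniques) * 0.5)]
    basis_d = values[values.isin(basis_values)]
    test_d = values[~values.isin(basis_values)]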
- Attributes
- features_to_drop_:
List with the features that will be dropped.
- variables_:
The variables that will be considered for the feature selection procedure.
- psi_values_:
Dictionary containing the PSI value per feature.
- cut_off_:
Value used to split the dataframe into basis and test. This value is computed when not given as parameter.
- feature_names_in_:
List with the names of features seen during fit.
- n_features_in_:
The number of features in the train set used in fit.
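After fitting, these attributes can be inspected directly. Continuing the illustrative example from above (the values shown in the comments are made up):

    transformer.fit(df)

    transformer.psi_values_        # e.g. {'var_a': 0.02, 'var_b': 1.8}
    transformer.features_to_drop_  # features whose PSI is >= threshold, e.g. ['var_b']
    transformer.cut_off_           # the value actually used to split the data
    transformer.variables_         # the numerical variables that were evaluated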
References
https://scholarworks.wmich.edu/cgi/viewcontent.cgi?article=4249&context=dissertations
Methods
fit:
Find features with high PSI values.
fit_transform:
Fit to data, then transform it.
get_feature_names_out:
Get output feature names for transformation.
get_params:
Get parameters for this estimator.
set_params:
Set the parameters of this estimator.
transform:
Remove features with high PSI values.
- fit(X, y=None)[source]#
Find features with high PSI values.
- Parameters
- X: pandas dataframe of shape = [n_samples, n_features]
The training dataset.
- y: pandas series. Default = None
y is not needed in this transformer. You can pass y or None.
- fit_transform(X, y=None, **fit_params)[source]#
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params, and returns a transformed version of X.
- Parameters
- X: array-like of shape (n_samples, n_features)
Input samples.
- y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
- **fit_params: dict
Additional fit parameters.
- Returns
- X_new: ndarray of shape (n_samples, n_features_new)
Transformed array.
- get_feature_names_out(input_features=None)[source]#
Get output feature names for transformation.
- Parameters
- input_features: None
This parameter exists only for compatibility with the Scikit-learn pipeline, but has no functionality. You can pass a list of feature names or None.
- Returns
- feature_names_out: list
The feature names.
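For example, with the illustrative transformer fitted earlier:

    # names of the columns kept after dropping the high-PSI features
    transformer.get_feature_names_out()   # e.g. ['date', 'var_a']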
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters
- deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
- params: dict
Parameter names mapped to their values.
- set_params(**params)[source]#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
- **params: dict
Estimator parameters.
- Returns
- self: estimator instance
Estimator instance.
- transform(X)[source]#
Return dataframe with selected features.
- Parameters
- X: pandas dataframe of shape = [n_samples, n_features].
The input dataframe.
- Returns
- X_new: pandas dataframe of shape = [n_samples, n_selected_features]
Pandas dataframe with the selected features.
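For example, continuing the illustrative example from above:

    # drop the features flagged during fit; the remaining columns are returned unchanged
    df_t = transformer.transform(df)
    list(df_t.columns)   # e.g. ['date', 'var_a']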