DropHighPSIFeatures#
- class feature_engine.selection.DropHighPSIFeatures(split_col=None, split_frac=0.5, split_distinct=False, cut_off=None, switch=False, threshold=0.25, bins=10, strategy='equal_frequency', min_pct_empty_bins=0.0001, missing_values='raise', variables=None, confirm_variables=False, p_value=0.001)[source]#
DropHighPSIFeatures() drops features whose Population Stability Index (PSI) is above a given threshold.
The PSI is used to compare distributions. Higher PSI values mean greater changes in a feature’s distribution. Therefore, a feature with high PSI can be considered unstable.
To compute the PSI, DropHighPSIFeatures() splits the dataset in two: a basis and a test set. Then, it compares the distribution of each feature between those sets.
To determine the PSI, continuous features are sorted into discrete intervals, and the number of observations per interval is then compared between the two datasets.
The PSI is calculated as:
PSI = sum( (test_i - basis_i) x ln(test_i / basis_i) )
where basis and test are the two datasets, i refers to each interval, and test_i and basis_i are the number of observations in interval i in each data set.
The PSI has traditionally been used to assess changes in distributions of continuous variables. In version 1.7, we extended the functionality of DropHighPSIFeatures() to calculate the PSI for categorical features as well. In this case, i is each unique category, and test_i and basis_i are the number of observations in category i.
Threshold
Different thresholds can be used to assess the magnitude of the distribution shift according to the PSI value. The most commonly used thresholds are:
Below 10%, the variable has not experienced a significant shift.
Above 25%, the variable has experienced a major shift.
Between those two values, the shift is intermediate.
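To make the formula and thresholds concrete, the PSI of a single numerical feature can be computed by hand. The snippet below is a minimal sketch that mirrors the description above, using equal-frequency bins derived from the basis data and bin fractions rather than raw counts; the data and settings are made up for illustration, and this is not the transformer's internal implementation.
>>> import numpy as np
>>> import pandas as pd
>>> basis = pd.Series(np.random.normal(0, 1, 1000))   # reference distribution
>>> test = pd.Series(np.random.normal(1, 1, 1000))    # distribution with a shifted mean
>>> _, edges = pd.qcut(basis, q=10, retbins=True, duplicates="drop")
>>> edges[0], edges[-1] = -np.inf, np.inf              # capture test values outside the basis range
>>> basis_i = pd.cut(basis, edges).value_counts(normalize=True, sort=False).values + 1e-4
>>> test_i = pd.cut(test, edges).value_counts(normalize=True, sort=False).values + 1e-4
>>> psi = np.sum((test_i - basis_i) * np.log(test_i / basis_i))
>>> psi > 0.25   # a shift of this size clearly exceeds the 25% threshold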
Data split
To compute the PSI, DropHighPSIFeatures() splits the dataset in two: a basis and a test set. Then, it compares the distribution of each feature between those sets.
There are various options to split a dataset:
First, you can indicate which variable should be used to guide the data split. This variable can be of any data type. If you do not enter a variable name, DropHighPSIFeatures() will use the dataframe index.
Next, you need to specify how that variable (or the index) should be used to split the data. You can specify a proportion of observations to be put in each data set, or alternatively, provide a cut-off value.
If you specify a proportion through the split_frac parameter, the data will be sorted to accommodate that proportion. If split_frac is 0.5, 50% of the observations will go to each of the basis and test sets. If split_frac is 0.6, 60% of the samples will go to the basis data set and the remaining 40% to the test set.
If split_distinct is True, the data will be split considering the unique values of the selected variable. Check the parameter below for more details.
If you define a numeric cut-off value or a specific date using the cut_off parameter, the observations with values <= cut-off will go to the basis data set and the remaining ones to the test set. If the variable used to guide the split is categorical, its values are sorted alphabetically and cut accordingly.
If you pass a list of values to cut_off, the observations whose values are in the list will go to the basis set, and the remaining ones to the test set.
More details in the User Guide.
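For illustration, the snippet below shows how these splitting options translate into parameters. The column names ("date", "portfolio") and the cut-off values are made up for this sketch; any dataframe with columns of the corresponding types would work.
>>> import pandas as pd
>>> from feature_engine.selection import DropHighPSIFeatures
>>> # split on the index: 60% of observations form the basis set
>>> psi = DropHighPSIFeatures(split_frac=0.6)
>>> # split on a datetime column: observations up to the cut-off date form the basis set
>>> psi = DropHighPSIFeatures(split_col="date", cut_off=pd.Timestamp("2020-06-30"))
>>> # split on a categorical column: observations with the listed values form the basis set
>>> psi = DropHighPSIFeatures(split_col="portfolio", cut_off=["A", "B"])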
- Parameters
- split_col: string or int, default=None.
The variable that will be used to split the dataset into the basis and test sets. If None, the dataframe index will be used.
split_col can be a numerical, categorical or datetime variable. If split_col is a categorical variable, and the splitting criterion is given by split_frac, it will be assumed that the labels of the variable are sorted alphabetically.
- split_frac: float, default=0.5.
The proportion of observations in each of the basis and test dataframes. If split_frac is 0.6, 60% of the observations will be put in the basis data set.
If split_distinct is True, the indicated fraction may not be achieved exactly. See parameter split_distinct for more details.
If cut_off is not None, split_frac will be ignored and the data will be split based on the cut_off value.
- split_distinct: boolean, default=False.
If True, split_frac is applied to the vector of unique values in split_col instead of being applied to the whole vector of values. For example, if the values in split_col are [1, 1, 1, 1, 2, 2, 3, 4] and split_frac is 0.5, we have the following:
split_distinct=False splits the vector in two equally sized parts: [1, 1, 1, 1] and [2, 2, 3, 4]. This means that two dataframes with 4 observations each are used for the PSI calculations.
split_distinct=True computes the vector of unique values in split_col ([1, 2, 3, 4]) and splits that vector in two equal parts: [1, 2] and [3, 4]. The number of observations in the two dataframes used for the PSI calculations is respectively 6 ([1, 1, 1, 1, 2, 2]) and 2 ([3, 4]).
- cut_off: int, float, date or list, default=None
Threshold to split the dataset based on the split_col variable. If int, float or date, observations where the split_col values are <= threshold will go to the basis data set and the rest to the test set. If cut_off is a list, the observations where the split_col values are within the list will go to the basis data set and the remaining observations to the test set. If cut_off is not None, this parameter will be used to split the data and split_frac will be ignored.
- switch: boolean, default=False.
If True, the order of the 2 dataframes used to determine the PSI (basis and test) will be switched. This is important because the PSI is not symmetric, i.e., PSI(a, b) != PSI(b, a).
- threshold: float, str, default = 0.25.
The threshold to drop a feature. If the PSI for a feature is >= threshold, the feature will be dropped. The most common threshold values are 0.25 (large shift) and 0.10 (medium shift). If ‘auto’, the threshold will be calculated based on the size of the basis and test dataset and the number of bins as:
threshold = χ2(q, B−1) × (1/N + 1/M)
where:
q = quantile of the distribution (or 1 - p-value),
B = number of bins/categories,
N = size of basis dataset,
M = size of test dataset.
See formula (5.2) from reference [1]; an illustrative calculation of this formula is shown after the parameter list below.
- bins: int, default = 10
Number of bins or intervals. For continuous features with good value spread, 10 bins is commonly used. For features with lower cardinality or highly skewed distributions, lower values may be required.
- strategy: string, default=’equal_frequency’
Whether the intervals into which the features are discretized should be of equal width or contain an equal number of observations. Takes values “equal_width” for equally spaced bins or “equal_frequency” for bins based on quantiles, that is, bins with a similar number of observations.
- min_pct_empty_bins: float, default = 0.0001
Value to add to empty bins or intervals. If, after sorting the variable values into bins, a bin is empty, the PSI cannot be determined. By adding a small number to empty bins, we can avoid this issue. Note that if the value added is too large, it may distort the PSI calculation.
- missing_values: str, default=’raise’
Whether to perform the PSI feature selection on a dataframe with missing values. Takes values ‘raise’ or ‘ignore’. If ‘ignore’, missing values will be dropped when determining the PSI for that particular feature. If ‘raise’ the transformer will raise an error and features will not be selected.
- p_value: float, default = 0.001
The p-value to test the null hypothesis that there is no feature drift. In that case, the PSI value approximates a random variable that follows a chi-square distribution. See [1] for details. This parameter is used only if threshold is set to ‘auto’.
- variables: int, str, list, default = None
The list of variables to evaluate. If None, the transformer will evaluate all numerical variables in the dataset. If "all", the transformer will evaluate all categorical and numerical variables in the dataset. Alternatively, the transformer will evaluate the variables indicated in the list or string.
- confirm_variables: bool, default=False
If set to True, variables that are not present in the input dataframe will be removed from the list of variables. Only used when passing a variable list to the parameter variables. See parameter variables for more details.
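As a worked illustration of the ‘auto’ threshold formula given under the threshold parameter, the value can be reproduced with scipy. The bin number, dataset sizes and p-value below are made up, and the snippet only mirrors the formula as written above; it is not the transformer's internal code.
>>> from scipy.stats import chi2
>>> p_value, B, N, M = 0.001, 10, 50_000, 50_000
>>> q = 1 - p_value                                        # quantile of the chi-square distribution
>>> auto_threshold = chi2.ppf(q, B - 1) * (1 / N + 1 / M)  # formula (5.2) from reference [1]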
- Attributes
- features_to_drop_:
List with the features that will be dropped.
- variables_:
The variables that will be considered for the feature selection procedure.
- psi_values_:
Dictionary containing the PSI value per feature.
- cut_off_:
Value used to split the dataframe into basis and test sets. This value is computed by the transformer when cut_off is not given as a parameter.
- feature_names_in_:
List with the names of features seen during fit.
- n_features_in_:
The number of features in the train set used in fit.
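As a usage sketch, the fitted attributes listed above can be inspected after calling fit; the dataframe X is assumed to be the one from the Examples section further below, and the exact values depend on the data and the split.
>>> psi = DropHighPSIFeatures()
>>> psi.fit(X)
>>> psi.psi_values_         # dictionary with one PSI value per evaluated feature
>>> psi.features_to_drop_   # features whose PSI is at or above the threshold
>>> psi.cut_off_            # index (or split_col) value used to split basis and test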
References
- [1] Yurdakul B. “Statistical properties of population stability index”. Western Michigan University, 2018. https://scholarworks.wmich.edu/dissertations/3208/
Examples
>>> import pandas as pd
>>> from feature_engine.selection import DropHighPSIFeatures
>>> X = pd.DataFrame(dict(
>>> x1 = [1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
>>> x2 = [32,87,6,32,11,44,8,7,9,0,32,87,6,32,11,44,8,7,9,0],
>>> ))
>>> psi = DropHighPSIFeatures()
>>> psi.fit_transform(X)
    x2
0   32
1   87
2    6
3   32
4   11
5   44
6    8
7    7
8    9
9    0
10  32
Methods
- fit: Find features with high PSI values.
- fit_transform: Fit to data, then transform it.
- get_feature_names_out: Get output feature names for transformation.
- get_params: Get parameters for this estimator.
- set_params: Set the parameters of this estimator.
- get_support: Get a mask, or integer index, of the features selected.
- transform: Remove features with high PSI values.
- fit(X, y=None)[source]#
Find features with high PSI values.
- Parameters
- X: pandas dataframe of shape = [n_samples, n_features]
The training dataset.
- y: pandas series. Default = None
y is not needed in this transformer. You can pass y or None.
- fit_transform(X, y=None, **fit_params)[source]#
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
- Parameters
- X: array-like of shape (n_samples, n_features)
Input samples.
- y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
- **fit_params: dict
Additional fit parameters.
- Returns
- X_new: ndarray of shape (n_samples, n_features_new)
Transformed array.
- get_feature_names_out(input_features=None)[source]#
Get output feature names for transformation. In other words, returns the variable names of the transformed dataframe.
- Parameters
- input_features: array or list, default=None
This parameter exists only for compatibility with the Scikit-learn pipeline.
If None, then feature_names_in_ is used as feature names in.
If an array or list, then input_features must match feature_names_in_.
- Returns
- feature_names_out: list
Transformed feature names.
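For instance, with the transformer fitted on the dataframe from the Examples section (where x1 is dropped and x2 retained), the method returns the remaining feature names:
>>> psi.get_feature_names_out()
['x2']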
- get_metadata_routing()[source]#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns
- routing: MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters
- deep: bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
- params: dict
Parameter names mapped to their values.
- get_support(indices=False)[source]#
Get a mask, or integer index, of the features selected.
- Parameters
- indices: bool, default=False
If True, the return value will be an array of integers, rather than a boolean mask.
- Returns
- support: array
An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True if its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
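Continuing with the transformer fitted on the dataframe from the Examples section (x1 dropped, x2 retained), both return modes can be used as follows; output is omitted in this sketch:
>>> psi.get_support()              # boolean mask over the input features, e.g. x1 -> False, x2 -> True
>>> psi.get_support(indices=True)  # integer positions of the retained features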
- set_params(**params)[source]#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters
- **params: dict
Estimator parameters.
- Returns
- self: estimator instance
Estimator instance.
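For example, to tighten the selection after instantiation (a minimal sketch; the parameter values are arbitrary):
>>> psi = DropHighPSIFeatures()
>>> psi.set_params(threshold=0.10, bins=5)   # returns the estimator with the updated parameters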