Feature Selection in Machine Learning with Python
Over 20 methods to select the most predictive features and build simpler, faster, and more reliable machine learning models.
Learn how to implement various feature selection methods in just a few lines of code to train faster, simpler, and more reliable machine learning models.
Using Python open-source libraries, you will learn how to identify the most predictive features from your data through filter, wrapper, embedded, and other feature selection methods.
You'll learn the advantages and limitations of each method, and be ready to choose the best one based on your data and the model you want to train.
👉 epub and pdf copies
👉 155 pages
👉 English
Can't afford it? Get in touch.
30-day money-back guarantee
If you're disappointed for whatever reason, you'll get a full refund.
So you can buy with confidence.
Soledad Galli, PhD
Sole is a lead data scientist, instructor, and developer of open-source software. She created and maintains the Python library Feature-engine, which allows us to impute data, encode categorical variables, and transform, create, and select features. Sole is also the author of the "Python Feature Engineering Cookbook," published by Packt.
More about Sole on LinkedIn.
Table of Contents
- Chapter 1: Feature Selection Overview
- What is feature selection?
- Why do we select features?
- Feature selection methods
- Filter methods
- Wrapper methods
- Embedded methods
- Other methods
- Chapter 2: Basic Feature Selection Methods
- Constant features
- Quasi-constant features
- Duplicated features
- Chapter 3: Correlation of Predictors
- Correlation coefficients
- Visualizing correlated features
- Remove correlated features: retain first, remove the rest
- Remove correlated features: retain best feature, remove the rest
- Correlation of categorical variables
- Chapter 4: Statistical Methods
- Chi-square
- ANOVA
- Correlation
- Mutual information
- Chapter 5: Univariate Feature Selection
- Single feature model
- Target encoding
- Chapter 6: Wrapper Methods
- Exhaustive search
- Forward feature selection
- Backward feature elimination
- Chapter 7: Embedded Methods
- Lasso
- Feature importance from decision trees
- Recursive feature elimination by feature importance
- Chapter 8: Other Methods
- Recursive feature addition
- Recursive feature elimination
- Feature shuffling
- Probe features
- MRMR
Description
Feature selection is the process of selecting a subset of features from the variables in a data set to train machine learning models.
Feature selection is key to developing simpler, faster, and more robust machine learning models, and it can help to avoid overfitting.
The aim of any feature selection algorithm is to create classifiers or regression models that run faster and whose outputs are easier for their users to understand.
In this book, you will find the most widely used feature selection methods to select the best subsets of predictor variables from your data. You will learn about filter, wrapper, and embedded methods for feature selection. Then, you will discover methods designed by computer science professionals or used in data science competitions that are faster or more scalable.
First, we will discuss the use of statistical tests and univariate algorithms for feature selection. Next, we will cover methods that select features through optimization of the model performance. We will move on to feature selection algorithms that are baked into the machine learning techniques. And finally, we will discuss additional methods designed by data scientists specifically for applied predictive modeling.
In this book, you will find out how to:
- Remove useless and redundant features by examining variability and correlation.
- Choose features based on statistical tests such as ANOVA, chi-square, and mutual information.
- Select features by using Lasso regularization or decision tree based feature importance, which are embedded in the machine learning modeling process.
- Select features by recursive feature elimination, addition, or value permutation.
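To give a flavor of what these implementations look like, here is a minimal sketch of three of the techniques above using scikit-learn. The dataset and parameter values are illustrative choices for this example, not taken from the book.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import (SelectFromModel, SelectKBest,
                                       VarianceThreshold, f_classif)
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for your own dataset.
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=0)

# 1) Remove features with zero variance (constant features).
X_var = VarianceThreshold(threshold=0.0).fit_transform(X)

# 2) Keep the 5 features with the highest ANOVA F-statistic.
X_kbest = SelectKBest(score_func=f_classif, k=5).fit_transform(X_var, y)

# 3) Keep features with non-zero coefficients under L1 (Lasso-style)
#    regularization, an embedded method.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
X_l1 = selector.fit_transform(X_var, y)

print(X.shape, X_kbest.shape, X_l1.shape)
```

Each selector follows the same fit/transform interface, which is what makes these methods easy to combine in just a few lines of code.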
Each chapter covers a group of feature selection methods that share common characteristics.
For each method, you will first learn its fundamentals, and then find a Python implementation.
The book comes with an accompanying GitHub repository with the full source code that you can download, modify, and use in your own data science projects and case studies.
Feature selection methods differ from dimensionality reduction methods in that feature selection techniques do not alter the original representation of the variables, but merely select a reduced number of features from the training data that produce performant machine learning models.
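This distinction is easy to see in code. In the illustrative sketch below (not from the book), a feature selection transformer returns untouched columns of the original data, whereas a dimensionality reduction technique such as PCA returns new columns that are linear combinations of the originals.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=100, n_features=6, random_state=0)

# Feature selection: pick 3 of the original 6 columns, unmodified.
selector = SelectKBest(f_classif, k=3).fit(X, y)
X_selected = X[:, selector.get_support()]

# Dimensionality reduction: 3 new columns, each a mix of all 6 originals.
X_pca = PCA(n_components=3).fit_transform(X)

# The selected columns are identical to columns of the original data.
print(np.array_equal(X_selected,
                     X[:, selector.get_support(indices=True)]))  # True
```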
Using the Python libraries Scikit-learn, MLXtend, and Feature-engine, you’ll learn how to select the best numerical and categorical features for regression and classification models in just a few lines of code. You will also learn how to make feature selection part of your machine learning workflow.
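As a hedged sketch of what "part of your machine learning workflow" can mean, the example below wires a selection step into a scikit-learn Pipeline; the estimator and the choice of k are illustrative, not prescriptions from the book.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

pipe = Pipeline([
    # Keep the 5 features with the highest mutual information with the target.
    ("select", SelectKBest(mutual_info_classif, k=5)),
    ("model", LogisticRegression(max_iter=1000)),
])

# The selector is refit inside each fold, so the test folds never
# influence which features are chosen (no data leakage).
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```

Embedding selection in the pipeline ensures it is re-run consistently during cross-validation and at prediction time.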