Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists



Publisher: O’Reilly Media; 1st edition (May 8, 2018)
Language: English
Paperback: 215 pages
ISBN-10: 1491953241
ISBN-13: 978-1491953242
Item Weight: 11.2 ounces
Dimensions: 7 x 0.4 x 9.1 inches




Feature engineering is a crucial aspect of machine learning that can greatly impact the performance of models. In this post, we will explore the principles and techniques of feature engineering that data scientists should be familiar with.

1. Understanding the Data: The first step in feature engineering is to thoroughly understand the data that is being used for model training. This includes identifying the types of features, their distributions, and any potential relationships between them.
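A minimal sketch of this first pass with pandas (the toy dataset here is purely illustrative, not from the book):

```python
import pandas as pd

# Hypothetical toy dataset for illustration
df = pd.DataFrame({
    "age": [22, 35, 58, 41],
    "income": [28_000, 52_000, 91_000, 60_000],
    "city": ["NY", "SF", "NY", "LA"],
})

print(df.dtypes)                          # feature types (numeric vs categorical)
print(df.describe())                      # summary statistics per numeric feature
print(df.select_dtypes("number").corr())  # pairwise relationships between numeric features
```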

2. Feature Selection: Not all features are created equal, and it is important to select the most relevant features for model training. This can involve techniques such as correlation analysis, feature importance ranking, and domain knowledge.
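Both the filter-style and model-based ranking approaches can be sketched with scikit-learn; the synthetic dataset and the choice of k=3 below are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic data: 10 features, of which only 3 are informative
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)

# Univariate filter: keep the k features with the strongest class association
selector = SelectKBest(f_classif, k=3).fit(X, y)
X_selected = selector.transform(X)

# Model-based ranking: impurity-based importances from a random forest
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]  # best feature first
```

Domain knowledge, the third technique mentioned above, has no API; it shows up as which candidate features you put in front of these tools in the first place.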

3. Feature Transformation: Sometimes features need to be transformed to better suit the model. This can include techniques such as log and power transformations to reduce skew, alongside the normalization and scaling covered in more detail in item 7.
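As a sketch of the log transformation (the income values are illustrative), `log1p` compresses a long right tail while preserving the ordering of values:

```python
import numpy as np

# A skewed, strictly positive feature (e.g. income); illustrative values
income = np.array([28_000.0, 52_000.0, 91_000.0, 450_000.0])

# log1p = log(1 + x): compresses the long right tail, keeps order intact
log_income = np.log1p(income)
```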

4. Feature Creation: In some cases, new features may need to be created from existing ones in order to capture important relationships in the data. This can involve techniques such as polynomial features, interaction terms, and ratios or aggregations of existing columns.
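Polynomial and interaction features can be generated mechanically with scikit-learn's `PolynomialFeatures`; the two-feature input below is an illustrative assumption:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Two base features; illustrative values
X = np.array([[2.0, 3.0],
              [1.0, 4.0]])

# Degree-2 expansion adds squares and the pairwise interaction term x1*x2
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)
# Output columns, in order: x1, x2, x1^2, x1*x2, x2^2
```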

5. Handling Missing Values: Missing values in features can have a significant impact on model performance. Data scientists should be familiar with techniques such as imputation, deletion, and using models to predict missing values.
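Mean imputation, the simplest of the techniques listed, looks like this with scikit-learn (the small matrix is illustrative; `strategy` also accepts `"median"` and `"most_frequent"`):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Illustrative matrix with one missing value per column
X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# Replace each missing value with the mean of its column
imputer = SimpleImputer(strategy="mean")
X_imputed = imputer.fit_transform(X)
```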

6. Feature Encoding: Categorical variables need to be encoded in a numerical format for machine learning models to work properly. Techniques such as one-hot encoding, label encoding, and target encoding can be used for this purpose.

7. Feature Scaling: Many models are sensitive to the numeric range of their inputs, so features should be scaled to keep any single feature from dominating. Techniques such as standardization (zero mean, unit variance) and min-max normalization can be used for this purpose.
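Standardization can be sketched with scikit-learn's `StandardScaler` (the matrix below, with one small-range and one large-range column, is illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales; illustrative values
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

# Standardization: each column rescaled to zero mean and unit variance
X_std = StandardScaler().fit_transform(X)
```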

In conclusion, feature engineering is a critical aspect of machine learning that can greatly impact the performance of models. By understanding the principles and techniques of feature engineering, data scientists can create more effective and accurate models for a wide range of applications.
