From the Publisher
Chris Kuo
Chris Kuo has been a quantitative professional for more than 20 years. During that time, he has contributed data science solutions to industrial operations including customer analytics, risk segmentation, insurance underwriting, claims, workers’ compensation, fraud detection, and litigation. He holds a U.S. patent. He has worked at the Hartford Insurance Group (HIG), the American International Group (AIG), Liberty Mutual Insurance, BJ’s Wholesale Club, and Brookstone Inc.
Chris Kuo is a passionate educator. He has been an adjunct professor at Columbia University, Boston University, the University of New Hampshire, and Liberty University since 2001. He has published articles in economics and management journals and has served as a reviewer for related journals. He is the author of The eXplainable A.I., Modern Time Series Anomaly Detection: With Python & R Code Examples, and Transfer Learning for Image Classification: With Python Examples. He is known as Dr. Dataman on Medium.com.
He received his undergraduate degree in Nuclear Engineering from National Tsing Hua University in Taiwan and his Ph.D. in Economics from the State University of New York at Stony Brook. He lives in New York City with his wife, France.
Books by Chris Kuo
Modern Time Series Anomaly Detection
This book is for data science professionals who want hands-on practice with cutting-edge techniques in time series modeling, forecasting, and anomaly detection. It is also suitable for students who want to advance in time series modeling and fraud detection.
Transfer Learning for Image Classification
Transfer learning techniques have gained popularity in recent years. They make it possible to build effective image classification models for specialized use cases. The subject is not easy to navigate because it draws on two decades of work in Computer Vision (CV). This book explains transfer learning, its relationship to other fields in CV, the development of pre-trained models, and how to apply transfer learning to pre-trained models. It guides you through building your own transfer learning models so you can apply these techniques successfully to your own image classification problems.
The eXplainable A.I.
How AI systems make decisions is opaque to most people. Many algorithms, though highly accurate, offer no easy way to understand how a recommendation was made; this is especially true of deep learning models. To trust the decisions of AI systems, we must be able to understand how those decisions are made. We need ML models that function as expected, produce transparent explanations, and are visible in how they work. Explainable AI (XAI) is an important research area that has been guiding the development of AI. It enables humans to understand models so they can effectively manage the benefits AI systems provide while maintaining a high level of prediction accuracy. Explainable AI builds users’ trust in AI systems by answering questions such as:
● Why does the model predict that result?
● What are the reasons for a prediction?
● What is the prediction interval?
● How does the model work?
ASIN : B0B4F98MN6
Publication date : June 16, 2022
Language : English
File size : 6343 KB
Text-to-Speech : Enabled
Screen Reader : Supported
Enhanced typesetting : Enabled
X-Ray : Not Enabled
Word Wise : Not Enabled
Print length : 132 pages
The eXplainable A.I.: With Python examples
Artificial Intelligence (A.I.) has revolutionized the way we interact with technology, allowing machines to perform complex tasks that were once thought to be exclusive to human intelligence. However, as A.I. systems become more sophisticated, concerns about their transparency and interpretability have grown. This has led to the development of eXplainable A.I. (XAI), which aims to make A.I. systems more understandable and trustworthy.
Python, with its simplicity and versatility, has become a popular choice for developing A.I. applications. In this post, we will explore how Python can be used to create eXplainable A.I. models, with examples to demonstrate the process.
- Interpretable Machine Learning Models: One way to make A.I. systems more explainable is to use interpretable machine learning models. Decision trees, for example, are easy to interpret as they represent a series of if-then rules. In Python, you can use libraries like scikit-learn to build decision tree models and visualize them using tools like Graphviz.
```python
from sklearn import datasets
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import graphviz

# Load the Iris dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and train a decision tree classifier
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)

# Check accuracy on the held-out test set
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Visualize the decision tree with Graphviz
dot_data = tree.export_graphviz(clf, out_file=None, feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True)
graph = graphviz.Source(dot_data)
graph.render("iris_tree")
```
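Since the point of this example is that a decision tree reads as a series of if-then rules, it can also help to print those rules directly. Here is a minimal sketch using scikit-learn’s export_text utility; it assumes the clf and iris objects from the snippet above:

```python
from sklearn.tree import export_text

# Print the fitted tree as plain-text if-then rules
print(export_text(clf, feature_names=iris.feature_names))
```

This conveys the same information as the Graphviz rendering, with no extra dependencies, which is handy for quick checks.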
- Local Interpretable Model-agnostic Explanations (LIME): LIME is a technique that explains the predictions of any machine learning model by approximating it locally with an interpretable model. In Python, you can use the lime library to generate explanations for individual predictions.
```python
from lime import lime_tabular

# Build an explainer from the training data
explainer = lime_tabular.LimeTabularExplainer(X_train, feature_names=iris.feature_names, class_names=iris.target_names, discretize_continuous=True)

# Explain the first test instance
exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=3)
exp.show_in_notebook()
```

By incorporating eXplainable A.I. techniques like interpretable models and LIME into your Python-based A.I. projects, you can enhance the transparency and trustworthiness of your models. This not only improves the understanding of how A.I. systems make decisions but also helps identify and mitigate potential biases or errors. As A.I. continues to play a crucial role in various industries, the importance of eXplainable A.I. cannot be overstated.
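One practical note beyond the example above, which only shows the notebook workflow: show_in_notebook renders inside Jupyter, so in a plain script you can read the same explanation through the explanation object’s as_list method. A minimal sketch, assuming the exp object from the snippet above:

```python
# Outside a notebook, inspect the LIME explanation as (feature, weight) pairs
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```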