3 books on Machine Learning [PDF]
September 15, 2025
Books on machine learning cover supervised and unsupervised learning, reinforcement learning, and the training of neural networks and other models (Random Forest, SVM, k-NN, Logistic Regression, and more). They describe the process of algorithm design, feature engineering, and model evaluation.
1. Machine Learning Hero: Master Data Science with Python Essentials
2025 by Cuantum Technologies LLC

Machine learning is the #1 technology for data science, and Python is the #1 language for data science thanks to its ready-made machine learning libraries. These libraries are described in this book. NumPy is a library for mathematical operations on multidimensional arrays and matrices; for example, it can quickly multiply matrices, which is the basic operation for computing the output signals of a multilayer neural network. NumPy is also the foundation for two other libraries, Pandas and TensorFlow. Pandas is built on top of it for analyzing and cleaning large (structured) datasets; it operates mainly with two data structures: Series, a one-dimensional labeled array, and DataFrame, a collection of such arrays that forms a table. TensorFlow lets you operate on entire layers of a neural network. Matplotlib and Seaborn, libraries for data visualization, are also covered in the book. The book is GPT-written but human-revised.
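As a quick illustration of the ideas above (my own sketch, not code from the book), the snippet below uses NumPy's matrix multiplication to compute the output of one dense neural-network layer and builds a small Pandas DataFrame out of Series objects:

    # Illustrative example only; all names and values are hypothetical.
    import numpy as np
    import pandas as pd

    # One dense layer: output = weights @ input + bias
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(3, 4))   # 3 output neurons, 4 input features
    bias = rng.normal(size=3)
    x = rng.normal(size=4)              # one input sample
    print(weights @ x + bias)           # fast matrix multiplication in NumPy

    # Pandas: a DataFrame is a set of Series (columns) sharing one index
    heights = pd.Series([1.71, 1.64, 1.82], name="height_m")
    masses = pd.Series([68.0, 55.5, 90.2], name="mass_kg")
    df = pd.DataFrame({"height_m": heights, "mass_kg": masses})
    print(df.describe())                # quick summary useful for data cleaning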
Download PDF
2. Machine Learning For Dummies
2021 by John Paul Mueller, Luca Massaron

In this book, along with the fundamentals of machine learning, you will find interesting ideas such as the claim that there is an ML "master algorithm" built on five main techniques:
1. Symbolic reasoning - building a chain of reasoning by induction or deduction. Induction opens up new areas for research, while deduction explores those areas.
2. Modeling brain neurons - each neuron (an algorithm simulating a real one) solves a small part of the problem, and many neurons working in parallel solve the problem as a whole. Backpropagation of errors shows how to reduce the network's error by adjusting the weights and biases of the neurons (a tiny sketch of this follows the list below).
3. Evolutionary algorithms - a strategy based on natural selection (discarding all candidate solutions that do not match the desired result). Using a tree structure, it searches for the best solution with a fitness function; the winner of each generation of evolution gets the right to create the candidates of the next one.
4. Bayesian inference - uses various statistical methods to solve problems. Since statistical methods can produce more than one plausible solution, choosing one becomes a matter of identifying the hypothesis with the highest probability of success.
5. Learning by analogy - measuring the similarity or difference between objects and using it to classify or predict.
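The backpropagation idea in technique 2 can be illustrated with a minimal sketch (my own example, not from the book): a single sigmoid neuron learns one training pair by repeatedly nudging its weights and bias in the direction that reduces the squared error.

    # Hypothetical single-neuron backpropagation example (not from the book).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, -1.2, 0.8])   # one input sample
    target = 1.0                     # desired output
    w = np.zeros(3)                  # weights
    b = 0.0                          # bias
    lr = 0.5                         # learning rate

    for _ in range(100):
        y = sigmoid(w @ x + b)                # forward pass
        grad_z = (y - target) * y * (1 - y)   # gradient of squared error w.r.t. pre-activation
        w -= lr * grad_z * x                  # move weights against the gradient
        b -= lr * grad_z                      # move bias against the gradient

    print(round(float(sigmoid(w @ x + b)), 3))  # output is now close to the target of 1.0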
Download PDF
3. Interpretable Machine Learning
2020 by Christoph Molnar

One of the main problems with machine learning systems is their lack of explainability: a trained neural network is a black box for its operator. This is natural (by design), because training an ML model creates internal parameters that do not correspond to any real-life objects or phenomena, and the model's prediction ultimately depends on them. This book is devoted to creating interpretable machine learning models and explaining their decisions. The author is not talking about neural networks for language generation or computer vision, but about simple, interpretable models such as decision trees, decision rules, and linear regression. The book also covers model-agnostic methods for interpreting "black boxes", such as feature importance, accumulated local effects, and explaining individual predictions with Shapley values and LIME. The author notes that new methods for interpreting machine learning models are published at an incredible speed, and it is simply impossible to keep track of everything. Therefore, in this book you will not find the latest "fashionable" interpretability methods, only proven approaches and basic concepts. This foundation will prepare you to build interpretable models and to better understand and evaluate new papers on interpretability published on arxiv.org.
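To give a flavor of one model-agnostic technique mentioned above, the sketch below (my own example, not taken from the book) computes permutation feature importance for a decision tree with scikit-learn: each feature is shuffled in turn, and the resulting drop in accuracy measures how much the model relies on it.

    # Illustrative permutation feature importance example (not from the book).
    from sklearn.datasets import load_breast_cancer
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    # Shuffle each feature 10 times and record how much the test accuracy drops.
    result = permutation_importance(tree, X_test, y_test, n_repeats=10, random_state=0)
    top5 = sorted(zip(X.columns, result.importances_mean),
                  key=lambda pair: pair[1], reverse=True)[:5]
    for name, score in top5:
        print(f"{name}: {score:.3f}")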
Download PDF
How to download PDF:
1. Install Gooreader
2. Enter the Book ID into the search box and press Enter
3. Click "Download Book" icon and select PDF*
* Note that for yellow books, only preview pages are downloaded