I’ve recently launched the Homemade Machine Learning repository, which contains examples of popular machine learning algorithms and approaches (like linear/logistic regression, K-Means clustering, and neural networks) implemented in Python, with the mathematics behind them explained. Each algorithm has an interactive Jupyter Notebook demo that lets you play with the training data and algorithm configurations and immediately see the results, charts, and predictions right in your browser. In most cases, the explanations are based on this great machine learning course by Andrew Ng.
The purpose of the repository is not to implement machine learning algorithms using 3rd-party library “one-liners” but rather to practice implementing them from scratch and, in the process, get a better understanding of the mathematics behind each algorithm. That’s why all the algorithm implementations are called “homemade”.
The main Python libraries used there are NumPy and Pandas, which handle efficient matrix operations and loading/parsing of CSV datasets. In the Jupyter Notebook demos, Matplotlib and Plotly are used for data visualization.
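For example, a typical data-loading step looks something like the sketch below (the file path and column names are just an illustration, not necessarily the exact ones used in the repository):

```python
import numpy as np
import pandas as pd

# Load a CSV dataset into a DataFrame (path and column names are hypothetical).
data = pd.read_csv('data/world-happiness-report-2017.csv')

# Extract features and labels as NumPy arrays for fast matrix operations.
x = data[['Economy..GDP.per.Capita.']].values
y = data[['Happiness.Score']].values
```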
Currently, the following topics have been covered:
Supervised Learning
In supervised learning we have a set of training examples as an input and a set of labels or "correct answers" for each training example as an output. Then we train our model (the machine learning algorithm's parameters) to map the input to the output correctly (to make correct predictions). The ultimate purpose is to find model parameters that will keep the input→output mapping (the predictions) correct even for new input examples.
Regression
In regression problems we predict real values. Essentially, we try to draw a line/plane/n-dimensional hyperplane through the training examples.
Usage examples: stock price forecasting, sales analysis, dependencies between any numeric values, etc.
🤖 Linear Regression
- 📗 Math | Linear Regression - theory and links for further readings
- ⚙️ Code | Linear Regression - implementation example
- ▶️ Demo | Univariate Linear Regression - predict country happiness `score` by `economy GDP`
- ▶️ Demo | Multivariate Linear Regression - predict country happiness `score` by `economy GDP` and `freedom index`
- ▶️ Demo | Non-linear Regression - use linear regression with polynomial and sinusoid features to predict non-linear dependencies.
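To give a taste of what “homemade” means in practice, here is a minimal NumPy sketch of univariate linear regression trained with batch gradient descent. It is a simplified illustration rather than the repository's actual implementation, and the learning rate and iteration count are arbitrary:

```python
import numpy as np

def train_linear_regression(x, y, alpha=0.5, iterations=2000):
    """Fit y ≈ theta_0 + theta_1 * x by batch gradient descent."""
    theta_0, theta_1 = 0.0, 0.0
    for _ in range(iterations):
        error = (theta_0 + theta_1 * x) - y
        # Gradient of the mean squared error cost for each parameter.
        theta_0 -= alpha * error.mean()
        theta_1 -= alpha * (error * x).mean()
    return theta_0, theta_1

# Toy data: y = 2x + 1 plus a little noise.
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + 0.05 * np.random.randn(50)
print(train_linear_regression(x, y))  # parameters should approach (1, 2)
```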
Classification
In classification problems we split input examples by a certain characteristic.
Usage examples: spam filters, language detection, finding similar documents, handwritten letter recognition, etc.
🤖 Logistic Regression
- 📗 Math | Logistic Regression - theory and links for further readings
- ⚙️ Code | Logistic Regression - implementation example
- ▶️ Demo | Logistic Regression (Linear Boundary) - predict Iris flower `class` based on `petal_length` and `petal_width`
- ▶️ Demo | Logistic Regression (Non-Linear Boundary) - predict microchip `validity` based on `param_1` and `param_2`
- ▶️ Demo | Multivariate Logistic Regression - recognize handwritten digits from `28x28` pixel images.
- ▶️ Demo | Multivariate Logistic Regression | Fashion MNIST - recognize types of clothes from `28x28` pixel images.
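The core of logistic regression is the sigmoid hypothesis, which turns a linear combination of features into a probability. Below is a minimal sketch of that idea (again an illustration with arbitrary hyperparameters, not the repository's implementation):

```python
import numpy as np

def sigmoid(z):
    # Squash any real value into the (0, 1) range.
    return 1 / (1 + np.exp(-z))

def predict_probability(X, theta):
    # Probability that each example belongs to the positive class.
    return sigmoid(X @ theta)

def train_logistic_regression(X, y, alpha=0.1, iterations=5000):
    """Fit parameters by gradient descent on the cross-entropy cost."""
    theta = np.zeros(X.shape[1])
    for _ in range(iterations):
        gradient = X.T @ (predict_probability(X, theta) - y) / len(y)
        theta -= alpha * gradient
    return theta

# Toy one-feature example (plus a bias column): class 1 when x > 0.5.
x = np.linspace(0, 1, 100)
X = np.column_stack([np.ones_like(x), x])
y = (x > 0.5).astype(float)
theta = train_logistic_regression(X, y)
predictions = (predict_probability(X, theta) >= 0.5).astype(float)
```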
Unsupervised Learning
Unsupervised learning is a branch of machine learning that learns from data that has not been labeled, classified, or categorized. Instead of responding to feedback, unsupervised learning identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data.
Clustering
In clustering problems we split the training examples by characteristics that are unknown in advance; the algorithm itself decides which characteristic to use for splitting.
Usage examples: market segmentation, social network analysis, organizing computing clusters, astronomical data analysis, image compression, etc.
🤖 K-means Algorithm
- 📗 Math | K-means Algorithm - theory and links for further readings
- ⚙️ Code | K-means Algorithm - implementation example
- ▶️ Demo | K-means Algorithm - split Iris flowers into clusters based on `petal_length` and `petal_width`
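The algorithm itself boils down to two alternating steps: assign every point to its nearest centroid, then move each centroid to the mean of its points. A compact sketch (a simplified illustration that assumes no cluster ever ends up empty):

```python
import numpy as np

def k_means(X, k, iterations=100, seed=0):
    """Cluster the rows of X into k groups (simplified illustration)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k random training examples.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iterations):
        # Assignment step: attach each point to the closest centroid.
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: move each centroid to the mean of its points
        # (assumes every cluster keeps at least one point).
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Toy dataset: two well-separated blobs.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centroids = k_means(X, k=2)
```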
Anomaly Detection
Anomaly detection (also outlier detection) is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data.
Usage examples: intrusion detection, fraud detection, system health monitoring, removing anomalous data from the dataset, etc.
🤖 Anomaly Detection using Gaussian Distribution
- 📗 Math | Anomaly Detection using Gaussian Distribution - theory and links for further readings
- ⚙️ Code | Anomaly Detection using Gaussian Distribution - implementation example
- ▶️ Demo | Anomaly Detection - find anomalies in server operational parameters like `latency` and `threshold`
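The Gaussian approach boils down to fitting a normal distribution to each feature and flagging examples whose joint probability density falls below some threshold ε. A minimal sketch with made-up data and an arbitrary ε (in practice ε would be tuned, e.g. on a labeled validation set):

```python
import numpy as np

def fit_gaussian(X):
    # Estimate the per-feature mean and variance of the (assumed normal) data.
    return X.mean(axis=0), X.var(axis=0)

def gaussian_probability(X, mu, sigma2):
    # Product of independent per-feature Gaussian densities.
    densities = np.exp(-((X - mu) ** 2) / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    return densities.prod(axis=1)

# Hypothetical server metrics: latency (ms) and load.
X = np.random.randn(500, 2) * [50, 5] + [300, 20]
mu, sigma2 = fit_gaussian(X)
p = gaussian_probability(X, mu, sigma2)
epsilon = 1e-6  # examples with p < epsilon are flagged as anomalies
anomalies = X[p < epsilon]
```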
Neural Network (NN)
The neural network itself isn't an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs.
Usage examples: as a substitute for all other algorithms in general, image recognition, voice recognition, image processing (applying a specific style), language translation, etc.
🤖 Multilayer Perceptron (MLP)
- 📗 Math | Multilayer Perceptron - theory and links for further readings
- ⚙️ Code | Multilayer Perceptron - implementation example
- ▶️ Demo | Multilayer Perceptron | MNIST - recognize handwritten digits from `28x28` pixel images.
- ▶️ Demo | Multilayer Perceptron | Fashion MNIST - recognize types of clothes from `28x28` pixel images.
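To make the “framework” idea concrete, here is a forward pass through a one-hidden-layer perceptron with the shapes used for the MNIST demos. This sketch uses random weights and an arbitrary hidden-layer size; the repository's implementation also covers training via the cost function and backpropagation:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer perceptron (illustration only)."""
    hidden = sigmoid(X @ W1 + b1)     # hidden-layer activations
    return sigmoid(hidden @ W2 + b2)  # per-class scores in (0, 1)

# Shapes: 28x28 images flattened to 784 features, 25 hidden units, 10 classes
# (the hidden-layer size here is an arbitrary choice for the sketch).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 25)) * 0.01, np.zeros(25)
W2, b2 = rng.standard_normal((25, 10)) * 0.01, np.zeros(10)
X = rng.random((5, 784))                 # five fake "images"
print(forward(X, W1, b1, W2, b2).shape)  # -> (5, 10)
```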
— — — — — — — — —
I hope you’ll find the repository useful, whether by playing with the demos, reading the math sections, or simply exploring the source code. Happy coding!