
Common ML algorithms: Linear Regression, Logistic Regression, K-Nearest Neighbors


Foundational Machine Learning Algorithms in Python

Welcome to the core of machine learning! In this module, we'll explore three fundamental algorithms that form the bedrock of many data science and AI applications: Linear Regression, Logistic Regression, and K-Nearest Neighbors (KNN). Understanding these algorithms will provide a strong foundation for tackling more complex problems.

Linear Regression: Predicting Continuous Values

Linear Regression is used for regression tasks, where the goal is to predict a continuous numerical output. It models the relationship between a dependent variable (the one you want to predict) and one or more independent variables (the features) by fitting a linear equation to the observed data. The simplest form is a straight line: \( y = mx + c \), where \( y \) is the dependent variable, \( x \) is the independent variable, \( m \) is the slope, and \( c \) is the y-intercept.
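
To make the equation concrete, here is a minimal plain-Python sketch; the slope and intercept values are invented for illustration, not fitted from data:

```python
# A minimal sketch of the line y = m*x + c with illustrative values.
m, c = 2.0, 1.0  # slope and y-intercept (invented for this example)

def predict(x):
    """Predict y for a given x using the line equation."""
    return m * x + c

for x in [0, 1, 2, 3]:
    print(f"x = {x} -> predicted y = {predict(x)}")
# x = 0 -> 1.0, x = 1 -> 3.0, x = 2 -> 5.0, x = 3 -> 7.0
```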

Linear regression finds the best-fitting straight line through data points to predict continuous outcomes.

Imagine plotting data points on a graph. Linear regression draws a single straight line that best represents the trend of these points. This line can then be used to estimate the output for new, unseen input values.

The algorithm works by minimizing the difference between the actual observed values and the values predicted by the linear model. This difference is typically measured using a cost function, such as the Mean Squared Error (MSE). The algorithm adjusts the slope (m) and intercept (c), either analytically via ordinary least squares or iteratively via gradient descent, to find the line that yields the lowest MSE. In multiple linear regression, we extend this to include more independent variables, forming a hyperplane in higher dimensions.
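
The following short sketch uses scikit-learn's LinearRegression (the library covered in the resources below) on an invented toy dataset. Note that scikit-learn fits this model analytically by ordinary least squares rather than by iterative updates, but the result is still the line with the lowest MSE:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Toy data (invented): y is roughly 2*x + 1 with a little noise.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])  # one feature per row
y = np.array([3.1, 4.9, 7.2, 9.0, 10.8])

model = LinearRegression()
model.fit(X, y)  # finds the slope and intercept that minimize the MSE

print("slope (m):", model.coef_[0])
print("intercept (c):", model.intercept_)
print("MSE on training data:", mean_squared_error(y, model.predict(X)))

# Estimate the output for a new, unseen input value.
print("prediction for x = 6:", model.predict([[6.0]])[0])
```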

What type of problem does Linear Regression solve?

Regression problems, where the goal is to predict a continuous numerical output.

Logistic Regression: Predicting Probabilities and Classifications

Despite its name, Logistic Regression is primarily used for classification tasks, particularly binary classification (predicting one of two outcomes, e.g., yes/no, spam/not spam). It predicts the probability that a given input belongs to a particular class. It uses a sigmoid (or logistic) function to squash the output of a linear equation into a range between 0 and 1, representing a probability.

Logistic regression uses the sigmoid function to output probabilities for classification.

Unlike linear regression, which can output any real number, logistic regression outputs a probability between 0 and 1. This probability is then used to assign the input to a specific class (e.g., if the probability is greater than 0.5, assign to class 1).

The core of logistic regression is the sigmoid function: \( \sigma(z) = \frac{1}{1 + e^{-z}} \), where \( z \) is the output of a linear combination of input features and their weights. This function maps any real-valued number to a value between 0 and 1. The algorithm learns the weights that best separate the classes by minimizing a cost function called the log-loss (or cross-entropy loss). For multi-class classification, extensions like One-vs-Rest or Softmax regression are used.
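
Here is a minimal sketch using scikit-learn's LogisticRegression on an invented one-feature binary dataset; it shows the probability output and the class assignment that follows from the 0.5 threshold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: small x values belong to class 0, large to class 1.
X = np.array([[1.0], [2.0], [3.0], [6.0], [7.0], [8.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression()
clf.fit(X, y)  # learns the weights by minimizing the log-loss

X_new = np.array([[4.5]])
proba = clf.predict_proba(X_new)  # columns: [P(class 0), P(class 1)]
print("P(class 1):", proba[0, 1])
print("predicted class:", clf.predict(X_new)[0])  # thresholded at 0.5
```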

What is the primary output of Logistic Regression, and what is it used for?

It outputs a probability (between 0 and 1), which is then used for classification tasks.

The sigmoid function, also known as the logistic function, is a mathematical function that maps any real-valued number to a value between 0 and 1. It has an 'S' shape. In logistic regression, the linear combination of input features and weights is passed through this function to produce a probability. The formula is \( \sigma(z) = \frac{1}{1 + e^{-z}} \). This allows the model to estimate the likelihood of an instance belonging to a particular class.
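
A tiny NumPy sketch makes the squashing behavior visible; the sample inputs are arbitrary:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

for z in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    print(f"sigmoid({z:+.1f}) = {sigmoid(z):.4f}")
# Large negative z -> close to 0; z = 0 -> exactly 0.5;
# large positive z -> close to 1.
```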


K-Nearest Neighbors (KNN): Instance-Based Learning

K-Nearest Neighbors (KNN) is a simple, non-parametric, instance-based learning algorithm that can be used for both classification and regression tasks. The core idea is that similar things exist in close proximity. For classification, it assigns a new data point to the class that is most common among its 'k' nearest neighbors in the feature space. For regression, it predicts the average of the target values of its 'k' nearest neighbors.

KNN classifies or predicts based on the majority class or average value of its 'k' nearest neighbors.

To classify a new data point, KNN looks at the 'k' data points in the training set that are closest to it. It then assigns the new point to the class that appears most frequently among those 'k' neighbors. The choice of 'k' is crucial and affects the model's performance.

The 'nearest' neighbors are determined using a distance metric, most commonly the Euclidean distance. The value of 'k' is a hyperparameter that needs to be tuned. A small 'k' can lead to overfitting (sensitive to noise), while a large 'k' can lead to underfitting (oversimplified decision boundary). The algorithm requires no explicit training phase; it simply stores the training data. During prediction, it computes distances to all training points and selects the 'k' closest ones.
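
Because KNN has no explicit training phase beyond storing the data, it is straightforward to sketch from scratch. The following illustrative implementation (function name and toy data invented for demonstration) uses Euclidean distance and a majority vote:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    # Euclidean distance from x_new to every training point.
    distances = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    # Indices of the k smallest distances.
    nearest = np.argsort(distances)[:k]
    # Majority vote among those neighbors' labels.
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D dataset (invented): two loose clusters, one per class.
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],
                    [6.0, 6.0], [6.5, 7.0], [7.0, 6.5]])
y_train = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([2.0, 2.0]), k=3))  # -> 0
print(knn_predict(X_train, y_train, np.array([6.0, 7.0]), k=3))  # -> 1
```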

What is the key concept behind the K-Nearest Neighbors algorithm?

Similar data points tend to be close to each other in the feature space.

| Algorithm | Primary Task | Output | Core Mechanism |
| --- | --- | --- | --- |
| Linear Regression | Regression | Continuous value | Fitting a linear equation to data |
| Logistic Regression | Classification | Probability (0-1) | Sigmoid function on a linear combination |
| K-Nearest Neighbors (KNN) | Classification / Regression | Class label / Average value | Proximity to the 'k' nearest neighbors |

Choosing the right algorithm depends on your problem: predict a number? Use regression. Predict a category or probability? Use classification. KNN offers flexibility for both.

Learning Resources

Scikit-learn Linear Regression Documentation (documentation)

Official documentation for implementing Linear Regression in Python using the popular scikit-learn library, including parameters and usage examples.

Scikit-learn Logistic Regression Documentation (documentation)

Detailed documentation for Logistic Regression in scikit-learn, covering its application in classification and available solver options.

Scikit-learn K-Nearest Neighbors Documentation (documentation)

Comprehensive guide to using KNN for classification with scikit-learn, explaining distance metrics and the 'k' parameter.

Introduction to Linear Regression - StatQuest with Josh Starmer (video)

A highly visual and intuitive explanation of Linear Regression, breaking down the concepts of slope, intercept, and error.

Logistic Regression Explained - StatQuest with Josh Starmer (video)

An excellent video explaining Logistic Regression, including the sigmoid function and its role in classification.

K-Nearest Neighbors (KNN) Explained - StatQuest with Josh Starmer (video)

A clear and concise explanation of the K-Nearest Neighbors algorithm, covering its use in classification and regression.

Understanding the Sigmoid Function (documentation)

TensorFlow's documentation on the sigmoid function, providing its mathematical definition and use in neural networks and machine learning.

Machine Learning for Beginners: An Introduction (blog)

A beginner-friendly blog post that introduces fundamental ML concepts and algorithms, including regression and classification.

A Visual Introduction to Machine Learning (blog)

An interactive and visual exploration of machine learning concepts, including how algorithms like KNN work.

Machine Learning Algorithms: A Quick Overview (tutorial)

A tutorial providing a quick overview of various machine learning algorithms, including brief explanations of regression and KNN.