Lecture

Normalization vs. Standardization: When to Use What?

Normalization rescales values to a fixed 0-1 range so that features measured on different scales contribute comparably to the result. Standardization rescales data to have a mean of 0 and a standard deviation of 1; note that this centers and spreads the data, but it does not change the shape of its distribution.
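As a quick illustration, here is a minimal sketch of both transforms in NumPy (the array values are made up for demonstration):

```python
import numpy as np

data = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# Normalization (min-max scaling): maps values into the 0-1 range
normalized = (data - data.min()) / (data.max() - data.min())
print(normalized)    # [0.   0.25 0.5  0.75 1.  ]

# Standardization (z-score scaling): mean 0, standard deviation 1
standardized = (data - data.mean()) / data.std()
print(standardized)  # [-1.41 -0.71  0.    0.71  1.41]
```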

In this lesson, you'll learn the differences between normalization and standardization and the appropriate use cases for each.


How Do Normalization and Standardization Differ?

Criterion               | Normalization                              | Standardization
------------------------|--------------------------------------------|-------------------------------------------------------------
Transformation Method   | Adjustment to 0-1 range (min-max scaling)  | Adjustment to mean 0, standard deviation 1 (z-score scaling)
Sensitivity to Outliers | Sensitive to outliers                      | Less sensitive to outliers
Use Cases               | Image processing, deep learning            | Statistical analysis, regression, PCA
Applicability           | When the range of values matters           | When data follows a normal distribution

How to Choose Between Normalization and Standardization

Both techniques have their own advantages and should be selected based on the data’s characteristics and the analysis goal.

Here’s how to decide between normalization and standardization in practical scenarios.


If You Are Doing Deep Learning

Normalization is generally more favorable (networks tend to train more stably when inputs lie in the 0-1 range).
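For example, image pixel values have a known fixed range (0 to 255), so min-max normalization reduces to a single division. A minimal sketch, assuming a hypothetical batch of 8-bit grayscale images stored as a NumPy array:

```python
import numpy as np

# Hypothetical batch of 32 grayscale images, 8-bit pixel values in [0, 255]
images = np.random.randint(0, 256, size=(32, 28, 28), dtype=np.uint8)

# With a known fixed range, min-max scaling collapses to dividing by 255
normalized = images.astype(np.float32) / 255.0
print(normalized.min(), normalized.max())  # both now lie within [0.0, 1.0]
```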

If You Are Performing Statistical Analysis

Standardization is suitable (it centers the data at 0 with unit variance, which many statistical methods assume).
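A minimal sketch of this workflow, using scikit-learn's StandardScaler ahead of PCA; the feature matrix is synthetic, chosen so the two columns have very different scales:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic features on very different scales
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0, 1, 100),     # small-scale feature
    rng.normal(0, 1000, 100),  # large-scale feature
])

# Without scaling, PCA's first component would simply follow the
# large-scale column; standardizing puts both features on equal footing
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_scaled)
print(pca.explained_variance_ratio_)  # roughly balanced after scaling
```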

If There Are Many Outliers in the Data

Standardization is preferable (a single extreme value no longer dictates the scale of every other value).
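A small demonstration of why (the numbers are illustrative): one extreme value defines the endpoints of min-max scaling, squeezing everything else near 0, while z-scores keep an interpretable scale that no single point dictates on its own:

```python
import numpy as np

# Mostly small values plus one extreme outlier
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 100.0])

normalized = (data - data.min()) / (data.max() - data.min())
standardized = (data - data.mean()) / data.std()

# Min-max: the outlier claims the value 1.0 and squeezes every
# other point into roughly [0, 0.04]
print(normalized)    # [0.     0.0101 0.0202 0.0303 0.0404 1.    ]

# Z-score: values are distances from the mean in standard deviations,
# so the outlier stands out clearly (z is about 2.2) and the scale is
# computed from all points, not from the two extremes alone
print(standardized)  # [-0.50 -0.47 -0.45 -0.42 -0.39  2.23]
```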

If Maintaining the Range of Data Is Important

Normalization is used (it maps values into a fixed 0-1 range while preserving their relative positions, and the original range can be recovered).
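A brief sketch using scikit-learn's MinMaxScaler (the readings are made up): the transform preserves relative spacing and is invertible, so the original range can be recovered exactly when needed:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Sensor readings with a meaningful physical range (values are made up)
readings = np.array([[12.0], [18.0], [25.0], [31.0], [40.0]])

scaler = MinMaxScaler()                  # defaults to the [0, 1] range
scaled = scaler.fit_transform(readings)  # relative spacing is preserved
restored = scaler.inverse_transform(scaled)

print(scaled.ravel())    # [0.    0.214 0.464 0.679 1.   ]
print(restored.ravel())  # [12. 18. 25. 31. 40.] -- original range recovered
```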


In the next lesson, we'll explore categorical data encoding.

Quiz

Which word best completes the sentence?

When a dataset contains many outliers, ___ is generally more appropriate.
Normalization
Standardization
Outlier removal
Data augmentation
