Normalization vs. Standardization: When to Use What?

Normalization is a method of rescaling values to the 0-1 range, ensuring that no single feature disproportionately influences the result because of its scale. Standardization rescales data to have a mean of 0 and a standard deviation of 1; it centers and rescales the data but does not change the shape of its distribution.
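
As a concrete reference, here is a minimal sketch of both transforms written directly in NumPy (the sample array is made up for illustration):

```python
import numpy as np

data = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# Normalization (min-max scaling): x' = (x - min) / (max - min)
normalized = (data - data.min()) / (data.max() - data.min())
print(normalized)    # [0.   0.25 0.5  0.75 1.  ]

# Standardization (z-score scaling): z = (x - mean) / std
standardized = (data - data.mean()) / data.std()
print(standardized)  # ≈ [-1.41 -0.71  0.    0.71  1.41]
```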

In this lesson, you'll learn the differences between normalization and standardization and the appropriate use cases for each.


How Do Normalization and Standardization Differ?

| Criterion | Normalization | Standardization |
| --- | --- | --- |
| Transformation method | Rescale to the 0-1 range (min-max scaling) | Rescale to mean 0, standard deviation 1 (z-score scaling) |
| Sensitivity to outliers | Sensitive to outliers | Less sensitive to outliers |
| Use cases | Image processing, deep learning | Statistical analysis, regression, PCA |
| Applicability | When the range of values matters | When data follows a normal distribution |
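
In practice you would typically use a library rather than hand-rolling the formulas. A minimal sketch, assuming scikit-learn is available (`MinMaxScaler` and `StandardScaler` are its implementations of the two methods):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# scikit-learn scalers expect a 2D array of shape (n_samples, n_features)
X = np.array([[10.0], [20.0], [30.0], [40.0], [50.0]])

print(MinMaxScaler().fit_transform(X).ravel())    # [0.   0.25 0.5  0.75 1.  ]
print(StandardScaler().fit_transform(X).ravel())  # ≈ [-1.41 -0.71  0.    0.71  1.41]
```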

How to Choose Between Normalization and Standardization in Practice

Normalization and standardization each have their own advantages; the right choice depends on the characteristics of the data and the purpose of the analysis.

Here’s how to decide between normalization and standardization in practical scenarios.


If You Are Doing Deep Learning

Normalization is generally preferable (inputs bounded to the 0-1 range tend to make training more stable)
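
For example, 8-bit image pixels (0-255) are routinely rescaled to 0-1 before being fed to a network. A minimal sketch with a made-up 2x2 grayscale "image":

```python
import numpy as np

# Made-up 2x2 grayscale image with 8-bit pixel values (0-255)
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Divide by the maximum possible value to map pixels into the 0-1 range
normalized = image.astype(np.float32) / 255.0
print(normalized)  # ≈ [[0.    0.251] [0.502 1.   ]]
```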

If You Are Performing Statistical Analysis

Standardization is suitable (it centers the data around a mean of 0 with unit variance, which many statistical methods assume)
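
A common pattern is standardizing features before a scale-sensitive method such as PCA. The sketch below assumes scikit-learn and uses a made-up two-feature dataset whose scales differ wildly:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Made-up data: two features on very different scales (e.g., age vs. income)
X = np.array([[25, 40_000], [32, 52_000], [47, 61_000], [51, 75_000]], dtype=float)

# Standardize first so the large-scale feature does not dominate the components
X_scaled = StandardScaler().fit_transform(X)
components = PCA(n_components=1).fit_transform(X_scaled)
print(components)
```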

If There Are Many Outliers in the Data

Standardization is preferable (extreme values distort the result less than they do with min-max scaling, though they still affect the mean and standard deviation)
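
The difference is easy to see on a made-up sample with one extreme value: min-max scaling squashes the ordinary values toward 0, while z-scores keep them spread out:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # 100 is an outlier

# Min-max scaling crushes the ordinary values into a narrow band near 0
min_max = (data - data.min()) / (data.max() - data.min())
print(min_max)  # ≈ [0.     0.0101 0.0202 0.0303 1.    ]

# Z-scores leave the ordinary values clearly distinguishable
z_score = (data - data.mean()) / data.std()
print(z_score)  # ≈ [-0.54 -0.51 -0.49 -0.46  2.  ]
```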

If Maintaining the Range of Data Is Important

Normalization is used (it maps all values into a fixed range while preserving their relative spacing)
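
When a downstream step expects inputs in a fixed interval, min-max scaling guarantees it. A small sketch, assuming scikit-learn: `MinMaxScaler` defaults to 0-1, and its `feature_range` parameter lets you choose another interval:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[3.0], [6.0], [9.0]])

print(MinMaxScaler().fit_transform(X).ravel())                       # [0.  0.5 1. ]
print(MinMaxScaler(feature_range=(-1, 1)).fit_transform(X).ravel())  # [-1.  0.  1.]
```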


In the next lesson, we'll explore categorical data encoding.

Mission

Which of the following best fills in the blank?

In general, if the data contains many outliers, ______ is more suitable.
Normalization
Standardization
Outlier removal
Data augmentation
