Variance stabilizing transformations in machine learning

You’ve probably heard that, before training machine learning models, data scientists often transform variables so that their distribution more closely resembles the normal distribution.
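
As a quick illustration, here is a minimal sketch in Python using scikit-learn’s PowerTransformer; the simulated exponential data and the Yeo-Johnson method are just one common choice among many possible transformations:

```python
import numpy as np
from scipy.stats import skew
from sklearn.preprocessing import PowerTransformer

# Simulated right-skewed feature (exponential); real features will vary.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=(1000, 1))

# Yeo-Johnson power transform: estimates a parameter that makes the
# transformed variable as close to Gaussian as possible.
pt = PowerTransformer(method="yeo-johnson")
x_t = pt.fit_transform(x)

print("skewness before:", skew(x.ravel()))    # strongly right-skewed
print("skewness after: ", skew(x_t.ravel()))  # close to 0
```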

But why do we do this? Which variables should we transform? Which transformations should we use? And do all machine learning algorithms actually require transformed variables?
