Variance stabilizing transformations in machine learning
You’ve probably heard that before training machine learning models, data scientists transform random variables to change their distribution into something closer to the normal distribution.
But why do we do this? Which variables should we transform? Which transformations should we use? And is transforming variables actually necessary for every machine learning algorithm?
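Before answering those questions, here is a quick illustration of the idea. This is a minimal sketch using simulated right-skewed data and the Box-Cox transform from SciPy, one common variance stabilizing transformation; the variable names and the choice of a log-normal sample are illustrative assumptions, not part of any particular dataset:

```python
import numpy as np
from scipy import stats

# Simulated right-skewed variable (log-normal draws), purely for illustration
rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

# Box-Cox searches for the power transform that makes the data most normal-like;
# it requires strictly positive input and returns the transformed data and the
# fitted lambda parameter
x_bc, lam = stats.boxcox(x)

print(f"skew before: {stats.skew(x):.2f}")    # strongly right-skewed
print(f"skew after:  {stats.skew(x_bc):.2f}") # close to zero
print(f"fitted lambda: {lam:.2f}")            # near 0, i.e. close to a log transform
```

After the transform, the sample's skewness drops to roughly zero, which is exactly the "closer to normal" effect described above.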