That's an incredibly good visualization! Both beautiful and clear. Next time I'm explaining machine learning to somebody who's smart but not a computer scientist or statistician, this is where I'll start.
Do you think a future tutorial could demystify deep learning and neural networks? Many people are confused by the backpropagation algorithm, but the way it works is simple and intuitive if you can get beyond the mathematical notation. The way that errors propagate backwards through the neural network is not all that dissimilar from the data flow in your "Growing a tree" section.
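To make that intuition concrete, here is a minimal sketch of backpropagation on a tiny two-layer network, written from scratch in NumPy. Everything in it (the XOR toy data, layer width, learning rate, sigmoid activations) is an illustrative choice, not anyone's production setup; the point is just that the backward pass mirrors the forward pass in reverse, scaling the error by each layer's local derivative.

```python
import numpy as np

# Toy problem: learn XOR with a 2 -> 8 -> 1 network.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(2000):
    # Forward pass: data flows input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: the error flows output -> hidden,
    # each layer applying its local derivative (chain rule).
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent step on every parameter.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

Notice there is no magic: `d_out` and `d_h` are just the same quantities that flow forward, differentiated and pushed back layer by layer.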
Andrej Karpathy has a great library that trains neural networks directly in the browser:
cs.stanford.edu/people/karpathy/convnetjs/
Really nice visual! It'd be cool to see similar illustrations for areas of ML that go beyond separating classes based on a set of features and instead work with your data in more interesting ways (e.g., CNNs learning features, semi-supervised methods for training with unlabelled data, and subspace techniques like CCA that operate on a projected version of your data).
Thanks! Yeah absolutely. We are working our way up to those. Our tentative plan is to tackle bias-variance trade-off next, then random forests, then neural networks.
The decision tree model was generated with scikit-learn (http://scikit-learn.org) in Python. The JS is a complete mess, but I'll try to write up how it works in the next couple of days.
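For anyone curious, fitting a tree like that in scikit-learn takes only a few lines. This is a hedged sketch, not the authors' actual pipeline: the Iris dataset and the depth limit are stand-ins for whatever data and settings they actually used.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a small decision tree on a toy dataset (Iris as a stand-in).
X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Dump the learned splits as plain text, which is roughly the
# structure a visualization would walk to draw the tree.
print(export_text(clf))
```

The exported text shows each split's feature and threshold, which maps directly onto the node-by-node flow in the "Growing a tree" animation.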