#DeepLearning and #NeuralNets
Posted on May 16th, 2016
@Qplum, 185 Hudson Street, Suite 1620 Plaza 5, Jersey City, NJ
Raghavendra Boomaraju of Columbia described the math behind neural networks and how backpropagation is used to fit models.
Observations on deep learning include:
- The universal approximation theorem says you can fit any function with a single hidden layer, provided that layer has enough units. In practice, though, multiple hidden layers work better: the more layers you use, the fewer units each layer needs to fit the data.
- To optimize the weights, back-propagate the gradient of the loss function. The activation functions g() themselves do not need to be optimized, since they are designed to have a very general shape (such as the logistic).
- Traditionally, fitting has been done either by computing each update over the full training set (deterministic, or batch, gradient descent) or by updating on one training sample at a time (stochastic gradient descent). More recently, researchers update on small subsets of the training set (minibatches).
- Convolution operators are used to standardize inputs across size and orientation, e.g. by rotating and scaling.
- The inner workings of a neural network can be visualized – see http://colah.github.io/
- The gradient-descent step used to optimize the weights needs to be tuned so it is neither too aggressive nor too slow. Adaptive learning-rate methods such as the Adam algorithm determine the step size automatically.
- Deep learning libraries include Theano, Caffe, Google's TensorFlow, and Torch.
- To do unsupervised deep learning, feed the inputs through a series of layers that at some point narrow to fewer units than the number of inputs, then expand again so the output layer matches the input layer in size. Train the net so the outputs reproduce the inputs (an autoencoder). The layer with the fewest units (the bottleneck) then describes the features of the data set.
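The backpropagation idea from the talk can be sketched in a few lines of numpy: a one-hidden-layer network with a logistic activation, where the loss gradient is propagated backward through each layer to update the weights. The layer sizes, learning rate, and toy target function below are illustrative assumptions, not anything specific from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.uniform(-1, 1, size=(200, 2))                # 200 samples, 2 inputs
y = (np.sin(X[:, 0]) + X[:, 1] ** 2).reshape(-1, 1)  # toy target function

W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)      # hidden layer (logistic activation)
    out = h @ W2 + b2             # linear output layer
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    # backward pass: propagate the loss gradient through each layer
    g_out = 2 * err / len(X)                  # dL/d(out) for mean squared error
    g_W2 = h.T @ g_out;  g_b2 = g_out.sum(0)
    g_h = g_out @ W2.T
    g_z1 = g_h * h * (1 - h)                  # derivative of the logistic
    g_W1 = X.T @ g_z1;   g_b1 = g_z1.sum(0)

    # gradient-descent step on the weights
    W2 -= lr * g_W2;  b2 -= lr * g_b2
    W1 -= lr * g_W1;  b1 -= lr * g_b1
```

Note that only the weights are updated; the logistic g() is fixed throughout, as the talk describes.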
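The three update schemes mentioned above differ only in how many training samples feed each gradient step. A sketch on a linear model, where the dataset size, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                      # noiseless linear data

def fit(batch_size, lr=0.1, epochs=100):
    """Gradient descent on squared error, batch_size samples per update."""
    w = np.zeros(3)
    for _ in range(epochs):
        order = rng.permutation(len(X))        # reshuffle each epoch
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

w_batch = fit(batch_size=len(X))   # deterministic: full set per update
w_stoch = fit(batch_size=1)        # stochastic: one sample per update
w_mini  = fit(batch_size=32)       # minibatch: a subset per update
```

All three recover the true weights here; minibatches are popular because they average out the noise of single-sample updates while remaining much cheaper per step than full-batch descent.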
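The Adam algorithm mentioned above adapts the step size per parameter from running estimates of the gradient's first and second moments. A minimal sketch, using the commonly cited default hyperparameters and a one-dimensional toy objective of my own choosing:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based step count."""
    m = b1 * m + (1 - b1) * grad           # running mean of gradients
    v = b2 * v + (1 - b2) * grad ** 2      # running mean of squared gradients
    m_hat = m / (1 - b1 ** t)              # bias-corrected estimates
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter step size
    return w, m, v

# minimize f(w) = (w - 3)^2 starting from w = 0
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 5001):
    grad = 2 * (w - 3)
    w, m, v = adam_step(w, grad, m, v, t)
```

Dividing by the square root of the second-moment estimate shrinks the step where gradients are large or noisy and enlarges it where they are small, which is what makes the step size neither too aggressive nor too slow.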
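The unsupervised scheme in the last bullet can be sketched as a linear autoencoder: a 2-unit bottleneck trained so the 5-dimensional output reproduces the 5-dimensional input. The dimensions, data, and learning rate are illustrative assumptions; real autoencoders would use nonlinear layers.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy data that actually lives on a 2-dimensional subspace of R^5
Z = rng.normal(size=(300, 2))
X = Z @ rng.normal(size=(2, 5))

W_enc = rng.normal(0, 0.1, size=(5, 2))  # encoder: 5 inputs -> 2-unit bottleneck
W_dec = rng.normal(0, 0.1, size=(2, 5))  # decoder: expand back to 5 outputs

lr = 0.01
for _ in range(5000):
    code = X @ W_enc              # bottleneck features
    X_hat = code @ W_dec          # reconstruction of the input
    err = X_hat - X               # optimize so outputs match inputs
    g_dec = code.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

recon_error = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

Because the bottleneck has only two units while the input has five, the trained `code` is forced to capture the two features that actually generate this data set.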