Interactive and commodity #MachineLearning
Posted on April 18th, 2016
04/18/2016 @Rise, 43 W 23rd St, NY
Daniel Hsu @ Columbia University and Andreas Mueller @ NYU presented current research in machine learning.
Daniel Hsu distinguished non-interactive learning (supervised machine learning, in which inputs and output labels are presented to a program to learn a prediction function) from #InteractiveMachineLearning, in which
- Learning agent interacts with the world
- Learning agent has some objective in mind
- Data available to learner depends on learner’s decisions
- State of the world depends on learner’s decisions (optional, depending on the problem)
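The interaction loop above can be sketched as a toy multi-armed bandit, one of the simplest interactive-learning settings: the data the learner sees (rewards) depends on which arm it chooses, and its objective is cumulative reward. The arm means and the epsilon-greedy rule below are illustrative assumptions, not from the talk.

```python
import random

random.seed(0)

# Toy multi-armed bandit: the rewards the learner observes depend on
# which arm it pulls, and its objective is cumulative reward.
true_means = [0.3, 0.5, 0.8]            # unknown to the learner
counts = [0, 0, 0]                      # pulls per arm
estimates = [0.0, 0.0, 0.0]             # running mean reward per arm
total_reward = 0.0

for t in range(1000):
    if random.random() < 0.1:                        # explore: random arm
        arm = random.randrange(3)
    else:                                            # exploit: best estimate
        arm = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    total_reward += reward

print("pulls per arm:", counts)
```

The 10% exploration rate keeps every arm sampled, so the estimates stay consistent even while the learner mostly exploits the arm that currently looks best.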
Interactive learning is used when output labels are expensive to obtain, so only some labels are queried. The key is balancing the algorithm’s ability to exploit its current knowledge against its need to explore the space (statistically consistent active learning). The optimal method assigns each input a probability that its output label will be queried.
Daniel talked about algorithms that heuristically assign these query probabilities and use inverse propensity weights to overcome the resulting sampling bias.
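A minimal sketch of the idea: query each label with some probability, then weight the queried examples by the inverse of that probability so the labeled sample remains unbiased. The margin-based heuristic and the toy perceptron-style model below are my own illustrative assumptions, not the specific algorithms from the talk.

```python
import random

random.seed(0)

def query_probability(margin, p_min=0.1):
    # Hypothetical heuristic: query with high probability when the model
    # is uncertain (small margin); the floor p_min keeps the inverse
    # propensity weights 1/p bounded.
    return max(p_min, min(1.0, 1.0 - abs(margin)))

# Toy stream of (feature x, true label y); the model predicts sign(w * x).
stream = [(random.uniform(-1, 1), 1 if random.random() < 0.7 else -1)
          for _ in range(200)]

w = 0.0            # single weight of a toy linear model
labels_used = 0

for x, y in stream:
    margin = w * x                       # signed confidence of current model
    p = query_probability(margin)
    if random.random() < p:              # query this label with probability p
        labels_used += 1
        weight = 1.0 / p                 # inverse propensity weight
        if margin * y <= 0:              # mistake: weighted perceptron update
            w += weight * y * x

print("labels queried:", labels_used, "of", len(stream))
```

Because each queried example is reweighted by 1/p, the learner’s updates remain statistically consistent even though it deliberately skips labels it expects to be uninformative.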
Next, Andreas Mueller (who is part of the #scikit-learn team) talked about using Bayesian optimization to find the best hyperparameter settings for a given model: a variety of parameter sets are evaluated, and meta-learning over those results guides the search toward a global optimum via a machine learning algorithm.
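The core loop can be sketched as sequential model-based search: evaluate some parameter settings, then use the history of results to propose the next candidate. This toy version alternates exploiting the best known region with random exploration; real Bayesian optimization replaces that heuristic with a surrogate model (e.g. a Gaussian process) and an acquisition function, and the quadratic objective here stands in for actually training and validating a model.

```python
import random

random.seed(0)

def validation_score(c):
    # Stand-in for fitting a model with hyperparameter C and scoring it
    # on validation data; here the score simply peaks at C = 1.0.
    return -(c - 1.0) ** 2

# Start with a few random evaluations.
history = [(c, validation_score(c))
           for c in (random.uniform(0, 3) for _ in range(3))]

for step in range(30):
    best_c, best_s = max(history, key=lambda t: t[1])
    if step % 2 == 0:
        candidate = best_c + random.gauss(0, 0.2)    # exploit near the best
    else:
        candidate = random.uniform(0, 3)             # explore globally
    history.append((candidate, validation_score(candidate)))

best_c, best_s = max(history, key=lambda t: t[1])
print("best C found:", round(best_c, 2))
```

The point of the surrogate model in true Bayesian optimization is to make the "propose the next candidate" step data-efficient, which matters when each evaluation means training a full model.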