New York Tech Journal
Tech news from the Big Apple

NYAI#7: #DataScience to Operationalize #ML (Matthew Russell) & Computational #Creativity (Dr. Cole)

Posted on November 22nd, 2016

#NYAI

11/22/2016 @Rise, 43 West 23rd Street, NY, 2nd floor


Speaker 1: Using Data Science to Operationalize Machine Learning – (Matthew Russell, CTO at Digital Reasoning)

Speaker 2: Top-down vs. Bottom-up Computational Creativity  – (Dr. Cole D. Ingraham DMA, Lead Developer at Amper Music, Inc.)

Matthew Russell @DigitalReasoning spoke about understanding language using NLP, relationships among entities, and temporal relationships. For human language understanding, he views technologies such as knowledge graphs and document analysis as becoming commoditized. The only way to get an advantage is to improve the efficiency of using ML: the KPI for data analysis is the number of experiments (tests of a hypothesis) that can be run per unit time. The key is to use tools such as:

  1. Vagrant – allows reproducible environment setup
  2. Jupyter Notebook – like a lab notebook
  3. Git – version control
  4. Automation – to make experiments repeatable end to end

He wants highly repeatable experiments; the goal is to increase the number of experiments that can be conducted per unit time.
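As an illustration of the kind of automation he has in mind, here is a minimal, hypothetical Python sketch of an experiment harness that records the exact code version and parameters of every run, so any experiment can be reproduced later (the train_and_score callable is a stand-in for whatever hypothesis is being tested):

```python
# Hypothetical sketch: log each experiment with the exact code version
# (git commit) and parameters so any run can be reproduced later.
import json
import subprocess
import time

def run_experiment(params, train_and_score):
    commit = subprocess.check_output(
        ["git", "rev-parse", "HEAD"]).decode().strip()
    start = time.time()
    score = train_and_score(**params)          # the hypothesis under test
    record = {"commit": commit, "params": params,
              "score": score, "seconds": time.time() - start}
    with open("experiments.jsonl", "a") as f:  # append-only lab notebook
        f.write(json.dumps(record) + "\n")
    return record
```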

He then talked about using machines to read medical reports and determine the issues. Negatives can be extracted, but issues are harder to find. The system uses an ontology to classify entities.

He talked about experiments on models using ontologies. Whether a fixed ontology suffices depends on the content: the ontology of terms for anti-terrorism evolves over time and needs to be experimentally adjusted, while a medical ontology is probably the most static.

In the second presentation, Cole D. Ingraham @Ampermusic talked about top-down vs. bottom-up creativity in the composition of music. Music differs from other audio forms in that it has a great deal of very large-scale structure in addition to small-scale structure. ML does well at generating good audio on a small time frame, but Cole thinks it is better to apply theories from music to create the larger whole. This is a combination of:

Top-down: novel & useful, rejects previous ideas – code driven, “hands on”, you define the structure

Bottom-up: data driven, “hands off”, you learn the structure

He then talked about music composition at the intersection of generation and analysis (of already-composed music) – one can be done without the other, or one before the other.

To successfully generate new and interesting music, one needs to generate variance. Composing music using a purely probabilistic approach is problematic as there is a lack of structure. He likes an approach similar to replacing words with synonyms, which does not fundamentally change the meaning of the sentence but still makes it different and interesting.

It’s better to work on deterministically defined variance than it is to weed out undesired results from nondeterministic code.
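To make the synonym analogy concrete, here is a hypothetical Python sketch (not Amper's code) that varies a melody deterministically: notes are swapped for harmonically acceptable substitutes from a fixed table, and a fixed seed makes the variance reproducible:

```python
# Illustrative sketch: vary a melody deterministically by substituting
# notes with harmonically similar ones, the way a sentence can be varied
# by swapping words for synonyms. The substitution table is invented.
import random

SUBSTITUTES = {
    "C4": ["E4", "G4"],  # stay within the C-major triad
    "E4": ["C4", "G4"],
    "G4": ["C4", "E4"],
}

def vary(melody, seed=0, rate=0.3):
    rng = random.Random(seed)    # fixed seed => deterministic variance
    out = []
    for note in melody:
        if note in SUBSTITUTES and rng.random() < rate:
            out.append(rng.choice(SUBSTITUTES[note]))
        else:
            out.append(note)
    return out

print(vary(["C4", "E4", "G4", "C4"], seed=42))
```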

As an example he talked about WaveNet (a Google DeepMind project), which takes raw audio as input and produces raw audio as output. This approach works well for improving speech synthesis, but less well for music generation as there is no large-scale structural awareness.

Cole then talked about Amper, a web site that lets users create music with no experience required: fast, believable, collaborative.

They like a mix of top-down and bottom-up approaches:

  1. Want speed, but neural nets are slow
  2. Music has a lot of theory behind it, so it’s best to let the programmers code these rules
  3. Can change different levels of the hierarchical structure within music: style, mood, can also adjust specific bars

The runtime is written in Haskell – a functional language, so it’s great for music.


NYAI#5: Neural Nets (Jason Yosinski) & #ML For Production (Ken Sanford)

Posted on August 24th, 2016

#NYAI, New York #ArtificialIntelligence

08/24/2016 @Rise, 43 West 23rd Street, NY, 2nd floor


Jason Yosinski @GeometricTechnology spoke about his work on #NeuralNets to generate pictures. He started by talking about machine learning with feedback to train a robot to move more quickly, and about using feedback to computer-generate pictures that are appealing to humans.

Jason next talked about AlexNet, based on work by Krizhevsky et al. 2012, which classifies images using a neural net with 5 convolutional layers (interleaved with max pooling and contrast layers) plus 3 fully connected layers at the end. The net, with 60 million parameters, was trained on ImageNet, which contains over 1 million images. His image classification code is available at http://Yosinski.com.

Jason talked about how the classifier thinks about categories when it is not being trained to identify that category. For instance, the network may learn about faces even though there is no human category since it helps the system detect things such as hats (above a face) to give it context. It also identifies text to give it context on other shapes it is trying to identify.

He next talked about generating images by inputting random noise and randomly changing pixels. Some changes will cause the goal class (such as ‘lion’) to increase in confidence. Over many random moves, the goal class increases in its confidence level. Jason showed many random images that elicited high levels of confidence, but the images often looked like purple-green slime. This is probably because the network, while learning, immediately discards the overall color of the image and is therefore insensitive to aberrations from normal colors. (See Erhan et al. 2009)
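A minimal sketch of this random-search procedure, assuming a hypothetical predict_confidence function standing in for the trained network:

```python
# Hedged sketch: start from noise and keep only the pixel perturbations
# that raise the classifier's confidence in the target class ('lion').
import numpy as np

def hill_climb(predict_confidence, shape=(224, 224, 3), steps=10000):
    img = np.random.rand(*shape)                    # start from random noise
    best = predict_confidence(img)
    for _ in range(steps):
        candidate = img.copy()
        y, x = np.random.randint(shape[0]), np.random.randint(shape[1])
        candidate[y, x] = np.random.rand(shape[2])  # mutate one pixel
        score = predict_confidence(candidate)
        if score > best:                            # keep only improvements
            img, best = candidate, score
    return img, best
```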

[This also raises the question of how computer vision is different from human vision. If presented with a blue colored lion, the first reaction of a human might be to note how the color mismatches objects in the ‘lion’ category. One experiment would be to present the computer model with the picture of a blue lion and see how it is classified. Unlike computers, humans encode information beyond their list of items they have learned and this encoding includes extraneous information such as color or location. Maybe the difference is that humans incorporate a semantic layer that considers not only the category of the items, but other characteristics that define ‘lion-ness’.  Color may be more central to human image processing as it has been conjectured that we have color vision so we can distinguish between ripe and rotten fruits. Our vision also taps into our expectation to see certain objects within the world and we are primed to see those objects in specific contexts, so we have contextual information beyond what is available to the computer when classifying images.]

To improve the generated pictures of ‘lions’, he next used a generator to create pictures and change them until he got a picture with a high confidence of being a ‘lion’. The generator is designed to create identifiable images, and can even produce pictures of objects that it has not been trained to paint. (Regularization needs to be applied to get better pictures of the target.)

Slides at http://s.yosinski.com/nyai.pdf

In the second talk, Ken Sanford @Ekenomics and H2O.ai talked about the H2O open source project. H2O is a machine learning engine that can run in R, Python, Java, etc.

Ken emphasized how H2O (a multilayer feed-forward neural network) provides a platform that uses the Java Score Code engine. This eases the transition from the model developed in training to the model used to score inputs in a production environment.
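For reference, a minimal sketch of this workflow through H2O's Python API (the file name and column layout are placeholders); download_pojo exports the trained model as Java scoring code:

```python
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()
frame = h2o.import_file("train.csv")        # hypothetical data set
model = H2ODeepLearningEstimator(hidden=[64, 64], epochs=10)
model.train(x=frame.columns[:-1], y=frame.columns[-1],
            training_frame=frame)

# Export the trained model as Java scoring code for production use.
h2o.download_pojo(model, path=".")
```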

He also talked about the Deep Water project which aims to allow other open source tools, such as MXNET, Caffe, Tensorflow,… (CNN, RNN, … models) to run in the H2O environment.


#Unsupervised Learning (Soumith Chintala) & #Music Through #ML (Brian McFee)

Posted on July 26th, 2016

#NYAI

07/25/2016 @Rise, 28 West 24th Street, NY, 2nd floor


Two speakers spoke about machine learning.

In the first presentation, Brian McFee @NYU spoke about using ML to understand the patterns of beats in music. He builds a graph of beats, characterizing each beat by its Mel-frequency cepstral coefficients (#MFCCs).

A random-walk formulation combines two representations of the points in the graph:

  1. Local: each point in the graph is a beat, and edges connect adjacent beats. Edges are weighted by MFCC similarity.
  2. Repetition: link k-nearest neighbors by repetition (the same sound), weighting edges by similarity (k is set to the square root of the number of beats).
  3. Combination: A = mu * local + (1 − mu) * repetition; optimize mu for a balanced random walk, so the probability of a local move matches the probability of a repetition move over all vertices. A least-squares optimization finds mu so the two parts of the equation make equal contributions across all points to the value of A.

The points are then partitioned by spectral clustering: form the normalized Laplacian and take its bottom eigenvectors, which encode component membership for each beat; clustering the eigenvectors Y of L reveals the structure. This gives a hierarchical decomposition of the time series: m = 1 is the entire song, m = 2 gives the two components of the song, and as more eigenvectors are added, the number of segments within the song increases.
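A hedged sketch of the pipeline using the librosa and scikit-learn libraries (here mu is fixed rather than optimized, the file name is a placeholder, and details differ from the published method):

```python
import numpy as np
import librosa
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans

y, sr = librosa.load("song.wav")
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
mfcc = librosa.util.sync(librosa.feature.mfcc(y=y, sr=sr), beats)

n = mfcc.shape[1]
k = int(np.sqrt(n))                  # k = sqrt(number of beats)

# Repetition links: k-nearest neighbors by MFCC similarity.
R = librosa.segment.recurrence_matrix(mfcc, k=k, sym=True).astype(float)

# Local links: each beat connects to its adjacent beats.
local = np.eye(n, k=1) + np.eye(n, k=-1)

mu = 0.5                             # balance parameter (fixed here)
A = mu * local + (1 - mu) * R

# Spectral clustering: bottom eigenvectors of the normalized Laplacian
# encode component membership; clustering them reveals the segments.
L = laplacian(A, normed=True)
evals, evecs = np.linalg.eigh(L)
m = 4                                # number of segments requested
labels = KMeans(n_clusters=m, n_init=10).fit_predict(evecs[:, :m])
```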

Brian then showed how this segmentation can create compelling visualizations of the structure of a song.

The Python code used for this analysis is available in the msaf library.

He has worked on convolutional neural nets, but finds them better at handling individual notes within a song (by contrast, rhythm extends over a longer time period).

In the second presentation, Soumith Chintala talked about #GenerativeAdversarialNetworks (GAN).

Generative networks consist of a #NeuralNet “generator” that produces an image, taking as input a high-dimensional vector (100 dimensions) of random noise. In a Generative Adversarial Network, the generator creates an image which is optimized over a loss function that evaluates “does it look real?”. The decision of whether the image looks real is made by a second neural net, the “discriminator”, which tries to pick the fake image out of a set of real images plus the output of the generator.

Both the generator and discriminator networks are trained by gradient descent to optimize their individual performance: generator = max game; discriminator = min game. The process optimizes the Jensen–Shannon divergence.
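A minimal GAN training loop, sketched in PyTorch from the description above (a toy 2-D data distribution stands in for images; this is illustrative, not Soumith's code):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                  nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 2) + 3.0        # "real" samples (toy data)
    fake = G(torch.randn(32, 100))         # generator maps 100-dim noise

    # Discriminator: learn to pick the fakes out from real samples.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make fakes that the discriminator calls real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```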

Soumith then talked about extensions to GANs. These include:

Class-conditional GANs – take noise + the class of samples as input to the generator.

Video-prediction GANs – predict what happens next given the previous 2 or 3 frames. An MSE loss (in addition to the discriminator classification loss) compares what happened to what was predicted.

Deep Convolutional GAN – tries to make the learning more stable by using a CNN.

Text-conditional GAN – input = noise + text. An LSTM model is run on the text input to generate images.

Disentangled representations – InfoGAN – input = random noise + categorical variables.

GANs are still unstable, especially for larger images, so work to improve them includes:

  1. Feature matching – match groups of features instead of just the whole image.
  2. Minibatch learning

No one has yet used GANs successfully for text-in to text-out.

The meeting concluded with a teaser for Watchroom, a crowd-funded movie on AI and VR.


Automatically scalable #Python & #Neuroscience as it relates to #MachineLearning

Posted on June 28th, 2016

#NYAI: New York Artificial Intelligence

06/28/2016 @Rise, 43 West 23rd Street, NY, 2nd floor


Braxton McKee (@braxtonmckee) @Ufora first spoke about the challenges of creating a version of Python (#Pyfora) that naturally scales to take advantage of the hardware and handle parallelism as the problem grows.

Braxton presented an example in which we compute the minimum distance from target points to a larger universe of points based on their Cartesian coordinates. This is easily written for small problems, but the computation needs to be optimized when computing this value across many CPUs.
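The naive version of this computation is indeed easy to write; a plain NumPy sketch (illustrative, not Ufora's code):

```python
# Minimum distance from each target point to a universe of points.
import numpy as np

def min_distances(targets, universe):
    # targets: (m, d) array; universe: (n, d) array of coordinates.
    diffs = targets[:, None, :] - universe[None, :, :]   # (m, n, d)
    dists = np.sqrt((diffs ** 2).sum(axis=-1))           # (m, n)
    return dists.min(axis=1)                             # (m,)

targets = np.random.rand(10, 3)
universe = np.random.rand(100000, 3)
print(min_distances(targets, universe))
```

Parallelizing this across machines, and rebalancing as the ratio of targets to universe points changes, is the part Pyfora automates.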

However, the allocation across CPUs depends on the number of targets relative to the size of the point universe. Instead of trying to solve this analytically, they use a #DynamicRebalancing strategy that splits the task and adds resources to the subtasks that create bottlenecks.

This approach solves many resource-allocation problems, but still faces challenges:

  1. Nested parallelism: they look for parallelism within the code, look for bottlenecks at the top level of parallelism, and split the task into subtasks at that level, …
  2. Data that do not fit in memory: they break tasks into smaller tasks. Each task also knows which other caches hold data, so the data can be accessed directly without going to slower main memory.
  3. Different types of architectures (such as GPUs) require different types of optimization.
  4. The optimizer cannot look inside Python packages, so it cannot optimize a bottleneck within a package.

Pyfora

  1. is a just-in-time compiler that moves stack frames from machine to machine and senses how to take advantage of parallelism
  2. tracks what data a thread is using
  3. dynamically schedules threads and data
  4. takes advantage of immutability, which allows the compiler to assume that functions do not change over time, so it can look inside a function when optimizing execution
  5. is written on top of another language, which allows for the possibility of porting the method to other languages

In the second presentation, Jeremy Freeman @Janelia.org spoke about the relationship between neuroscience research and machine learning models. He first talked about the early work on understanding the function of the visual cortex.

Findings by Hubel & Wiesel in 1959 set the foundation for visual processing models for the past 40 years. They found that individual neurons in the V1 area of the visual cortex respond to the orientation of lines in the visual field. These inputs feed neurons that detect more complex features, such as edges, moving lines, etc.

Others also considered systems which have higher-level recognition and how to train such a system. These include:

Perceptrons by Rosenblatt, 1957

Neocognitrons by Fukushima, 1980

Hierarchical learning machines by LeCun, 1985

Back propagation by Rumelhart, 1986

His doctoral research looked at the activity of neurons in the V2 area. They found they could generate high-order patterns that some neurons discriminate among.

But in 2012 there was a jump in the performance of neural nets (from the University of Toronto).

By 2014, some neural network algos performed better than humans and primates, especially in the area of image processing. This has led to many advances, such as Google DeepDream, which combines images and texture to create an artistic hybrid image.

Recent scientific research allows one to look at thousands of neurons simultaneously. He also talked about some of his current research, which uses “tactile virtual reality” to examine neural activity as a mouse explores a maze (the mouse walks on a ball that senses its steps as it learns the maze).

Jeremy also spoke about model-free episodic control for complex sequential tasks requiring memory and learning. ML research has created models such as LSTMs and Neural Turing Machines, which retain state representations. Graham Taylor has looked at neural feedback modulation using gates.

He also notes that there are similar functionalities between the V1 area in the visual cortex, the A1 auditory area, and the S1 tactile area.

To find out more, he suggested visiting his github site, Freeman-lab, and looking at the web site neurofinder.codeneuro.org.


#TensorFlow and Cloud Machine Learning

Posted on June 7th, 2016

#GDGNewark

06/06/2016 @Audible Inc, 1 Washington Place, Newark, NJ 15th floor


Joshua Gordon @Google talked about #MachineLearning and the TensorFlow package. TensorFlow is an open source library of machine learning programs. Using the library, you manipulate tensors by defining graphs of functions that operate on these multidimensional structures.
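A minimal example of this define-a-graph-then-run style (TensorFlow 1.x-era API):

```python
import tensorflow as tf

# Build a graph: nodes are operations on tensors, nothing runs yet.
a = tf.placeholder(tf.float32, shape=[2, 2])
b = tf.placeholder(tf.float32, shape=[2, 2])
c = tf.matmul(a, b)

# Execute the graph in a session, feeding values for the placeholders.
with tf.Session() as sess:
    result = sess.run(c, feed_dict={a: [[1, 0], [0, 1]],
                                    b: [[2, 3], [4, 5]]})
    print(result)
```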

The library runs on Linux and OS X, and runs on Windows using Docker. Support for Android is on the way. Joshua showed several applications, including one that repaints an image in van Gogh’s style by merging layers from a network identifying content in the original image with layers from a second network trained on the painter’s style.

Next, Yufeng Guo @Google talked about out-of-the-box machine learning APIs to classify images. Google has a cloud vision API and will shortly release a speech API.

The vision API imports a JPEG file and outputs a description in JSON format, including the items identified and the confidence that the items are correctly identified. It also gives the coordinates of the items identified and links to the full description of the items in Google’s database. The face detection routine also outputs information such as rollAngle, joyLikelihood, etc. The service is free for up to 1000 requests per month.
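A hedged sketch of calling the Vision API's REST endpoint from Python; the image file and API key are placeholders:

```python
import base64
import json
import requests

with open("photo.jpg", "rb") as f:               # placeholder image
    content = base64.b64encode(f.read()).decode()

body = {"requests": [{
    "image": {"content": content},
    "features": [{"type": "LABEL_DETECTION", "maxResults": 5},
                 {"type": "FACE_DETECTION"}],
}]}

resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": "YOUR_API_KEY"},              # placeholder credential
    data=json.dumps(body))
print(resp.json())       # JSON with labels, confidences, face attributes
```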


#DataDrivenNYC: #FaultTolerant #Web sites, #Finance, Predicting #B2B buying behavior, training #DeepLearning

Posted on May 18th, 2016

#DataDrivenNYC

05/18/2016 @AXA auditorium, 787 7th  Avenue, NY


Four speakers presented:

First, Nicolas Dessaigne @Algolia (a subscription service that provides a search API) talked about the challenges of building a highly fault-tolerant worldwide service. The steps resulted from their understanding of points of failure within their systems and the infrastructure their systems depend on.

Initially, they concentrated on their software development process, including failed updates. To overcome these problems, they update one server at a time (within a rack of servers), do partial updates, and use Chef to automate deployment.

Then they migrated their domain from the .io TLD to .net to avoid slow DNS response times they had seen intermittently in Asia. This was followed by these upgrades:

Feb 2015. Set up clusters of servers worldwide, so users have a server in their region: lower latency

March 2015. Physically separate server clusters within a region to different providers

May 2015. Create fallback DNS servers

July 2015. Put a third data center online to make indexing robust

April 2016. Implement 1-second granularity for their system monitoring

Next, Matt Turck interviewed Louis DiModugno @AXA. In the US, AXA’s main focus is on predictive underwriting of the insurance process. They also have projects to incorporate sensors into products and to correctly route queries to call centers based on the demographics of the customer. Worldwide, they have three analysis hubs: France, the US, and Singapore (coming online).

Louis oversees both data and analytics in the U.S. and both he and the CTO report to the CIO.  They are interested in expanding their capabilities in areas such as creating unstructured databases from life insurance data that are currently on microfiche.

In the third presentation, Amanda Kahlow @6Sense talked about their business model of providing information to customers in B2B commerce. They analyze business searches, customer web sites, and visits to publishers’ (e.g. Forbes) web sites. Their goal is to determine the timing of customer purchases.

B2B purchases are different from B2C purchases since

  1. Businesses research their purchases online before they buy
  2. The research takes time (long sales cycle)
  3. The decision to buy involves multiple people within the company

So there are few impulse buys, and buyer behavior signals that a purchase is imminent.

The main CMO question is when (not who).

6sense ties data together across searches (anonymous data). The goal is to identify when companies are in a specific part of the buying cycle, so sales can approach them at the right moment. (Example: show click-to-chat when the analytics say the customer is ready to buy.)

Lastly, Peter Brodsky @HyperScience spoke about tools they are developing to speed up machine learning. These include:

  1. Tools to make it easier to add new data sets: fields such as dates, which may be in different formats, need to be matched; missing data must be handled; and lots of labeled examples are needed.
  2. Tools to speed up training time.

The speed-up is achieved by identifying subnets within the larger neural network that perform distinct functions. To determine whether two subnets (in different networks) are equivalent, move one subnet from one network into another network in place of a subnet there and see if the function is unchanged: freeze the weights inside the subnet and outside the subnet, and retrain the interface between the net and the subnet.
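A hypothetical PyTorch sketch of the freeze-and-retrain test (the layer sizes and placement are invented for illustration):

```python
import torch.nn as nn

host_front = nn.Linear(32, 16)   # stands in for the host network
subnet = nn.Linear(16, 16)       # subnet transplanted from another net
interface = nn.Linear(16, 16)    # the only part allowed to retrain
head = nn.Linear(16, 10)

# Freeze the weights inside the subnet and outside it.
for module in (host_front, subnet, head):
    for p in module.parameters():
        p.requires_grad = False

model = nn.Sequential(host_front, interface, subnet, head)
trainable = [p for p in model.parameters() if p.requires_grad]
# ...train only `trainable`, then check whether the network's function
# is unchanged to decide if the two subnets are equivalent.
```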

This creates building blocks which can be combined into larger blocks. These blocks can be applied to jump-start the training process.



Teaching computers to be more creative than humans through #games

Posted on April 19th, 2016

#ACM New York City

04/19/2016 @NYU Courant Institute, 251 Mercer Street, NY


Julian Togelius @NYU spoke about #AI, #games, and #creativity. He talked about how game playing has been part of the development of AI and how AI can change the creation of games for humans.

Julian first talked about how algos play board games better than the best humans, starting with #Chess and finally #Go in 2016. But he feels that board games are only a minute part of the universe of games played by humans. He explained how programs have tackled 3 video games:

  1. #Starcraft
  2. #SuperMario
  3. Car racing

Of the three, Starcraft appears to offer the biggest challenge for computers due to the size and complexity of the game-playing universe. The first level of SuperMario can be easily solved with a simple algorithm, but higher levels require more sophistication to get around overhangs. Car racing is simple when there is only a single car, but racing against competitors requires an understanding of the competitors’ strategies. However, the code that successfully solves one of these games does not immediately generalize to solutions for the other games.

He next opined that intelligence is more than solving specific problems; it implies the ability to solve a wide range of problems. This can be summarized by the Legg and Hutter formula, which is a sum of performance across all games, weighted by game complexity.
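The measure is usually written as follows (the standard statement of the Legg–Hutter formula, not taken from the talk slides): the performance V of agent \pi on each environment \mu, weighted by the environment's Kolmogorov complexity K(\mu):

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```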

In competitions of algos across a variety of games, the Monte Carlo Tree Search (a statistical tree search algorithm that uses a forest of trees) appears to do best.

Julian next talked about how AI can be used to create better games for humans: PCG (Procedural Content Generation)

  1. Save development time
  2. Non-human creativity – most humans just copy other humans
  3. Create endless games
  4. Create player-adaptive games
  5. Study game design by formalizing it

He talked about using evolution to search for good content using

  1. Combinatorial creativity = combine lots of ideas to search a space
  2. Transformational creativity = change the problem definition to come up with new ideas

He proposed a collaboration of humans and algos. One tool to do this is the LUDI game description developed by Cameron Browne. Using the game descriptions of many games, one can use a genetic algorithm to combine the rules to create other games, some of which are interesting to humans. The game #Yavalath was created using this process. He also showed pictures of a collaborative tool for creating versions of the Cut-The-Rope game in which a human places objects in the space and the algo solves it.

Other research looks at humans playing specific games to develop an algorithm that predicts which aspects of a game create user interest, and to predict whether individuals with different skill levels will find a game (or level of a game) interesting.


Interactive and commodity #MachineLearning

Posted on April 18th, 2016

#NewYorkArtificalIntelligence

04/18/2016 @Rise, 43 W 23rd St, NY


Daniel Hsu @ Columbia University and Andreas Mueller @ NYU presented current research in machine learning.

Daniel Hsu differentiated between non-interactive learning (supervised machine learning in which inputs and output labels are presented to a program to learn a prediction function) versus #InteractiveMachineLearning in which

  1. Learning agent interacts with the world
  2. Learning agent has some objective in mind
  3. Data available to learner depends on learner’s decisions
  4. State of the world depends on the learner’s decisions – this is optional depending on the problem

Interactive learning is used when it is expensive to determine the output label, so only some tests are performed. The key is balancing the algorithm’s ability to exploit its knowledge versus its need to explore the space (statistically consistent active learning). The optimal method is to assign a probability that an output label is queried based on the specific input.

Daniel talked about some algorithms to heuristically assign probabilities using an inverse propensity weight to overcome sampling bias.
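A minimal sketch of the idea with an invented query rule: uncertain points are queried with higher probability, and queried labels get weight 1/p so the resulting estimates stay unbiased:

```python
import numpy as np

def query_probability(confidence, p_min=0.1):
    # More uncertain predictions (confidence near 0.5) => higher
    # probability of paying for the label.
    return np.clip(2.0 * (1.0 - confidence), p_min, 1.0)

rng = np.random.default_rng(0)
confidences = rng.uniform(0.5, 1.0, size=1000)  # toy model confidences
p = query_probability(confidences)
queried = rng.random(1000) < p                  # which labels we buy
weights = np.where(queried, 1.0 / p, 0.0)       # inverse propensity
print("labels queried:", int(queried.sum()))
```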

Next, Andreas Mueller (who is part of the #scikit-learn team) talked about using Bayesian optimization to find the best model parameters. A variety of parameter sets are evaluated and meta-learning is applied to find a global optimum via a machine learning algorithm.
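A minimal Bayesian-optimization sketch (the objective is a toy stand-in for validation accuracy, and the acquisition rule here is a simple upper confidence bound rather than whatever the speakers use):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                      # hypothetical validation score
    return -(x - 0.3) ** 2

X = np.array([[0.0], [1.0]])           # hyperparameter values tried
y = np.array([objective(0.0), objective(1.0)])

gp = GaussianProcessRegressor()
for _ in range(10):
    gp.fit(X, y)
    grid = np.linspace(0, 1, 101).reshape(-1, 1)
    mean, std = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(mean + 1.96 * std)]   # explore + exploit
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next[0]))

print("best hyperparameter:", X[np.argmax(y)][0])
```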


#IBM #Watson and #FacialRecognition

Posted on November 16th, 2015

NUI Central

11/16/2015 @Wework, 69 Charlton St, NY


Before the main presentation, Roberto Valenti @Sightcorp talked about his company’s development of face-analysis technology. The technology can extract information from up to 50 faces simultaneously, including age, gender, mood (facial expression), ethnicity, and attention.

Future applications could include home automation, gaming (mapping to an avatar, or use as input), medical applications, and interactive ads in public spaces.

In the main presentation, Michael Karasick @IBMWatson talked about the applications and APIs currently offered by Watson:

  1. Personality API which correlates word usage in one’s writing with the author’s personality.
  2. Analyze the tone of writing (email) to target a demographic.
  3. Respond to questions over the phone
  4. Control emotional expressions for Pepper, a robot from Softbank
  5. Vision diagnosis of melanoma
  6. Chef Watson interprets recipes incorporating your food preferences
  7. Watson Stories summarizes stories using natural language analysis. Currently it is being refined using supervised learning under the guidance of an internal team at Watson: the system receives feedback on the tone, etc.


Self-Driving vehicle: #ComputerVision and #ControlSystems

Posted on July 28th, 2015

#SelfDrivingVehicle

07/28/2015 @EqualSpace, 89 Market St., Newark, NJ


Parth, Praful, and Ak updated the group on their progress toward a demo at Bell Works on September 24. Last month the emphasis was on detecting things in 3-D. This time, the emphasis was on actuation on their current platform: a drive-by-wire Baja go-kart. A video showed their program controlling the brakes and steering using an Arduino serial interface. Their system will eventually control a MellowCab, a 2-passenger electric vehicle.

They next talked about their work on computer vision using histogram clustering. They receive grayscale images (to be replaced eventually by an RGB input) and divide the pixels into rectangular blocks. They generate a histogram of the grayscale levels within each block and group together contiguous blocks with similar histograms (as measured by a chi-squared distance). The groups of contiguous blocks are then used to detect the road. The system will eventually be supplemented by a depth camera, which can measure out to 20 meters.
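A hypothetical NumPy sketch of the block-histogram grouping (block size, bin count, and threshold are invented):

```python
import numpy as np

def block_histograms(gray, block=32, bins=16):
    # Split a grayscale frame into blocks and histogram each block.
    h, w = gray.shape
    hists = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = gray[by:by + block, bx:bx + block]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
            hists[(by // block, bx // block)] = hist / hist.sum()
    return hists

def chi2(p, q, eps=1e-10):
    # Chi-squared distance between two normalized histograms.
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def similar_neighbors(hists, threshold=0.1):
    # Pairs of adjacent blocks whose histograms are close; contiguous
    # groups of such blocks are candidates for the road surface.
    pairs = []
    for (r, c), h in hists.items():
        for nb in ((r + 1, c), (r, c + 1)):
            if nb in hists and chi2(h, hists[nb]) < threshold:
                pairs.append(((r, c), nb))
    return pairs
```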

Their main hardware components in support of computer vision are:

  1. Nvidia Jetson TK1 – dev board: 2GB, quad-core, supports CUDA
  2. Stereolabs ZED – stereo camera

Their code is posted on https://github.com/DriveAI/DriveAI-Platform
