Applications of #DeepLearning in #Healthcare
Posted on March 28th, 2017
03/28/2017 @NYU Courant Institute (251 Mercer St, New York, NY)
Sumit Chopra, the head of A.I. Research @Imagen Technologies, introduced the topic by saying that the two areas of our lives that will be most affected by AI are healthcare and driverless cars.
Healthcare data can be divided into
- Clinical data – in digital form with privacy concerns; incomplete since hospitals don’t share their datasets
- Payer data – from insurance providers; more complete (unless the patient switches payers) but less detailed
- Other – cell phones, etc.
He focuses on medical imaging – mainly diagnostic radiology – roughly 600 million studies in the U.S., but a shortage of skilled radiologists and a prevalence of errors. The images are very large, with high resolution, low contrast, and highly subtle cues => radiology is hard to do well
Possible solution: take a standard pre-trained model (AlexNet/VGG/…) and fine-tune it on a small number of images, but this might not work since the signal is subtle.
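A minimal sketch of that transfer-learning recipe (freeze a pre-trained backbone, train only a new head), assuming PyTorch; the tiny CNN here is a hypothetical stand-in for a real pre-trained AlexNet/VGG:

```python
import torch
import torch.nn as nn

# Tiny CNN standing in for a pre-trained backbone such as AlexNet/VGG
# (hypothetical stand-in; a real experiment would load pre-trained weights).
backbone = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Freeze the backbone so only the new head is trained on the small
# radiology dataset.
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(8, 2)  # finding / no-finding logits
model = nn.Sequential(backbone, head)

x = torch.randn(4, 1, 224, 224)  # batch of 4 single-channel images
logits = model(x)
print(logits.shape)  # torch.Size([4, 2])
```

The worry in the talk is that the frozen features, learned on natural images, may simply never encode the subtle radiological cues, no matter how the head is trained.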
Also radiology reports, which could be used for supervised training, are unstructured, and it is hard to extract what a report actually asserts => weak labels at best
Much work has been done on this problem, usually using deep convolutional neural nets.
First step: image registration = rotate & crop.
Train a deep convolutional network (registration network), then send the output to a detection network for binary segmentation.
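The two-stage pipeline might be wired up as below, assuming PyTorch; the spatial-transformer-style registration step and both architectures are my own illustrative assumptions, not Imagen's actual models:

```python
import torch
import torch.nn as nn

class RegistrationNet(nn.Module):
    """Predicts a 2x3 affine transform and resamples the image — a
    minimal, differentiable stand-in for the registration step."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(4, 6),  # 2x3 affine matrix
        )
        # initialize to the identity transform
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1.0, 0, 0, 0, 1, 0]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = nn.functional.affine_grid(theta, x.size(), align_corners=False)
        return nn.functional.grid_sample(x, grid, align_corners=False)

class DetectionNet(nn.Module):
    """Per-pixel binary segmentation head."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1),  # one logit per pixel
        )

    def forward(self, x):
        return self.net(x)

x = torch.randn(2, 1, 64, 64)        # batch of 2 images
aligned = RegistrationNet()(x)        # stage 1: register
mask_logits = DetectionNet()(aligned) # stage 2: segment
print(mask_logits.shape)  # torch.Size([2, 1, 64, 64])
```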
Could use generative models for images to train doctors
Leverage different modalities of data
Sumit has found that a random search of hyperparameter space works better than either grid search or optimizer search.
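The usual intuition for this finding, sketched on a toy objective of my own invention: with the same trial budget, grid search wastes trials repeating the same few values of the hyperparameter that matters, while random search covers it densely.

```python
import random

# Toy objective with one important hyperparameter (lr) and one that
# barely matters (momentum) — the setting where random search shines.
def score(lr, momentum):
    return -(lr - 0.013) ** 2 + 1e-6 * momentum

random.seed(0)

# Grid search: 4 x 4 = 16 trials, but only 4 distinct lr values.
grid = [(lr, m) for lr in (0.001, 0.01, 0.1, 1.0)
                for m in (0.0, 0.3, 0.6, 0.9)]
best_grid = max(score(lr, m) for lr, m in grid)

# Random search: 16 trials, 16 distinct lr values (log-uniform).
trials = [(10 ** random.uniform(-3, 0), random.uniform(0, 0.9))
          for _ in range(16)]
best_rand = max(score(lr, m) for lr, m in trials)

print(best_grid, best_rand)
```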
Intro to #DeepLearning using #PyTorch
Posted on February 21st, 2017
02/21/2017 @ NYU Courant Institute (251 Mercer St, New York, NY)
Soumith Chintala @Facebook first talked about trends in the cutting edge of machine learning. His main point was that the world is moving from fixed agents to dynamic neural nets in which agents restructure themselves over time. Currently, the ML world is dominated by static datasets + static model structures which learn offline and do not change their structure without human intervention.
He then talked about PyTorch which is the next generation of ML tools after Lua #Torch. In creating PyTorch they wanted to keep the best features of LuaTorch, such as performance and extensibility while eliminating rigid containers and allowing for execution on multiple-GPU systems. PyTorch is also designed so programmers can create dynamic neural nets.
Other features include
- Kernel fusion – take several objects and fuse them into a single object
- Order of execution – reorder objects for faster execution
- Automatic work placement when you have multiple GPUs
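A small example of the dynamic-graph idea Soumith described: because PyTorch rebuilds the graph on every forward pass, ordinary Python control flow can make the network's structure depend on the data (the module itself is my own illustration).

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """A net whose depth changes per call: the graph is rebuilt on
    every forward pass, so a plain Python loop works."""
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 8)
        self.out = nn.Linear(8, 1)

    def forward(self, x, depth):
        for _ in range(depth):       # data-dependent number of layers
            x = torch.relu(self.layer(x))
        return self.out(x)

net = DynamicNet()
x = torch.randn(3, 8)
y_shallow = net(x, depth=1)
y_deep = net(x, depth=5)  # same module, different graph
```

In a static-graph framework the two calls would require compiling two different models; here they are just two executions of the same Python code.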
PyTorch is available for download on http://pytorch.org and was released Jan 18, 2017.
Currently, PyTorch runs only on Linux and OS X.
#ComputerScience and #DigitalHumanities
Posted on December 8th, 2016
PRINCETON #ACM / #IEEE-CS CHAPTERS DECEMBER 2016 JOINT MEETING
12/08/2016 @Princeton University Computer Science Building, Small Auditorium, Room CS 105, Olden and William Streets, Princeton NJ
Brian Kernighan @Princeton University spoke about how computers can assist in understanding research topics in the humanities.
He started by presenting examples of web sites with interactive tools for exploring historical material
- Explore a northern and a southern town during the Civil War: http://valley.lib.virginia.edu/
- An Expedia for travelers across the ancient Roman world: http://orbis.stanford.edu/
- The court records in London from 1674-1913: https://www.oldbaileyonline.org/
- Hemingway and other literary stars in Paris from the records of Sylvia Beach
Brian then talked about the challenges of converting the archival data: digitize, meta tag, store, query, present results, make available to the public
In preparation for teaching a class this fall on digital humanities, he talked about his experience extracting information from a genealogy of the descendants of Nicholas Cady (https://archive.org/details/descendantsofnic01alle) in the U.S. from 1645 to 1910. He described the challenges of standard OCR transcription of page images to text: dropped characters and misplaced entries. There were then the challenges of understanding the abbreviations in the birth and death dates for individuals, and the limitations of off-the-shelf software for highlighting important relations in the data.
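Parsing those abbreviated entries is the kind of job a small script handles well. The entry format below is hypothetical (genealogy abbreviations vary, and the Cady volume's exact conventions aren't shown here), but it illustrates the extraction step:

```python
import re

# Hypothetical entry format: "b." = born, "d." = died, e.g.
# "Nicholas, b. 1645; d. 1712". Real conventions vary by volume.
ENTRY = re.compile(r"b\.\s*(\d{4})(?:;\s*d\.\s*(\d{4}))?")

def lifespan(text):
    """Extract (birth_year, death_year) from an abbreviated entry;
    death_year is None when only a birth is recorded."""
    m = ENTRY.search(text)
    if not m:
        return None
    born = int(m.group(1))
    died = int(m.group(2)) if m.group(2) else None
    return born, died

print(lifespan("Nicholas, b. 1645; d. 1712"))  # (1645, 1712)
print(lifespan("Sarah, b. 1708"))              # (1708, None)
```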
Brian highlighted some facts derived from the data:
- Mortality in the first five years of life was very high
- Names of children within a family were often recycled if an earlier child had died very young
Teaching computers to be more creative than humans through #games
Posted on April 19th, 2016
04/19/2016 @NYU Courant Institute, 251 Mercer Street, NY
Julian Togelius @NYU spoke about #AI, #games and #creativity. He talked about how game playing has been part of the development of AI and how AI can change the creation of games for humans.
Julian first talked about how algos play board games better than the best human, starting with #Chess and finally #Go in 2016. But he feels that board games are only a minute part of the universe of games played by humans. He explained how programs tackled 3 video games
- Car racing
- Super Mario
- Starcraft
Of the three, Starcraft appears to offer the biggest challenge for computers due to the size and complexity of the game-playing universe. The first level of Super Mario can be easily solved with a simple algorithm, but higher levels require more sophistication to get around overhangs. Car racing is simple when there is only a single car, but competitive races require an understanding of competitors’ strategies. However, the code that successfully solves one of these games does not immediately generalize to solutions for the other games.
He next opined that intelligence is more than solving specific problems; it implies the ability to solve a wide range of problems. This can be summarized by the Legg and Hutter formula, which sums an agent’s skill across all games, weighted by each game’s complexity (simpler games count more).
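The Legg–Hutter measure can be written out as follows (the standard published form, not taken from the talk slides):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where \(\pi\) is the agent, \(E\) the set of computable environments (games), \(V_{\mu}^{\pi}\) the expected reward the agent achieves in environment \(\mu\), and \(K(\mu)\) the Kolmogorov complexity of \(\mu\) — so the weight \(2^{-K(\mu)}\) shrinks as the game gets more complex.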
In competitions of algos across a variety of games, Monte Carlo Tree Search (a statistical tree-search algorithm guided by random playouts) appears to do best.
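A minimal sketch of Monte Carlo Tree Search (the UCT variant) on a toy game of my choosing — players alternately remove 1–3 tokens and whoever takes the last token wins — showing the four classic phases: selection, expansion, simulation, backpropagation.

```python
import math
import random

class Node:
    def __init__(self, tokens, parent=None, move=None):
        self.tokens = tokens          # tokens left; current player to move
        self.parent, self.move = parent, move
        self.children, self.wins, self.visits = [], 0, 0
        self.untried = [m for m in (1, 2, 3) if m <= tokens]

    def ucb_child(self, c=1.4):
        # UCB1: exploit high win rates, explore rarely-visited children
        return max(self.children, key=lambda n: n.wins / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def rollout(tokens):
    """Random playout; returns 1 if the player to move at `tokens` wins."""
    to_move_wins = True
    while tokens > 0:
        tokens -= random.choice([m for m in (1, 2, 3) if m <= tokens])
        if tokens == 0:
            return 1 if to_move_wins else 0
        to_move_wins = not to_move_wins

def mcts(root_tokens, iters=2000):
    random.seed(1)
    root = Node(root_tokens)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:   # 1. selection
            node = node.ucb_child()
        if node.untried:                            # 2. expansion
            m = node.untried.pop()
            node = Node(node.tokens - m, parent=node, move=m)
            node.parent.children.append(node)
        # 3. simulation, from the perspective of the player to move
        result = rollout(node.tokens) if node.tokens else 0
        while node:                                 # 4. backpropagation
            node.visits += 1
            node.wins += 1 - result  # a node stores wins for the mover into it
            result = 1 - result      # flip perspective each level up
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move

best = mcts(5)
```

From 5 tokens the winning strategy is to leave the opponent at a multiple of 4, and with enough iterations the most-visited root child converges to that move.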
Julian next talked about how AI can be used to create better games for humans: PCG (Procedural Generated Content)
- Save development time
- Non-human creativity – most humans just copy other humans
- Create endless games
- Create player-adaptive games
- Study game design by formalizing it
He talked about using evolution to search for good content using
- Combinatorial creativity = combine lots of ideas to search a space
- Transformational creativity = change the problem definition to come up with new ideas
He proposed a collaboration of humans and algos. One tool to do this is the Ludi game description language developed by Cameron Browne. Using the game descriptions of many games, one can use a genetic algorithm to combine the rules to create other games, some of which are interesting to humans. The game #Yavalath was created using this process. He also showed pictures of a collaborative tool for creating versions of the Cut-The-Rope game in which a human places objects in the space and the algo solves it.
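The rule-recombination idea can be sketched as a toy genetic algorithm. The rule vocabulary and the "interestingness" score below are invented for illustration (real systems like Ludi evaluate candidate games by simulating play), but the evolutionary loop is the same shape:

```python
import random

# Invented rule vocabulary; a nod to Yavalath's "win with 4 in a row
# but lose with 3" tension in the scoring below.
RULES = ["hex-board", "square-board", "place", "move", "capture",
         "3-in-a-row", "4-in-a-row", "no-3-in-a-row"]

def interestingness(game):
    # Hypothetical stand-in for a real playtest-based evaluation.
    score = len(set(game))                 # reward rule variety
    if "4-in-a-row" in game and "no-3-in-a-row" in game:
        score += 5                         # reward rule tension
    return score

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(game, rate=0.2):
    return [random.choice(RULES) if random.random() < rate else r
            for r in game]

random.seed(3)
population = [random.sample(RULES, 4) for _ in range(20)]
for _ in range(30):                        # generations
    population.sort(key=interestingness, reverse=True)
    parents = population[:10]              # elitism: keep the best half
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=interestingness)
```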
Other research looks at humans playing specific games to develop an algorithm that predicts which aspects of the game create user interest and predict whether other individuals with different skill levels will find a game (level of a game) interesting.
#SoftRobotics for Hard Problems
Posted on March 25th, 2015
03/25/2015 @NYResistor, 87 3rd Ave, Brooklyn, NY
Matt Borgatti @Super-Releaser showed how to create a soft robotic “animal” made of silicone. He first spoke about how soft robots can fill an important niche where robots assist humans in medical situations, but without the dangers of hard robots hurting the person. Soft robots can be made of fabric or muscle wire (but see also Empire Robotics), but most commonly are made of silicone. Internal cavities are filled with pressurized air to change their shape.
The remainder of the presentation was a step-by-step demonstration of how to create the body of a robot. The general steps are
- Use a CAD program such as SolidWorks to design the robot and the moulds to shape the silicone
- Create the moulds using a 3-d printer
- Cast a wax image of the internal air space within the robot
- Thoroughly mix the two silicone components
- Use a vacuum pump to remove bubbles in the mixture
- Place the wax image within the main mould and cast the silicone body
- Attach the body to a nozzle and air supply
Matt created a robot and had members of the audience participate in some steps including filling moulds and mixing the silicone.
Matt also spoke about the ongoing challenges to create the next generation of robots. These include increasing the force that the robot generates: currently most are filled with 2 to 3 atmospheres of pressure which generates 1 pound of force. He noted that design changes, such as adding spines to redirect the force, can increase the available force. He also noted how improved simulation software could speed development by giving a better understanding of how different internal air shapes affect the robot’s function.
For another perspective on soft robotics, please see my previous post on the topic.
35th annual Computer Graphics Film Show SIGGRAPH Video Review
Posted on October 16th, 2014
The meeting was a highly entertaining (see for example “Johnny Express”) look at some of the most recent computer graphics shown at the #ACM #SIGGRAPH in August in Vancouver. Some of the academic topics explored are rendering many characters in the same scene (Mr. Peabody playing multiple instruments) and moving toward a more fluid and dynamic interpretation of the characters (such as Olive Oyl’s arms in Popeye cartoons).
The presenters also talked about Blender, open source #animation software. Audience members were encouraged to download the demo videos, but especially to view the production files showing the inner workings of the demos.