New York Tech Journal
Tech news from the Big Apple

DataDrivenNYC: bringing the power of #DataAnalysis to ordinary users, #marketers, #analysts.

Posted on June 18th, 2016

#DataDrivenNYC

06/13/2016 @AXA Equitable Center (787 7th Avenue, New York, NY 10019)


The four speakers were:

Adam @NarrativeScience talked about how people with different personalities and jobs may require/prefer different takes on the same data. His firm ingests data and has systems to generate natural language reports customized to the subject area and the reader’s needs.

They currently develop stories with the guidance of subject-matter experts, but eventually will move to machine learning to automate new subject areas.
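As a rough sketch of the idea, here is template-based report generation that varies with the reader; every name, profile, and figure below is hypothetical and only illustrates the concept, not Narrative Science's actual system:

```python
# Hypothetical sketch: render the same metrics differently per reader profile.
# None of these names come from Narrative Science; they only illustrate the idea.

METRICS = {"revenue": 1_250_000, "revenue_prev": 1_100_000}

def growth_pct(m):
    return 100 * (m["revenue"] - m["revenue_prev"]) / m["revenue_prev"]

def report(metrics, audience):
    g = growth_pct(metrics)
    if audience == "executive":   # terse, outcome-focused
        return f"Revenue grew {g:.1f}% quarter over quarter."
    if audience == "analyst":     # numbers-first, with raw figures
        return (f"Revenue: ${metrics['revenue']:,} vs "
                f"${metrics['revenue_prev']:,} prior ({g:+.1f}%).")
    return f"Sales went up about {round(g)}% compared with last quarter."

print(report(METRICS, "executive"))
print(report(METRICS, "analyst"))
```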

Next, Neha @Confluent talked about how they created Apache Kafka: a streaming platform that collects data and makes those data accessible in real time.

The move away from batch processing also brings other features: persistent storage of the data, a design for distributed processing, fault tolerance, online partitioning, elastic scaling, and so on. In essence, it is micro-event processing.
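For a concrete feel, here is a minimal sketch of producing and consuming events through Kafka from Python using the confluent-kafka client; the broker address, topic, and group id are placeholder values, not anything from the talk:

```python
# Minimal Kafka produce/consume sketch using the confluent-kafka Python client.
# Broker address, topic, and group id are placeholder values.
from confluent_kafka import Producer, Consumer

conf = {"bootstrap.servers": "localhost:9092"}

# Producer: append an event to the "page-views" topic.
producer = Producer(conf)
producer.produce("page-views", key="user-42", value='{"url": "/home"}')
producer.flush()  # block until the message is delivered

# Consumer: read events back in real time, from the beginning of the log.
consumer = Consumer({**conf,
                     "group.id": "demo-readers",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["page-views"])

msg = consumer.poll(timeout=5.0)   # returns None if nothing arrives in time
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())  # b'user-42' b'{"url": "/home"}'
consumer.close()
```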

The third speaker, Nitay @ActionIQ, talked about the challenge of retaining customers using current business intelligence tools: data are usually organized by activity, not by customer, so BI tools often require either complex SQL queries written by developers or siloed queries by business analysts.

ActionIQ wants to bridge this gap with automated tools that build the specialized databases analysts use, but with enough understanding of the analyst's needs to ingest complex data simply: converting physical measurements to common units, recognizing that fields labeled as dates should be normalized to a common format, and so on. The data importer looks at the whole workflow to create the specialized database that marketers need.
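A toy version of the underlying data problem: event logs keyed by activity have to be pivoted and normalized into a per-customer view. A minimal pandas sketch, where the column names and cleanup rules are hypothetical, not ActionIQ's product:

```python
# Sketch: turn an activity-keyed event log into a customer-keyed table,
# normalizing heterogeneous date strings along the way. Names are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "customer_id": ["c1", "c2", "c1", "c2"],
    "event_date":  ["2016-06-01", "06/02/2016", "2016-06-03", "06/04/2016"],
    "amount":      [10.0, 12.5, 7.5, 3.0],
})

# Parse each date string individually so mixed formats all land on one type.
events["event_date"] = events["event_date"].apply(pd.to_datetime)

# Pivot from one-row-per-activity to one-row-per-customer.
per_customer = events.groupby("customer_id").agg(
    first_seen=("event_date", "min"),
    last_seen=("event_date", "max"),
    total_spend=("amount", "sum"),
    n_events=("amount", "size"),
)
print(per_customer)
```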

Finally, Christopher @Arimo talked about creating collaborative tools, so business users and data scientists can work together on large shared databases using the tools each prefers: natural language, SQL, R, Python, etc.
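To illustrate the idea, here is the same question asked of one shared table in two of those idioms, SQL and a Python DataFrame; the table and column names are made up:

```python
# Sketch: one shared table queried two ways -- SQL for the business user,
# a DataFrame API for the data scientist. Table/column names are made up.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("west", 80.0), ("east", 20.0)])

# SQL view of the question:
sql_answer = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()

# DataFrame view of the same question:
df = pd.read_sql_query("SELECT * FROM sales", conn)
df_answer = df.groupby("region")["amount"].sum()

print(sql_answer)   # [('east', 120.0), ('west', 80.0)]
print(df_answer)
```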

Christopher then talked about techniques for making deep learning more efficient and effective. He discussed the tradeoffs in optimizing the two steps of neural network training: computing the gradient to determine the direction, and the descent step that moves the parameters along that gradient. He emphasized that communication bottlenecks between these two steps can negate the computational advantages of tuning each step in isolation.
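Those two steps map directly onto the inner loop of gradient descent. A minimal NumPy sketch on a toy least-squares problem, where the data, learning rate, and iteration count are arbitrary illustrations rather than Arimo's setup:

```python
# Sketch of the two steps in neural-network optimization on a toy
# least-squares problem: (1) compute the gradient, (2) take a descent step.
# If the two steps run on different workers, parameters and gradients must
# be shipped between them every iteration -- the communication cost he noted.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)   # parameters
lr = 0.01         # learning rate
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # step 1: gradient (direction)
    w -= lr * grad                         # step 2: descent (move along it)

print(w)  # approaches [1.0, -2.0, 0.5]
```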

In closing, he talked about how the optimal path for AI is probably a combination of human neurons and computing machines (away from the von Neumann model). This also offers a way around the view of Stephen Hawking and others that AI could spell the end of humanity. His final comment was that “the great companies that will leverage AI have not yet been born.”

posted in: data, data analysis, Data Driven NYC, Data science, databases, Open source