New York Tech Journal
Tech news from the Big Apple

HardwiredNYC: #drones, #AmazonEcho, #SmartLuggage, #SmartCities

Posted on April 5th, 2016

#HardwiredNYC

04/05/2016 @WeWork, 115 W 18th Street, NY


The following speakers presented:

Brian Streem @Aerobo (builds and services drones) talked about the drone market. After showing two videos, Chromaticity and Paint the Sky (about the shooting of Chromaticity), Brian spoke about:

  1. The Aerobo mini – a lightweight drone designed to provide live, broadcast-quality feeds that can be transmitted up to 3 miles.
  2. They hold a Section 333 exemption, which is required to fly a drone commercially in the U.S. They anticipate new FAA rules that may relax some of the current requirements, such as requiring a pilot's license for commercial flights. He also anticipates that a category of micro UAVs will be created with alternative licensing requirements.

Next, Donn Morrill @AmazonEcho talked about the hardware and software of the Echo and how Amazon is positioning it as the center of smart-home integration. He provided some insights into the design philosophy of the software, including:

  1. They will probably not release tools to analyze the tone, volume, or speed of speech detected by the API, since they are sensitive to the user experience and want to protect the brand
  2. In a similar vein, they do not plan to allow skills to be self-initiating (all skills require you to initiate the conversation) to avoid verbal spam.

Next, Matt Turck interviewed John Udashkin & Justin @Raden (luggage with a battery and Bluetooth). Justin conceived the product after extensive travel experience while working in the fashion industry. Fashion dictated a design with simple lines, as in the iPhone and Beats.

Justin and John noted

  1. Justin researched the luggage industry under the tutelage of a retired executive at Tumi.
  2. This contact allowed him to gain credibility when looking for a manufacturing partner
  3. They chose VC over crowdfunding for its greater flexibility
  4. John formerly worked at Quirky, so he had the manufacturing contacts needed for the electronics
  5. Integrating the electronics was difficult since luggage and electronics factories are very different: luggage factories are larger and dirtier, while electronics factories are smaller and emphasize cleanliness.
  6. They were careful to avoid problems passing air-transport security, such as limiting the size of the battery and making it removable. Also, the wiring and Bluetooth module are accessible if the bag is inspected.
  7. They eventually see their app as a full utility platform with information such as TSA wait times, real time flight updates, etc.
  8. They are looking beyond online sales and see the advantages of retail outlets such as malls.
  9. Their product can be seen at a pop-up store at 72 Spring Street, NY.

They also talked about pivots during development

Design change: they tested a biometric lock but found it was not useful and could create electronics issues, since luggage gets knocked around.

The electronics enclosure was originally different, but it suffered damage and wire breakage. The eventual design has a strong backplate to shockproof and waterproof the electronics.

Finally, Joao Barros @Veniam talked about the communication network developed in Porto, Portugal (also known for the elegant arch-truss bridge constructed there by Gustave Eiffel). Porto has a mesh network consisting of wi-fi hotspots supplemented by hotspots in vehicles. These hotspots allow seamless integration of wi-fi, cellular and 5.9 GHz networks.

Joao said that vehicles offer an ideal platform for hotspots

  1. They are mobile, so they can collect data throughout the city (i.e., a smart city)
  2. Their batteries are recharged by the engine
  3. They are large enough that carrying a box holding multiple communications devices is not an inconvenience
  4. They can provide in-vehicle entertainment
  5. They can be used as an emergency communications backup for other systems
  6. They can be used to help avoid vehicle collisions

The key technology is the ability to perform seamless handoffs across different networks (wi-fi, cell, 5.9 GHz).
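
A rough sketch of the idea (the names and logic below are illustrative, not Veniam's actual stack): a connection manager tracks which radios are currently up and always routes traffic over the best one, which is the core of a seamless handoff across wi-fi, cellular and 5.9 GHz links.

    # Illustrative only: always route over the cheapest available link.
    from dataclasses import dataclass

    @dataclass
    class Link:
        name: str        # "dsrc" (5.9 GHz), "wifi", "cellular"
        available: bool  # is the radio currently connected?
        cost: int        # lower = preferred (e.g., avoid metered cellular)

    def pick_link(links):
        """Return the cheapest available link, or None if nothing is up."""
        candidates = [l for l in links if l.available]
        return min(candidates, key=lambda l: l.cost, default=None)

    links = [Link("dsrc", True, 0), Link("wifi", False, 1), Link("cellular", True, 2)]
    active = pick_link(links)
    print("routing over:", active.name if active else "no link")

A real handoff layer also has to migrate open sessions when the chosen link changes, so that applications never notice the switch.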

Specific applications include sensors on garbage cans that indicate when they have been emptied, and heart-rate monitoring of drivers to flag issues on the road.

The system is also installed in Singapore and they will soon announce a rollout in the U.S.

posted in:  hardware, Hardwired NYC, Natural User Interface, NYC smart city and energy data, startup, UX

Express Yourself: Extracting #emotional analytics from #speech

Posted on March 7th, 2016

#HUICentral

03/07/2016 @WeWork, 69 Charlton St, NY


Yuval Mor & Bianca Meger @BeyondVerbal talked about the potential applications for their product. BeyondVerbal produces software, including their Moodies smartphone app, that assesses one's emotional state through the intonation of one's speech.

They take 13-second vocalizations (excluding pauses) and report the speaker's emotional state as one of 432 combined #emotions: 12 basic emotions, which can appear in pairs (12 x 12), times 3 levels of energy (pushing out/neutral/pulling). They also monitor 3 indices: arousal, valence (positive/negative), and temperament (somber/self-controlled/confrontational).
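
The arithmetic behind the 432 figure is easy to verify; here is a quick Python sanity check (the emotion labels are placeholders, not BeyondVerbal's own taxonomy):

    # 12 basic emotions, appearing in ordered pairs (12 x 12), times 3 energy levels
    from itertools import product

    basic_emotions = range(12)                       # placeholder labels
    energy_levels = ["pushing out", "neutral", "pulling"]

    combined = list(product(basic_emotions, basic_emotions, energy_levels))
    print(len(combined))                             # 12 * 12 * 3 = 432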

The software can be tricked by actors (and politicians) who are proficient at projecting the emotions of the characters they play. They do not do speaker separation, and they are resilient to some types of background noise. Speech after voice compression may be difficult to analyze since various frequencies are removed; however, they have improved their ability to analyze YouTube clips. They said there were differences in the diagnostic abilities for phonetic languages vs. tonal languages, but many characteristics appear to be cross-cultural.

They claim to measure 100 different acoustic features, but did not provide citations to academic research. Their validation appears to be primarily internal, with a team of psychologists evaluating spoken words.

One potential application is predicting the onset of a heart attack based on one's voice versus a prior baseline. They are currently conducting this research on 100 patients at the Mayo Clinic.

 

posted in:  HUI Central, Natural User Interface, NUI Central, UX

CodeDrivenNYC: caching web pages, #NLP, bringing #coding to the masses

Posted on November 20th, 2015

#CodeDrivenNYC

11/19/2015 @FirstMark, 100 Fifth Ave, NY


The first of the three presenters, David Mauro @Buzzfeed, spoke about creating Mattress, their first open-source iOS framework. Mattress caches web pages for later, offline consumption. It also makes pages appear to load more quickly when online.

David spoke about the hurdles in implementing this product:

  1. How do we download an entire web page?
  2. How do we provide the content back to the user?

Their first decision was to download the URL using UIWebView and then capture all requests as they come through using NSURLProtocol. UIWebView runs on the main thread and is resource intensive, but the alternative was to manually parse the HTML and the JS. They could not use WKWebView, since it does not handle NSURLProtocol, and a bug prevents simply saving to another NSURLCache. They use CommonCrypto to hash the URL into the file name, so even the longest URL maps to a unique identifier.
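
Mattress itself is an iOS framework built on UIWebView/NSURLProtocol/CommonCrypto, but the hashed-filename idea is easy to sketch in a few lines of Python (hashlib stands in for CommonCrypto, and the cache directory name is made up):

    # Hash the request URL so every cached response gets a fixed-length,
    # collision-resistant file name, no matter how long the URL is.
    import hashlib
    from pathlib import Path
    from typing import Optional

    CACHE_DIR = Path("offline-cache")    # hypothetical cache location

    def cache_path(url: str) -> Path:
        digest = hashlib.sha256(url.encode("utf-8")).hexdigest()
        return CACHE_DIR / digest

    def store(url: str, body: bytes) -> None:
        CACHE_DIR.mkdir(exist_ok=True)
        cache_path(url).write_bytes(body)

    def load(url: str) -> Optional[bytes]:
        path = cache_path(url)
        return path.read_bytes() if path.exists() else None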

They also need to know when a page is done downloading. Automated solutions tend either to terminate prematurely or not to terminate at all. Instead, they ask the user when the download is done.

How do they provide the content back through NSURLProtocol? First, determine whether the user is offline. If so, they retrieve the page from the custom offline cache. If the user is online, the system reloads the initial request.
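
In pseudocode terms (the names below are illustrative, not Mattress's API), the serve-back logic reduces to a small dispatch:

    # Offline: answer from the custom offline cache; online: reload normally.
    def respond(url, is_offline, cache, network):
        if is_offline:
            cached = cache.get(url)
            if cached is not None:
                return cached                  # page saved earlier for offline reading
            raise LookupError("offline and page was never cached")
        return network.fetch(url)              # online: reload the initial request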

The system was designed as a simple API that can run either in the foreground or as a background fetch. The background fetch needs to be monitored carefully so it does not use too much battery or slow processing excessively.

The second speaker, Rob Spectre @Twilio, demonstrated how easily applications can be made interactive using TextBlob, a Natural Language Processing library for Python.

Rob showed how to create an app that receives SMS text messages and changes its response based on your message. In just a few lines of code, he showed how the response can be differentiated based on the length of the message, its sentiment, its sentence structure, etc.
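
A minimal sketch of that kind of reply logic using TextBlob (the Twilio SMS wiring is omitted, and the thresholds and replies are invented for illustration; TextBlob may require a one-time corpora download via python -m textblob.download_corpora):

    from textblob import TextBlob

    def reply_to(message: str) -> str:
        blob = TextBlob(message)
        if len(blob.words) < 3:                     # very short message
            return "Tell me a bit more!"
        if blob.sentiment.polarity < -0.3:          # polarity ranges from -1 to 1
            return "Sorry to hear that."
        if blob.sentiment.polarity > 0.3:
            return "Glad to hear it!"
        return "You sent %d sentence(s). Noted." % len(blob.sentences)

    print(reply_to("I love this meetup"))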

Ryan Bubinski @Codecademy asked the question “What is code?”

As an overview of the many ways to answer that question, he recommended the 38,000-word article written by Paul Ford in Bloomberg in June 2015.

He summarized his view by saying that code is a lever that is becoming more powerful every day. As an example, he mentioned OpenFace, an open source program which uses a neural net for face recognition.

Making this lever available to more people requires either

  1. Making coding easier or
  2. Making it easier to learn how to code

 

posted in:  Code Driven NYC, iOS, Natural User Interface, Open source, Programming, UI

#IBM #Watson and #FacialRecognition

Posted on November 16th, 2015

NUI Central

11/16/2015 @WeWork, 69 Charlton St, NY


Before the main presentation, Roberto Valenti @Sightcorp talked about his company's development of face-analysis technology. The technology can extract information from up to 50 faces simultaneously, including age, gender, mood (facial expression), ethnicity, and attention.

Future applications could include home automation, gaming (mapping to an avatar, or use as input), medical uses, and interactive ads in public spaces.

In the main presentation, Michael Karasick @IBMWatson talked about the applications and APIs currently offered by Watson:

  1. A Personality API, which correlates word usage in one's writing with the author's personality.
  2. Analyze the tone of writing (e.g., email) to target a demographic.
  3. Respond to questions over the phone
  4. Control emotional expressions for Pepper, a robot from Softbank
  5. Visual diagnosis of melanoma
  6. Chef Watson interprets recipes incorporating your food preferences
  7. Watson Stories summarizes stories using natural language analysis. It is currently being refined using supervised learning under the guidance of an internal team at Watson: the system receives feedback on the tone, etc.

posted in:  AI, Natural User Interface, NUI Central

NYVR: designing a #VR app and making the VR user comfortable with the experience

Posted on September 18th, 2015

New York Virtual Reality

09/17/2015 @ Microsoft,11 Times Square, NY


Martin Schubert talked about his award-winning entry in the 2014 3D Jam, and Eric Greenbaum talked about making VR pleasant for humans.

After a brief introduction to the technology of the #LeapMotion (two infrared cameras plus infrared LEDs that produce a black-and-white image from which one's hand and finger positions and gestures are determined), Anthony introduced Martin Schubert, who created the VR program Weightless (YouTube video) using Blender, Unity, and Playmaker. Martin described his process to create the app in 6 steps:

  1. Identify the strengths of VR – 3D depth and sense of scale; easy to look around; good spatial awareness; strong sense of depth in a mid-range around 2 meters
  2. Identify the strengths of the Leap Motion – hand motions are natural 3D inputs; displaying the hands creates body presence; weak at precision pointing (binary inputs); fingertip interactions work well, but there is no haptic feedback -> as a result, moving objects in a weightless environment felt more natural than in the presence of gravity (there is mass, but we don't need to fight against weight)
  3. Create a prototype
  4. Create a narrative: sorting objects in a space station (a weightless environment). Have the environment set the scene and create user expectations
  5. Repeatable actions: get objects, sort, repeat
  6. Create a believable space – create points of interest. Set up the user initially (see video). Identify what is important. Have as many things as possible react to you

Martin also talked about taking advantage of the widgets in Unity. He also said it is important to differentiate the foreground from the background, and that music should be part of the active space and interact with actions.

As an aside, Aboard the Looking Glass won first place in the 2014 3D Jam

In the second presentation, Eric Greenbaum talked about how to design VR that does not make the user sick.

The key concept is presence so that the user forgets that technology is mediating the experience.

Some considerations are hardware-based: tracking with low latency and low persistence; 1k by 1k resolution per eye is sufficient; good optics.

But, there are also human physiological considerations:

We are evolutionarily primed to avoid experiences that made us nauseous in the past.

  1. Our bodies strive to match signals in the inner ear with what we see.
  2. Give users control of movement
  3. Avoid acceleration and deceleration – the trick is to use instantaneous acceleration
  4. Keep things on a level plane
  5. Ground users with fixed objects; a cockpit is one way
  6. Keep horizon steady
  7. Keep objects in a comfortable space – 6 to 10 feet is best
  8. Avoid things that fly at your eyes.
  9. Sound is important
  10. Design the environment – people are afraid of small enclosures and high places.
  11. Sense of scale is important
  12. Interaction design. Text is difficult in VR. Guiding light or sound is helpful

Different design considerations apply for mobile and for desktop.

 

posted in:  Natural User Interface, NYVR, psychology, VR

IBM Watson partnering people with #CognitiveComputing

Posted on November 17th, 2014

NUI central

11/17/2014 @WeWork, 69 Charlton Street, NY


David Melville @IBM Research spoke about #Watson and its potential uses as a collaborative tool. Watson is famous as a contestant on #Jeopardy. The basic processes it undertakes can be classified:

  1. multiple parsings of a question
  2. generate hypotheses
  3. collect & evaluate evidence – e.g., frequency of a word in a document, source reliability, context of the word within a document
  4. weight & combine evidence for final confidences (a toy sketch of this flow follows the list)
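
As a toy illustration of that flow (this is not IBM's implementation; the evidence types, weights, and candidate answers are invented), each hypothesis can be scored on several pieces of evidence and the scores combined into a single confidence:

    # Weighted combination of per-hypothesis evidence scores in [0, 1].
    def confidence(evidence, weights):
        total = sum(weights.values())
        return sum(weights[k] * evidence.get(k, 0.0) for k in weights) / total

    weights = {"term_frequency": 0.4, "source_reliability": 0.35, "context_match": 0.25}

    hypotheses = {
        "answer A": {"term_frequency": 0.9, "source_reliability": 0.8, "context_match": 0.7},
        "answer B": {"term_frequency": 0.4, "source_reliability": 0.8, "context_match": 0.3},
    }

    best = max(hypotheses, key=lambda h: confidence(hypotheses[h], weights))
    print(best, round(confidence(hypotheses[best], weights), 2))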

The engine is written in Java and the basic system is open source, but parts of the annotator are proprietary. More technical information is available for developers at http://bluemix.net.

David showed videos demonstrating 1. Fast responses (Jeopardy) 2. Data retrieval (document summary followed by search) 3. Simple learning 4. Speech recognition

The talk was not technical, with the emphasis on how the capabilities demonstrated in the videos could be used to create human-computer collaboration.

It is unclear what Watson contributes. It is fast and it can mine large databases. However, much of the expertise it could contribute lies in specialized areas where human experts offer both in-depth "book knowledge" and creative insights. Watson's current creativity is probably limited, and human "experts" are primarily experts because they know the topic area from experience, in detail beyond what is available in documents. Watson could be "an expert in a box," but it will probably require time before it can replace a human expert who not only knows the facts, but can integrate them with human and business aspirations and limitations. In these cases, speed helps, but is not the deciding factor in making good decisions.

Speed might be the main contribution of Watson. For instance, Watson's data retrieval could be used in high-speed trading or other activities that require super-human reaction times. Other uses could be in autonomous drone attacks, extraterrestrial exploration, or other activities requiring a breadth of knowledge whose success is highly dependent on speed and autonomy. In these cases applying rules quickly is key, and Watson can handle a large number of complicated rules.

The use of Watson may hinge on whether the task requires knowledge or wisdom and the speed at which we need to decide.

 

posted in:  applications, Natural User Interface

#Haptic Feedback for Natural User #Interfaces

Posted on July 22nd, 2014

(You’ve Got) The Magic Touch: Haptic Feedback for Natural User Interfaces

NUI Central

Katherine Kuchenbecker, Heather Culbertson @ Univ of Penn.

7/21/2014 WeWork Labs (69 Charlton St., New York)

Katherine Kuchenbecker and Heather Culbertson, both from the University of Pennsylvania, presented three projects from their research on tactile feedback and proprioceptive learning. Their research considers various ways that haptic feedback can improve our interactions with computers and other machines.


Virtual textures. Heather's research concentrates on how surfaces feel through a tool, such as a paintbrush, dental pick, etc. Using a stylus with accelerometers and a force meter, she measures the tactile sensations produced when exploring surfaces such as glass, tile, and cloth. These measurements can then be played back through an actuator (similar to a speaker, with a coil and a magnet) to reproduce the original sensations. Textures appear to be well modeled, but other aspects, such as slipperiness, are works in progress. She is also working on measuring and playing back tactile sensations in three dimensions.

Robotic surgery. Katherine presented two projects from her lab. The first adds tactile feedback when surgeons conduct robotic surgery, such as removing the prostate. Instead of operating directly on the patient, the surgeon remotely manipulates instruments through small incisions, primarily using sight as a guide. Katherine's research adds both tactile and kinesthetic feedback so surgeons can feel the tissue as well as see it. They surveyed surgeons and found that virtually all would welcome this additional feedback. Their research has shown that subjects moving blocks on pegs performed more slowly, but more accurately, with this additional feedback.


Motion guidance. Automating guided practice in injury rehab or skills improvement is best done with lots of feedback. The project looks at combining Kinect sensors and other motion sensors with haptic feedback. With this combination, one can perfect motions in sports, such as a golf swing, by receiving visual feedback along with a variety of other feedback such as pressure, vibration, etc.

 Katherine’s TED talk is available here.

posted in:  applications, Natural User Interface, UX