New York Tech Journal
Tech news from the Big Apple

#Neurobiology of #ComputerInterfaces

Posted on March 18th, 2017

#CodesAndModes

3/17/2017 @Hunter College, 68th & Lexington Ave, New York, Lang Theater

Three talks on the neurobiology of computer interfaces were given by

Ellen Pearlman – Utopic or Dystopic

Ruben Van de Ven – Emotional Hero/We Know How You Feel

Greg Garvey – Split Brain

Ellen Pearlman talked about brain-computer interfaces. The most widely available commercial devices are from Emotiv, Muse, and OpenBCI. The devices need smoothing and feature-extraction algorithms to find signal in the noise. Devices take 8 seconds to calibrate and 150 milliseconds for a signal to be detected and transmitted. One of the key signals is the P300 MERMER, which indicates that you recognize someone or something.
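
As a rough illustration of the smoothing and feature extraction involved, the sketch below band-pass filters a raw EEG trace and averages stimulus-locked epochs to score a P300-like peak. The sampling rate, filter band, and time windows are assumptions for illustration, not any vendor's actual pipeline.

```python
# Illustrative sketch (not any vendor's actual pipeline): band-pass filter raw EEG,
# then average stimulus-locked epochs and look for a positive deflection near 300 ms,
# the rough signature of the P300 response mentioned in the talk.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate in Hz

def bandpass(signal, low=0.5, high=30.0, fs=FS, order=4):
    """Keep the 0.5-30 Hz band where most event-related potential energy lives."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def p300_score(eeg, stim_samples, fs=FS):
    """Average 0-600 ms epochs after each stimulus and return the mean
    amplitude in the 250-400 ms window (a crude P300 feature)."""
    window = int(0.6 * fs)
    epochs = [eeg[s:s + window] for s in stim_samples if s + window <= len(eeg)]
    erp = np.mean(epochs, axis=0)                    # stimulus-locked average
    lo, hi = int(0.25 * fs), int(0.40 * fs)          # 250-400 ms post-stimulus
    return erp[lo:hi].mean()

# Usage: compare the score for "recognized" vs. "unrecognized" stimuli.
raw = np.random.randn(FS * 60)                       # stand-in for one minute of EEG
clean = bandpass(raw)
score = p300_score(clean, stim_samples=[FS * i for i in range(1, 50)])
```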

She talked about the large recent increase in brain-research funding from DARPA, NIH, and NSF, with the IARPA (Intelligence Advanced Research Projects Activity) program soliciting proposals.

Ellen next talked about

  1. Semantic brain maps, which highlight the locations associated with particular stimuli
  2. Optogenetics, which can implant false memories that can be turned on and off with blue and orange light
  3. Cortical modems, which can transmit images directly to the brain, bypassing the senses (Cubic Corp is working on this)
  4. Brain data stored in the cloud (Cloudbrain and Qusp have been uploading brain data)

Finally, she showed her performance piece, Noor, a brain opera, in which a performer’s brain waves trigger visual patterns as the performer interacts with the audience.

**

Next, Ruben Van de Ven talked about the challenges faced by machine-learning methods that claim to determine one’s emotions from facial images. Applications using this technology include ‘Emotion Hero’, a game you can download from the Google Play Store, and Hire-vue, which evaluates people during job interviews.

Paul Ekman developed the ‘Facial Action Coding System’, which is the classification scheme these methods use. But, Ruben notes, context affects how we interpret an expression. Also, the validity of the face classification relies on subjective human coding done in the 19th century at a French asylum. Both place the science behind these methods on shaky ground.
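
For a sense of how a FACS-style scheme is used downstream, here is a toy lookup from detected action units (AUs) to emotion labels. The simplified AU table and boolean matching are illustrative assumptions; real systems estimate AU intensities from images and feed them to statistical classifiers.

```python
# Toy illustration of a FACS-style lookup: map detected facial action units (AUs)
# to emotion labels. The table below is a simplified subset for illustration only.
EMOTION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def classify(detected_aus: set[int]) -> str:
    """Return the first emotion whose defining AUs are all present."""
    for emotion, required in EMOTION_RULES.items():
        if required <= detected_aus:
            return emotion
    return "neutral / unknown"

print(classify({6, 12, 25}))  # -> "happiness"
```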

In addition, the software is often marketed both as objective and as a tool for training individuals to mislead the software. But how can the software work if one can learn to manipulate it?

**

Greg Garvey talked about how his art installations take advantage of the modularized brain, which is split into left and right processing. His installations show different images to the left and right eyes (and therefore to the left and right brain hemispheres) to create internal cognitive conflict over the image being viewed.

posted in:  Art, psychology, UI    / leave comments:   No comments yet

Hardwired: product #design and delivering #magic

Posted on June 11th, 2016

#HardwiredNYC

06/07/2016 @ WeWork, 115 West 18th Street, NY, 4th floor


New Lab and Techstars talked briefly before the four speakers:

In the first presentation, Bob Coyne @Wordseye talked about his utility that takes a text description of a scene and creates an image matching that description. This allows users to create 3-D images without complicated #3-d graphics programs.

They parse sentences to create a semantic map which can include commands to place items, change the lighting, reorient objects, etc. They see uses in education, gaming, and image search.
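
A minimal sketch of what parsing text into scene commands could look like: the grammar, command format, and function names below are invented for illustration and are not WordsEye's actual pipeline.

```python
# Minimal sketch (not WordsEye's actual pipeline) of turning a sentence into
# scene commands: a tiny pattern matcher that emits "place" and "lighting"
# directives for a hypothetical renderer to consume.
import re

def parse_scene(text: str) -> list[dict]:
    """Convert simple sentences into a list of scene commands."""
    commands = []
    for sentence in re.split(r"[.;]\s*", text.lower()):
        m = re.match(r"(?:a|the)\s+(\w+) is (on|under|next to) (?:a|the)\s+(\w+)", sentence)
        if m:
            obj, relation, anchor = m.groups()
            commands.append({"op": "place", "object": obj,
                             "relation": relation, "anchor": anchor})
        elif "sunset" in sentence or "dim" in sentence:
            commands.append({"op": "lighting", "preset": "warm_low"})
    return commands

print(parse_scene("A cat is on the table. The lighting is dim."))
# [{'op': 'place', 'object': 'cat', 'relation': 'on', 'anchor': 'table'},
#  {'op': 'lighting', 'preset': 'warm_low'}]
```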

[Graphics are currently primitive and the manipulations are rough, but the product is only 7 months old. It has promise for creating avatars and scenes for game prototypes. Text lacks the subtlety of gestures, so text may need to be supplemented by gestures or other inputs.]

In the second presentation, Chris Allen @ iDevices – developers of connected-home products and software – talked about the evolution of the company from its initial product in 2009, a connected grill.

Since then they have raised $20 million, were asked by Apple to develop products for HomeKit, and currently market 7 HomeKit-enabled products.

Experiences he communicated:

  1. Do your own research (don’t rely on conventional wisdom): despite being told that $99 was too high a price, they discovered that reducing the price to $75 did not increase sales.
  2. Resist pivoting away from your vision, especially when you have no intellectual-property advantage: a waterproof case for phones failed.
  3. Create a great work environment and give your workers equity
  4. They build products that are compatible across platforms, but concentrate on just the three main platforms: Siri, Google, Amazon.

Next, Josh Clark @BigMedium talked about his vision of the future of interfaces: they will leap off the screen, combining #speech and #gestures. They will be as magical as the devices in the world of Harry Potter. Unlike Google Glass, which was always an engineering project, we should be asking how we can make any object (even a coffee cup) do more: design for the thing’s essential ‘thingness’.

Technology should be invisible, but magical:

  1. You can stand in front of a mirror with memory and see how you look in a different color dress, replay a video of what you look like when you turn around, or do a side-by-side comparison with a previously worn dress.
  2. Asthmapolis – when you have an asthma attack, you tap an app. Over time you can see, across individuals, the locations where attacks occur.
  3. A hackathon app using the Kinect in which one gestures to grab an image off a video so a still image from that moment appears on the phone.

It’s a challenge of imagination.

If the magic fails, we need to make sure the analogue device still works.

[In some cases, magic may not be enough. For instance, Asthmapolis pivoted away from asthma alone and now concentrates on a broader range of symptoms.]

In the last presentation, Martin Brioen @Pepsi talked about how his design team uses #prototyping to lead the development of new ideas.

Different groups within Pepsi have different perspectives and different priorities, so each views ideas differently; to get a consensus, they all need to interact with the new product so they can see it, touch it, and so on.

At each phase of development you use different tools, concentrating on the look of it, the feel of it, the functionality, etc. At each stage, people need to interact with it to test it out. Don’t wait until you have a finished product. Don’t skip steps. Consider the full journey of the consumer.

Employ the least expensive way to try it out.

They are not selling products, they are selling experiences: they create a test kitchen for the road.

posted in:  Apple, applications, hardware, Hardwired NYC, Internet of Things, psychology, startup    / leave comments:   No comments yet

Teaching computers to be more creative than humans through #games

Posted on April 19th, 2016

#ACM New York City

04/19/2016 @NYU Courant Institute, 251 Mercer Street, NY


Julian Togelius @NYU spoke about #AI, #games, and #creativity. He talked about how game playing has been part of the development of AI and how AI can change the creation of games for humans.

Julian first talked about how algos play board games better than the best humans, starting with #Chess and, finally, #Go in 2016. But he feels that board games are only a minute part of the universe of games played by humans. He explained how programs have tackled three video games:

  1. #Starcraft
  2. #SuperMario
  3. Car racing

Of the three, Starcraft appears to offer the biggest challenge for computers due to the size and complexity of the game-playing universe. The first level of SuperMario can be easily solved with a simple algorithm, but higher levels require more sophistication to get around overhangs. Car racing is simple when there is only a single car, but racing against competitors requires an understanding of their strategies. However, the code that successfully solves one of these games does not immediately generalize to solutions for the other games.

He next opined that intelligence is more than solving specific problems; it implies the ability to solve a wide range of problems. This can be summarized by the Legg and Hutter formula, which sums an agent’s skill across all games, weighted by each game’s complexity.
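
Roughly, the Legg and Hutter measure (my paraphrase of the published definition, not a slide from the talk) scores an agent by its expected performance across all computable environments, with simpler environments weighted more heavily:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Here π is the agent, E is the set of computable environments (games), K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected reward the agent earns in μ; the weight 2^{-K(μ)} means simpler games count for more.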

In competitions of algos across a variety of games, Monte Carlo Tree Search (a statistical tree-search algorithm that uses random playouts to evaluate moves) appears to do best.
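
For reference, a bare-bones sketch of the UCT flavor of Monte Carlo Tree Search on a toy game: the game, constants, and structure are my own illustrative choices, not any competition entrant's code.

```python
# Bare-bones UCT Monte Carlo Tree Search over a toy take-1-or-2-stones game
# (whoever takes the last stone wins). A generic sketch of the algorithm, for illustration.
import math
import random

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player      # state: stones left, player to move
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.stones and m not in tried]

    def uct_child(self, c=1.4):
        # Upper Confidence Bound applied to trees: exploitation + exploration.
        return max(self.children, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    """Random playout to the end of the game; returns the winning player."""
    while True:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        if stones == 0:
            return player          # the player who just moved took the last stone
        player = 1 - player

def mcts(stones, player, iterations=2000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes.
        while not node.untried_moves() and node.children:
            node = node.uct_child()
        # 2. Expansion: add one unexplored child, if the node is not terminal.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node.children.append(Node(node.stones - m, 1 - node.player, node, m))
            node = node.children[-1]
        # 3. Simulation: random playout from the new state.
        winner = 1 - node.player if node.stones == 0 else rollout(node.stones, node.player)
        # 4. Backpropagation: credit the win to the player who moved into each node.
        while node is not None:
            node.visits += 1
            node.wins += (winner != node.player)
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move  # most-visited move

print(mcts(stones=10, player=0))   # suggested first move for player 0
```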

Julian next talked about how AI can be used to create better games for humans: PCG (Procedural Generated Content)

  1. Save development time
  2. Non-human creativity – most humans just copy other humans
  3. Create endless games
  4. Create player-adaptive games
  5. Study game design by formalizing it

He talked about using evolution to search for good content using

  1. Combinatorial creativity = combine lots of ideas to search a space
  2. Transformational creativity = change the problem definition to come up with new ideas

He proposed a collaboration of humans and algos. One tool for this is the Ludi game description language developed by Cameron Browne. Using the descriptions of many games, a genetic algorithm can recombine rules to create new games, some of which are interesting to humans. The game #Yavalath was created using this process. He also showed pictures of a collaborative tool for creating versions of the Cut-The-Rope game, in which a human places objects in the space and the algo solves it.
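
As a toy sketch of the evolutionary recombination idea, the snippet below crosses over and mutates small rule sets under an invented fitness function standing in for "interesting to humans". The rule vocabulary and fitness are assumptions for illustration, not Browne's Ludi implementation.

```python
# Toy sketch of evolving game rule sets by recombination, in the spirit of the
# Ludi work described above (the rule vocabulary and fitness are invented).
import random

RULE_POOL = ["hex_board", "square_board", "n_in_a_row:3", "n_in_a_row:4",
             "forced_capture", "no_adjacent_own", "swap_opening"]

def random_game():
    return random.sample(RULE_POOL, k=3)

def crossover(a, b):
    """Combine two parents' rules, keeping the child the same size."""
    return random.sample(list(set(a) | set(b)), k=3)

def mutate(rules, rate=0.2):
    return [random.choice(RULE_POOL) if random.random() < rate else r for r in rules]

def fitness(rules):
    """Stand-in for 'interestingness' (real systems use self-play statistics
    such as game length, drawishness, and lead changes)."""
    return len(set(rules)) + ("swap_opening" in rules)

def evolve(generations=50, pop_size=20):
    population = [random_game() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```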

Other research observes humans playing specific games in order to develop an algorithm that predicts which aspects of a game create user interest, and whether individuals with different skill levels will find a game (or level of a game) interesting.

posted in:  ACM, AI, psychology    / leave comments:   No comments yet

What you get from unmoderated and moderated #testing of #prototypes

Posted on October 22nd, 2015

Jersey City Technology Startups

10/22/2015 @Ishi Systems,  185 Hudson St, Plaza 5, STE 1400, Jersey City , NJ


Steven Cohen @Validately spoke about the goals and advantages of prototype testing.

He started by reviewing his history in startups: one success and one failure. After these experiences he asked: why does innovation fail?

  1. False positives – it is estimated that 50% of a typical product’s features are not used (see http://versionone.com)

He then asked what you can learn from users:

  1. UI usability – can user complete a task by clicking on the appropriate links? – the inventor is too close to the product to know this
  2. Real life usability – you get a lot of false positives here. “It’s not a big enough problem.”
    1. Are there blockers to real-life use, such as privacy/security issues?
    2. Better alternatives – something I’m using now. What is it like?
    3. Does it solve a need? The need has to be big enough to overcome inertia. You need to get to “no” to show you know the issues. For instance, introduce a price and ask if they would buy it (time, reputation, money).

Steven then presented two ways to test prototypes

  1. Unmoderated – asynchronous testing as the user explores the functionality – good for UI usability
    1. Less work, subjects pace themselves and give verbal/written feedback
    2. Can’t learn real life issues
  2. Moderated – live test interacting with the user and asking for opinions – better for real life usability
    1. Deep user learning – usability
    2. More time commitment

Steven did a simulated interview to show how the moderator would probe the user about his/her actual usage of the product. One of the most important aspects is to get the user to make decisions on feature tradeoffs. The main tradeoff is whether a feature is worth a specific amount of money and, if not, why not and whether there is a lower price point at which it would be.

He wants to put up roadblocks and see how people react – e.g. put up a page and ask for a credit card (even if you don’t keep the number) – to see how much commitment there is. This is an attempt to get the feedback that users might not want to give you, since they do not want to hurt your feelings or be confrontational.

Other observations were

  1. If you need to pay your own existing customers to participate in a test, then the problem is not that important. Power users will probably always test for free.
  2. Validately will start video-recording the user, but does not believe eye tracking adds much, since mouse movements correlate highly with eye movements
  3. For unmoderated tests, if 5 of 6 users can do it, it’s probably adequate
  4. For moderated tests, start with 6 to 10 people. If there is a consensus, you can get by with fewer. If there is no consensus, then your persona (a straw-man profile of your target user) is probably not well defined.
  5. Once you have made a change, go back to the initial set of users and see if it fixes the problem – did we understand the comments?

The bottom line is: don’t build stuff that users don’t need.

posted in:  Jersey City Technology Startups, psychology, startup, UI, UX    / leave comments:   No comments yet

NYVR: designing a #VR app and making the VR user comfortable with the experience

Posted on September 18th, 2015

New York Virtual Reality

09/17/2015 @ Microsoft,11 Times Square, NY


Martin Schubert talked about his award winning entry in the 2014 3d Jam and Eric Greenbaum talked about making VR pleasant for humans.

After a brief introduction to the technology of the #LeapMotion (infrared LEDs and cameras that produce a black-and-white image from which one’s hand and finger positions and gestures are determined), Anthony introduced Martin Schubert, who created the VR program Weightless (youtube video) using Blender, Unity, and Playmaker. Martin described his process to create the app in 6 steps:

  1. Identify the strengths of VR – 3d depth and sense of scale; easy to look around; good spatial awareness; sense of depth in a mid range around 2 meters
  2. Identify the strengths of the Leap Motion – hand motions are natural 3d inputs; display of hands creates body presence; weak at precision pointing (binary inputs); fingertip interactions work well, but there is no haptic feedback -> as a result, moving objects in a weightless environment was more natural than in the presence of gravity (there is mass, but we don’t need to fight against weight)
  3. Create prototype
  4. Create a narrative. Sorting objects in a space station (weightless environment). Have environment set the scene and create user expectations
  5. Repeatable actions. Get objects, sort, repeat
  6. Create a believable space – create points of interest. Set up the user initially (see video). Need to identify what is important. Have as many things as possible react to you

Martin also talked about taking advantage of the widgets in Unity. He also said that it is important to differentiate the foreground from the background, and that music should be part of the active space and interact with actions.

As an aside, Aboard the Looking Glass won first place in the 2014 3D Jam.

In the second presentation, Eric Greenbaum talked about considerations when making VR that does not make the user sick.

The key concept is presence so that the user forgets that technology is mediating the experience.

Some considerations are based on hardware: tracking with low latency and low persistence; 1k by 1k per eye is sufficient resolution; good optics.

But, there are also human physiological considerations:

We are evolutionarily primed to avoid experiences that made us nauseous in the past.

  1. Our bodies strive to match signals in the inner ear with what we see.
  2. Give users control of movement
  3. Avoid acceleration and deceleration – the trick is to use instantaneous acceleration
  4. Keep things on a level plane
  5. Ground users with fixed objects – a cockpit is one way
  6. Keep horizon steady
  7. Keep objects in a comfortable space – 6 to 10 feet is best
  8. Avoid things that fly at your eyes.
  9. Sound is important
  10. Design environment – People are afraid of small enclosures, high places.
  11. Sense of scale is important
  12. Interaction design. Text is difficult in VR. Guiding light or sound is helpful

There are different design considerations for mobile and for desktop.

 

posted in:  Natural User Interface, NYVR, psychology, VR    / leave comments:   No comments yet

The #Psychology of #Savings and Personal #Finance with Qapital

Posted on April 13th, 2015

ActionDesignNYC

04/13/2015 @TurnToTech, 184 Fifth Ave, 4th floor, NY

20150413_193934[1]

Jane Ruffino – head of marketing @Qapital – talked about an app to help “millennial” savers save money and attain their goals. Qapital was started in 2012 and launched a product in Sweden in 2013. In March they launched an Apple app for the U.S. market, with an Android app coming later this year. The app is paired with a debit card from one of 5 banks. The goal is to provide encouragement to save for one’s goals, using automatic triggers and feedback on progress toward goals such as money for a vacation or a major purchase.
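
As a toy illustration of what an “automatic trigger” could look like, here is a hypothetical round-up rule in Python; the rule, amounts, and function names are my own assumptions, not a description of Qapital’s actual rule engine.

```python
# Toy illustration of an automatic savings trigger: round each debit-card
# purchase up to the next whole dollar and move the difference toward a goal.
# This is an illustrative assumption, not Qapital's actual rule engine.
import math

def round_up_transfer(purchase_amount: float) -> float:
    """Amount swept into savings for one purchase."""
    return round(math.ceil(purchase_amount) - purchase_amount, 2)

purchases = [4.35, 12.80, 7.00, 3.15]
saved = sum(round_up_transfer(p) for p in purchases)
print(f"Saved ${saved:.2f} toward the goal")   # 0.65 + 0.20 + 0.00 + 0.85 = 1.70
```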

Jane emphasized findings from focus groups targeting two audiences.

Age 18-24 – many see cash flow as savings

Age 25-40 – many see cash flow as reserves, but also are saving for bigger-ticket items.

In both audiences, the focus groups said

  1. money is boring, but pursuit of goals is important
  2. they want more from their money – but there is no education on how to invest their savings
  3. they don’t identify as ‘money people’ even if they are good savers
  4. they have anxiety about debt, stability and security
  5. shame defines their relationship to money – about having no money or not knowing what to do with it. This also appears when they consider whether they deserve the vacation they want to take.

Given these findings, she recommends that the following be considered when designing the app:

  1. Make money less boring – create a sense of purpose: what do you want this year? Take care of needs and wants
  2. Let them be in charge – the reinforcement must match their interests and goals
  3. Give them credit – people know more about finance than they give themselves credit for (money management is often associated with their mother)
  4. Validate their actions – e.g. gamify – save together to cheer each other on.
  5. Help them celebrate every win.

Making the goal concrete is essential to both give focus and create a tangible reward. Short- and intermediate-term goals are the emphasis. Creating the rules for saving should be part of the entertainment / reward for saving.

One challenge will be to keep users interested in the app to avoid the rapid drop-off in usage experienced by some fitness and dieting apps (see notes: Brian Cugelman from the previous meeting of ActionDesignNYC).

Further research on millennials’ money-views might be illuminating as we move from a zero-interest-rate environment (savings are a safe way to store money) to a higher-interest-rate environment (one can earn interest that will compound over time).

Jane’s emphasis was on savings behavior, but this might be the gateway to education on investing. I would not be surprised to see a relationship between savings behavior/rewards and risk tolerance/aversion as an investor (for a technical discussion of risk aversion, download the paper on my research blog post).

posted in:  ActionDesignNYC, applications, finance, psychology    / leave comments:   No comments yet