New York Tech Journal
Tech news from the Big Apple

How to build a #MixedReality experience for #Hololens

Posted on April 14th, 2017

#NYSoftwareEngineers, @NYSE

4/14/2017 @MicrosoftReactorAtGrandCentral, 335 Madison Ave, NY, 4th floor

Mike Pell and John gave a roadmap for generating #MixedReality content. They started with general rules for generating content and how these rules apply to building MR content.

  1. Know your audience –
    1. Role of emotion in design – we want to believe in what is shown in a hologram.
    2. Think situation – where am I? At home you are comfortable doing certain things, but in public there are different needs and different things you are comfortable doing
    3. Think spatially – different if you can walk around the object
    4. Think inclusive – widen your audience
  2. Know Your medium
    1. For now you look ridiculous wearing a VR headset – but maybe this eventually becomes like a welder’s shield, which you wear when you are doing something specialized
    2. Breakthrough experience – stagecraft – so onlookers can see what the HoloLens user is seeing
  3. Know Your palette

Interactive Story Design – a fast way to generate MR content

  1. Character
    1. Who is your “spect-actor” (normally someone who observes)? Have a sense of who the individual is for this moment – avoid blind spots, so pick a specific person.
    2. Who are your “interactors” – will change as a result of the interaction – can be objects, text, people
    3. This creates a story
  2. Location – design depends on where this occurs
  3. Journey – how does participant change

How to bring the idea to life: how to develop the script for the MR experience

3-step micro-sprints – 3 to 6 minute segments – so you don’t get attached to something that doesn’t work. Set a 1 to 2 minute time limit for each step.

  1. Parameters – limited resources help creative development
    1. Personify everything – even text has a POV, feelings, etc.
    2. 3 emotional responses – what is the emotional response of a chair when you sit in it?
      1. Positive
      2. Negative
      3. Neutral
    3. 3 conduits
      1. Language
      2. Facial expression – everything has a face, including interfaces and objects
  2. Playtest – do something with it
    1. 3 perspectives
      1. Participant
      2. Interactors – changes in personality over time
  3. PMI – evaluative process – write on index cards (not as a feedback session) so everyone shares their perspective. Then loop back to the parameters (step 1)
    1. Plus – this works
    2. Minus – this is weak
    3. Interesting – neither of the above: “this is interesting”

How to envision and go fast:

  1. Filming on location – randomly take pictures – look for things that speak to you as creating an interesting experience.
  2. Understand the experience – look at the people (i.e. people viewing art)
  3. Visualize it – put people into the scene (vector silhouette in different poses) put artwork into scene along with viewers.
  4. Build a prototype using Unity. Put on the Hololens and see how it feels

They then went through an example session in which a child is inside looking at a T-Rex on the MoMA outdoor patio. The first building block was getting three emotional responses for the T-Rex:

  1. Positive – joy looking at a potential meal: the child
  2. Negative – too bad the glass barrier is here
  3. Neutral – let me look around to see what is around me

To see where we should be going, look at what children want to do with the technology

posted in:  Animation, Art, UI, video    / leave comments:   No comments yet

#VideoStreaming, #webpack, #diagrams

Posted on January 18th, 2017


01/17/2017 @FirstMarkCapital, 100 Fifth Ave, NY 3rd floor

Tim Whidden, VP Engineering at 1stdibs: Webpack Before It Was Cool – Lessons Learned

Sarah Groff-Palermo, Designer and Developer: Label Goes Here: A Talk About Diagrams

Dave Yeu, VP Engineering at Livestream: A Primer to Video on the Web: Video Delivery & Its Challenges

Dave Yeu @livestream talked about some of the challenges of streaming large amounts of video and of livestreaming: petabytes of storage, IO, CPU, and latency (for live video).


  1. Long-lived connections – there are several solutions
    1. HLS (HTTP Live Streaming), which cuts video into small segments and uses HTTP as the delivery vehicle. Originally developed by Apple as a way to deliver video to the iPhone as its coverage moves from cell tower to cell tower. It uses the power of the HTTP protocol: a playlist of small chunks which are separate URLs – m3u8 files that point to the actual files.
      1. But there are challenges – if you need 3 chunks in your buffer, then (at 5 seconds per chunk) you have a 15-second delay. As you decrease the size of each chunk, the playlist gets longer, so you need to make more requests for the m3u8 file.
    2. DASH – segments follow a template which reduces index requests
    3. RTMP – persistent connections, extremely low latency, used by Facebook
  2. Authorization – viewers are authorized, but you don’t want them to rebroadcast (no key, so not DRM).
    1. Move authentication to the cache level – use Varnish.
    2. Add a token to the playlist; Varnish vets the token and serves the content => all things come through their API.
    3. But – you expand the scope of your app = cache + server.
  3. Geo-restrictions
    1. You could do this with IP address + restrictions, but in that case you need to put the geo-block behind the cache and server.
    2. Instead, the API generates a geo-block config. Varnish loads it into a memory map and checks each request.
    3. If there is a geo violation, then Varnish returns a modified URL, so the server can decide how to respond.
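The m3u8 mechanics described above are simple enough to sketch. The snippet below (segment names and durations are made up for illustration) parses a tiny playlist and reproduces the buffer-latency arithmetic: three buffered 5-second chunks give a 15-second delay.

```javascript
// Sketch of HLS playlist mechanics (segment names are hypothetical).
// An HLS playlist is just text: '#'-prefixed tags plus segment URLs.
const playlist = `#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:5
#EXTINF:5.0,
segment001.ts
#EXTINF:5.0,
segment002.ts
#EXTINF:5.0,
segment003.ts`;

// Extract the segment URLs: every non-empty line that is not a '#' tag.
function parseSegments(m3u8) {
  return m3u8
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !line.startsWith('#'));
}

// Latency from buffering: chunks held in the buffer × seconds per chunk.
function bufferLatencySeconds(chunksBuffered, chunkSeconds) {
  return chunksBuffered * chunkSeconds;
}

const segments = parseSegments(playlist);
console.log(segments);                   // three separate segment URLs
console.log(bufferLatencySeconds(3, 5)); // 15 — the 15-second delay above
```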


Tim Whidden @1stdibs, an online marketplace for curated goods – “ebay for rich people” – spoke about Webpack, a front-end module bundler. He described how modules increase the reusability of functions, and how Webpack performs other functions like code compression.
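No code was shown, but a minimal webpack configuration illustrates the bundling idea Tim described; the entry and output paths here are hypothetical, not 1stdibs’ actual setup.

```javascript
// webpack.config.js — a minimal sketch (paths are hypothetical).
const path = require('path');

module.exports = {
  entry: './src/index.js',     // the module graph is traversed from here
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',     // everything reachable from entry, in one file
  },
  mode: 'production',          // enables minification ("code compression")
};
```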


Finally, Sarah Groff-Palermo spoke about how diagrams help her clarify the code she has written and provide documentation for her and others in the future.

She described a classification of learning types, from sequential learners (who like tutorials) to global learners (who like to see the big picture first). Sarah showed several diagrams and pointed out how they help her get and keep the global picture. She especially likes the paradigm from Ben Shneiderman – overview first, zoom and filter, then details-on-demand.

For further ideas she recommended

  1. the book Going Forth – lots of diagrams
  2. Now You See It by Stephen Few
  3. Flowing Data – a blog by Nathan Yau
  4. Keynote is a good tool to use for diagrams

posted in:  applications, Code Driven NYC, video    / leave comments:   No comments yet

Harness the power of #Web #Audio

Posted on April 20th, 2016


04/20/2016 @TechStars, 1407 Broadway, NY


Titus Blair @Dolby demonstrated the importance of sound in the mood and usability of a web page. He then showed the audience how to incorporate higher quality audio into a web site.

He first showed a video of a beach scene. Different audio tracks changed the mood from excitement to mystery to romantic to suspenseful to tropical.

By sending a wav file to the Dolby development site, one creates a high-quality audio file in mp4 format which can be downloaded and played through selected browsers (currently including Edge and Safari).

Titus then showed two examples, a #video game and a frequency-spectrum display, and walked the audience through the code needed to play the audio files.

  1. A #Javascript file, dolby.min.js, needs to be sourced (available on github)
  2. Web code needs to test whether the browser can handle the Dolby Digital Plus file
  3. Parameters in the backgroundSound variable adjust the playback rate and other qualities
  4. To get the frequency spectrum, an AudioContext variable performs an FFT, which can be plotted
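Step 2 above can be sketched with a standard media feature test; dolby.min.js presumably wraps something similar. The helper name and the mock element below are illustrative, not Dolby’s actual API — in a browser you would pass a real `<audio>` element.

```javascript
// Feature-test whether a browser can play Dolby Digital Plus
// (codec "ec-3") before choosing the mp4 source.
// The element is injected so the check works on any object that
// exposes canPlayType, e.g. document.createElement('audio').
function supportsDolbyDigitalPlus(audioEl) {
  // canPlayType returns '', 'maybe', or 'probably'.
  const answer = audioEl.canPlayType('audio/mp4; codecs="ec-3"');
  return answer === 'probably' || answer === 'maybe';
}

// A mock element standing in for a real <audio> element (for Node):
const mockAudio = {
  canPlayType: (type) => (type.includes('ec-3') ? 'probably' : ''),
};
console.log(supportsDolbyDigitalPlus(mockAudio)); // true
```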

Finally, Titus illustrated our sensitivity to sound by playing the video “How to make someone sound like an idiot”.

Slides for this presentation are available on

posted in:  javaScript, NYC JS, Programming, UX, video    / leave comments:   No comments yet

Data Driven NYC: improving #CustomerService, #pro-bono data analysis, analyzing #video

Posted on May 27th, 2015


05/26/2015 @ Bloomberg, Lexington Ave, NY


Four speakers talked about their approaches to data:

The first speaker, David Luan @ Dextro described his company which uses deep learning techniques to summarize and search videos by categories and items appearing in the videos.  The goal is to create a real-time automated method for understanding how consumers view videos.

They describe the video by a salience graph showing the important themes in the video and a timeline of when concepts/items are displayed.

Analysis of video is complicated, as items are embedded in a context and information needs to be summarized at the correct level (not too low, such as noting that there are ice skates, seats, lights, etc., but at the level of understanding that this is a specific hockey game). They also aim to use motion cues to give context to the items and to segment the video into meaningful chunks.

They work with a taxonomy provided by the customer to create models based on the units wanted by the customer.

David talked about the challenges of speeding the computation using GPUs and how they eventually will incorporate metadata and the sound track.

The second speaker, Sameer Maskey @FuseMachines talked about how they use data science analysis to improve customer service.

He talked about the treasure trove of data generated in prior customer service interactions. These can be analyzed to improve the customer experience by

  1. Improving the ability of customers to find solutions using self service
  2. Empower customer service reps with tools that anticipate the flow of the conversation

Sameer mentioned several ways that this information can assist in these tasks:

  1. Expose information embedded in documents
  2. Consider what the user is looking at and predict the types of questions that the user will ask.
  3. Train customer service reps using previous conversations. New reps talk to the system and see how the system responds.
  4. On a call, the system automatically brings up documents that might be needed.

Three fundamental problems are important

  1. Data to score – ranks answers
  2. Data to classes/labels – predict answer type
  3. Data to cluster – cluster topics
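As a toy illustration of the first problem (data to score), one can rank candidate answers by a crude relevance score. Real systems use learned ranking models; this word-overlap scorer and the sample answers are only stand-ins.

```javascript
// Toy "data to score" example: rank candidate answers by how many
// words they share with the customer's question. (Illustrative only —
// production systems learn this scoring function from data.)
function overlapScore(question, answer) {
  const qWords = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  const aWords = answer.toLowerCase().split(/\W+/).filter(Boolean);
  return aWords.filter((w) => qWords.has(w)).length;
}

// Sort candidates so the highest-scoring answer comes first.
function rankAnswers(question, answers) {
  return [...answers].sort(
    (a, b) => overlapScore(question, b) - overlapScore(question, a)
  );
}

const ranked = rankAnswers('When is my next garbage collection?', [
  'Pay your bill online or by phone.',
  'Garbage collection days depend on your location within the city.',
]);
console.log(ranked[0]); // the garbage-collection answer ranks first
```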

They currently do not have the sophistication to ask for further clarification or start a dialog: a question such as “when is my next garbage collection?” should be answered with the question, “what is your location within the city?”

Jake Porway @DataKind spoke about his program to use data for the greater good.

DataKind brings pro bono data scientists to improve the understanding of data by non-profits. They have had 10,000 analysts working on 100’s of projects. Projects include:

  1. org – a Kickstarter for NYC public schools soliciting online donations. Applied Semantics3 to automate the taxonomy. Can determine which types of schools ask for which categories of goods/services.
  2. Crisis Text Line – teens text if they are in need – note that 5% of users take up 40% of all services. Created a predictive model of when someone will become a repeat texter so they can intervene more quickly.
  3. GiveDirectly – money to the poorest places in Kenya & Uganda. Compares thatch vs. iron roofs to determine which communities are the poorest – builds a map of the types of roofs in different communities by analyzing satellite imagery. Jake talked about the limitations of this method and how refining the specifications is part of the process.

Jake said they have recently set up a centralized project group that can initiate its own projects

The last speaker, Mahamoud El-Assir @Verizon, talked in very general terms about how Verizon leverages data analysis to improve customer experience. He talked about how information about the various channels and services can be used to better match advertising and advice to customer needs:

  1. Talking to customers – rep can consider the TV, Data and equipment usage.
  2. Supervisors coach their agents in real time – types of calls and the resolution on calls
  3. Shift to cross-channel analysis (from silos for particular products)

posted in:  Data Driven NYC, video    / leave comments:   No comments yet

Benton C. #Bainbridge, #video #artist

Posted on April 15th, 2015

Volumetric Society, Hardware Hack Lab

04/15/2015 @ThoughtWorks, 99 Madison, Ave, NY


Benton C. Bainbridge talked about his current projects and his development as an artist.

Benton’s father was an engineer at NASA and his mother was an artist, so he grew up playing with discarded NASA electronics hardware in his basement while exploring filmmaking on physical film. He became interested in collaborative filmmaking, which eventually led him to experiment with manipulating the images on cathode-ray-tube TVs.

He next talked about early explorations of TV images, including plugging an audio input into the video input jack. Other effects, many pioneered by Nam June Paik and others, included magnets to change the path of the electron beam and modifying the video controls to adjust the position, size, brightness of the image, etc. One of his favorite tools was the Rutt/Etra video synthesizer.

Benton next talked about more recent works. These include the RGBD toolkit, which manipulates the image based on inputs from the Kinect. He has used delays on TiVo playback as part of his portrait series. He also talked about an installation with two iPads used as controllers for visual filters and two screens. You can stand in front of one iPad manipulating the image of the person standing in front of the other iPad, while the other person manipulates an image of you.

The best sources for earlier materials are videos on youtube.

The best source for current material is his web site.

posted in:  Art, video, Volumetric Society    / leave comments:   2 comments

Trends in the use of video

Posted on June 18th, 2014

Kaltura Connect conference

6/17-18/2014, Time-Warner Center, New York

Among the many themes presented at the conference, two were prominent in many sessions:

  1. The video experience is increasingly a unified event across interconnected devices: phones, tablets, and TV. Video will also see the continued convergence of live vs. recorded and more links between social media and the traditional TV experience.
  2. Education has become one of the major users of video to disseminate content. Video is used to promote the university and its faculty. Uses in education range from remote learning, practice sessions, archived lectures, online quizzes, etc. Increasingly, video is becoming more interactive and is used as a teaching archive. The expansion of schools such as Southern New Hampshire University and Embry-Riddle Aeronautical University shows how much remote learning has grown.

The conference also had interesting presentations on how wearable technology and unmanned aerial vehicles (a.k.a. drones) will change our view of ourselves, the world, and the video experience.

Videos of sessions are posted here

posted in:  video    / leave comments:   No comments yet

I’ll come to your emotional rescue: connecting digital experiences to emotion

Posted on June 17th, 2014

NUI Central – NY

Nick Langeveld @ Affectiva

6/16/2014 @ WeWork Labs (69 Charlton Street, NY)

Nick described an unobtrusive approach to better understand consumer responses to products and product ads. Affectiva has software to detect human emotions expressed through the facial muscles. Facial images are analyzed in real time by extracting the movement and positions of key parts of the face (including the forehead, mouth, cheeks and nose). It currently measures 7 (or more) emotions, plus valence and expressiveness.

Affectiva is currently working with ad agencies to understand how people respond to advertisements, with projects including ad recall and purchase intent. Versus traditional methods, the balance is between being unobtrusive and monitoring a response in real time, versus the greater richness as consumers explain their views and the time delay between the initial experience and the time to verbalize it. The two approaches are reported to have similar levels of validity.

Affectiva has created an iOS SDK (and will shortly have an Android version) so developers can build apps to analyze smartphone camera images.

posted in:  applications, video    / leave comments:   No comments yet

Creating Product Information Videos that inform, inspire, engage and entertain

Posted on June 16th, 2014

Eric Paapanen @ Oracle

6/16/2014 @ Kaltura Connect – Marriott Essex House, NY

Excellent presentation on creating videos. Goals when creating a video:

  1. Informative
  2. Inspiring
  3. Engaging
  4. Entertaining

To create an engaging video

  1. Create engaging topic. Customer feedback, forums, online survey, polls
  2. Tell a story – how the viewer sees himself in the story – beginning, middle, end; problem statement and solution;…
  3. Careful planning – continuity; buy-in from stakeholders; storyboard; script or talking points
  4. Think in shots – keep it moving visually; keep shots short (< 5 seconds); pan-n-zoom; callouts; highlights
  5. Composing shots – rule of thirds – divide the frame into a 3×3 grid
  6. Callouts – focus attention; keep text short and display for a short period of time (to avoid distractions)
  7. Animations – an alternative to slides; illustrate concepts. Camtasia for moving things around the screen; also Adobe creative tools for more advanced work
  8. Voice-overs – practice, practice, practice; LSEDIFY (let someone else do it for you – hire a pro); foreshadowing (e.g. a knock on the door, then cut to another scene)

Frequent errors when producing videos  –

  1. Scenes too static
  2. Voiceovers that do not sync with the display

posted in:  applications, UX, video    / leave comments:   No comments yet


Posted on May 30th, 2014


Meeting at Vimeo on May 29, 2014 at 555 W18th Street, NY

Design for Vimeo has gone from a small version of the browser site to a mobile version emphasizing video play and networking. Editing was removed, the size of the thumbnails was increased, and text was deemphasized. Cameo does real-time video editing using the cloud. Key decisions are 6-second segments, prepackaged music, selection of themes, and limited scene editing. Both are only on iOS.

A good view on how mobile interfaces are improved by focusing on the key deliverable and simplifying the graphics and the interface.



posted in:  applications, video    / leave comments:   No comments yet