New York Tech Journal
Tech news from the Big Apple

How to build a #MixedReality experience for #Hololens

Posted on April 14th, 2017

#NYSoftwareEngineers, @NYSE

4/14/2017 @MicrosoftReactorAtGrandCentral, 335 Madison Ave, NY, 4th floor

Mike Pell and John gave a roadmap for generating #MixedReality content. They started with general rules for creating content and then showed how these rules apply to building MR experiences.

  1. Know your audience
    1. Role of emotion in design – we want to believe in what is shown in a hologram.
    2. Think situation – where am I? At home you are comfortable doing certain things, but in public you have different needs and are comfortable with different things
    3. Think spatially – different if you can walk around the object
    4. Think inclusive – widen your audience
  2. Know Your medium
    1. For now you look ridiculous wearing a VR headset – but maybe this eventually becomes like a welder’s shield that you wear when doing something specialized
    2. Breakthrough experience – stagecraft – so others can see what the HoloLens user is seeing
  3. Know Your palette

Interactive Story Design – a fast way to generate MR content

  1. Character
    1. Who is your “spect-actor” (normally someone who observes)? Have a sense of who the individual is for this moment; avoid blind spots by picking a specific person.
    2. Who are your “interactors” – they will change as a result of the interaction and can be objects, text, or people
    3. This creates a story
  2. Location – design depends on where this occurs
  3. Journey – how does participant change

How to bring the idea to life: how to develop the script for the MR experience

3-step micro sprints – 3 to 6 minute segments – so you don’t get attached to something that doesn’t work. Set a 1 to 2 minute time limit for each step.

  1. Parameters – limited resources help creative development
    1. Personify everything – everything, including text, has a POV, feelings, etc.
    2. 3 emotional responses – what is the emotional response of a chair when you sit in it?
      1. Positive
      2. Negative
      3. Neutral
    3. 3 conduits
      1. Language
      2. Facial expression – everything has a face, including interfaces and objects
  2. Playtest – do something with it
    1. 3 perspectives
      1. Participant
      2. Interactors – changes in personality over time
  3. PMI – evaluative process – write on index cards (not as a feedback session) so everyone shares their perspective. Then loop back to the parameters (step 1)
    1. Plus – this is strong
    2. Minus – this is weak
    3. Interesting – neither of the above; “this is interesting”

How to envision and go fast:

  1. Filming on location – randomly take pictures – look for things that speak to you as creating an interesting experience.
  2. Understand the experience – look at the people (i.e. people viewing art)
  3. Visualize it – put people into the scene (vector silhouettes in different poses) and place the artwork into the scene along with the viewers.
  4. Build a prototype using Unity. Put on the HoloLens and see how it feels.

They then went through an example session in which a child inside the museum looks at a T-Rex on the MoMA outdoor patio. The first building block was getting three emotional responses for the T-Rex:

  1. Positive – joy looking at a potential meal: the child
  2. Negative – too bad the glass barrier is here
  3. Neutral – let me look around to see what is around me

To see where we should be going, look at what children want to do with the technology.

posted in:  Animation, Art, UI, video

CodeDrivenNYC: #Web #Annotation, #NeuralNets #DeepLearning, #WebGL #Anatomy

Posted on December 17th, 2015

#CodeDrivenNYC

12/16/2015 @FirstMarkCap, 100 5th Ave, NY

Three speakers talked about challenging programming problems and how they solved them.


Matt Brown @Genius talked about how they implemented their product, which allows users to annotate text on web pages. The challenge is relocating annotated text after the page has been modified: the fragment’s position in the DOM may have shifted and the fragment itself may have been edited, yet the annotation should still point to the same part of the text.

To restore the annotation they use fuzzy matching in the following steps

  1. Identify regions that may hold the text
  2. Conduct a fuzzy search to find possible starting and ending points for the matching text
  3. Highlight the text that is the closest match from the candidates in the fuzzy search

The user highlights text in the original web page and the program stores the highlighted fragment along with text showing the context both before and after the fragment.
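
A capture step along these lines could be built on the standard window.getSelection() API. The sketch below is a minimal illustration; the field names (fragment, before, after) are assumptions for this example, not Genius’s actual schema.

    // Minimal sketch: capture the selected text plus surrounding context.
    // Field names are illustrative, not Genius's schema.
    function captureAnnotation(contextLength) {
      var selection = window.getSelection();
      if (!selection.rangeCount || selection.isCollapsed) return null;

      var fragment = selection.toString();
      var pageText = document.body.textContent;
      // Simplified: a real implementation would track the selection range offsets
      // rather than searching for the first occurrence of the fragment.
      var start = pageText.indexOf(fragment);
      if (start === -1) return null;

      return {
        fragment: fragment,
        before: pageText.slice(Math.max(0, start - contextLength), start),
        after: pageText.slice(start + fragment.length, start + fragment.length + contextLength)
      };
    }

    var stored = captureAnnotation(32);  // e.g. keep 32 characters of context on each side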

When the user loads the web page, the following steps are performed to locate the fragment (a simplified code sketch follows the list)

  1. Use jQuery’s $('body').text() to extract all of the text from the page
  2. Build a list of infrequently used words and locate these words in the web site text
  3. Use the JavaScript implementation of Google’s diff-match-patch library to find the fragment in the text (the library uses the Bitap algorithm to find the best fuzzy match). The algorithm returns starting locations for text matching the fragment. If the fragment is longer than 64 characters, only the first 64 characters are used. Searches are run using the before-context plus the fragment to determine the general location in the text, and using the fragment alone to determine possible starting points of the fragment in the text.
  4. Reverse the order of characters in both the fragment and the text. Repeat the previous step to determine possible ending points of the fragment in the text.
  5. Extract candidate locations for the fragment and pick the location which has the minimum Levenshtein distance (fewest character substitutions/inserts/removals).
  6. Highlight the text in that location. Repeat this process for each stored fragment.
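
A simplified sketch of steps 3–5 using the diff-match-patch JavaScript API (match_main, diff_main, diff_levenshtein). For brevity it evaluates a single candidate rather than a full candidate set, and the variable names and the stored-annotation shape are illustrative, not Genius’s production code.

    // Simplified relocation sketch using Google's diff-match-patch
    // (https://github.com/google/diff-match-patch); names are illustrative.
    var dmp = new diff_match_patch();
    dmp.Match_Threshold = 0.5;            // 0 = exact match only, 1 = match anything

    function relocate(stored, pageText) {
      var maxLen = dmp.Match_MaxBits;     // Bitap pattern-length limit
      var reverse = function (s) { return s.split('').reverse().join(''); };

      // Step 3: rough region from the before-context, then a starting point for the fragment.
      var region = dmp.match_main(pageText, stored.before.slice(-maxLen), 0);
      var start = dmp.match_main(pageText, stored.fragment.slice(0, maxLen), Math.max(0, region));
      if (start === -1) return null;

      // Step 4: reverse both strings to find a candidate ending point.
      var expectedEnd = pageText.length - start - stored.fragment.length;
      var revStart = dmp.match_main(reverse(pageText),
                                    reverse(stored.fragment).slice(0, maxLen),
                                    Math.max(0, expectedEnd));
      if (revStart === -1) return null;
      var end = pageText.length - revStart;

      // Step 5: score the candidate by Levenshtein distance to the stored fragment.
      var candidate = pageText.slice(start, end);
      var distance = dmp.diff_levenshtein(dmp.diff_main(stored.fragment, candidate));
      return { start: start, end: end, text: candidate, distance: distance };
    }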

Next, Peter Brodsky @HyperScience spoke about how his company is making the training of neural nets more efficient. HyperScience trains neural nets (containing up to 6 layers) on a variety of tasks (e.g. looking for abnormal employee behavior, reassembling shredded documents, eliminating porn from web sites).

The problems they want to overcome are

  1. Local minimum solutions are obtained instead of a global minimum
  2. Expensive to train
  3. Poor reuse

To overcome these problems they do the following. Once the nets are trained, they examine the nets and extract subnets that have similar patterns of weights. They test whether these subnets are performing common functions by swapping subnets across neural networks. If the performance does not change then they assume that the subnets are performing a common task. Over time they create libraries of subnets.

They can then describe the internal structure of the net in terms of the functions of subnets instead of in terms of nodes. This improves their ability to understand the processing within the net.
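
Conceptually, the swap test might look like the sketch below. This is not HyperScience’s code; the network objects and their getWeights/setWeights/evaluate methods are assumed interfaces used only to illustrate the idea.

    // Conceptual sketch of the subnet-swap test (assumed interfaces, illustration only).
    function subnetsAreInterchangeable(netA, netB, subnetA, subnetB, validationSet, tolerance) {
      var baselineA = netA.evaluate(validationSet);
      var baselineB = netB.evaluate(validationSet);

      // Swap the candidate subnets (blocks of weights with similar patterns).
      var savedWeights = netA.getWeights(subnetA);
      netA.setWeights(subnetA, netB.getWeights(subnetB));
      netB.setWeights(subnetB, savedWeights);

      var swappedA = netA.evaluate(validationSet);
      var swappedB = netB.evaluate(validationSet);

      // If accuracy is essentially unchanged in both networks, assume the subnets
      // compute the same function and add them to the library of reusable subnets.
      return Math.abs(swappedA - baselineA) < tolerance &&
             Math.abs(swappedB - baselineB) < tolerance;
    }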

This has several advantages.

  1. They can create larger and more complex networks
  2. They can start with a weight vector and guide the net away from local minima and toward the global minimum.
  3. Their networks should learn faster since the standard building blocks are already in place and do not need to be reinvented.

In the third presentation, Tarek Sherif @BioDigital talked about how BioDigital is implementing anatomical content for the web. The challenge is to create 3d, interactive pictures showing human bodies in motion or in sections, layers, etc.

BioDigital uses WebGL to render their content in HTML/CSS/JS on all browsers and mobile devices. Due to the computational load, optimizing memory management and JavaScript code is important.

The content can be static, animated, or a series of animations. The challenge is to keep file sizes down for quick downloads while still letting the user experience the beauty of the images.

Displaying anatomical content is challenging since it can be

  1. Deeply nested – e.g. the brain inside the skull
  2. Hierarchical – is the click on the hand or the arm? (see the sketch after this list)
  3. Wide in scale – from cells to the whole body
User interactions can include highlighting, dissection, isolation, transparency, annotation, rotation, …

Mobile is even more challenging

  1. Limited memory and GPU
  2. Variety of devices – limits can be probed at runtime, as sketched after this list
    1. GL variable limits
    2. Shader precision
    3. Available extensions
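
These per-device limits can be probed at startup with standard WebGL 1.0 calls, for example:

    // Probe device limits before deciding on model detail and texture sizes.
    var canvas = document.createElement('canvas');
    var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');

    if (gl) {
      var highp = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
      var caps = {
        maxTextureSize:    gl.getParameter(gl.MAX_TEXTURE_SIZE),
        maxVertexUniforms: gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS),
        maxVaryings:       gl.getParameter(gl.MAX_VARYING_VECTORS),
        highpInFragment:   !!highp && highp.precision > 0,   // shader precision varies on mobile GPUs
        extensions:        gl.getSupportedExtensions()
      };
      console.log(caps);
    }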

To allow their images to be plugged into web sites, they created an API (a hypothetical embed is sketched after the list below)

  1. Create an iframe to embed into a page
  2. Allows basic interactions
  3. The underlying JavaScript can be customized
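
A hypothetical embed might look like the sketch below. The URL and message format are placeholders, not BioDigital’s actual API; see https://developer.biodigital.com for the real one.

    // Hypothetical embed sketch (placeholder URL and message format).
    var viewer = document.createElement('iframe');
    viewer.src = 'https://example.com/viewer?model=skeleton';   // placeholder endpoint
    viewer.width = 800;
    viewer.height = 600;
    document.body.appendChild(viewer);

    // Basic interactions could then be driven through postMessage once the frame loads.
    viewer.addEventListener('load', function () {
      viewer.contentWindow.postMessage({ action: 'highlight', object: 'femur' }, '*');
    });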

API challenges

  1. 3d terminology and concepts
  2. 3d navigation
  3. Anatomical concepts
  4. Architecture of the Human

Examples can be seen at https://developer.biodigital.com

The artists primarily use Maya and ZBrush as their creative tools.

Models can be customized for specific patients.

posted in:  Animation, applications, Code Driven NYC, Programming

35th annual Computer Graphics Film Show SIGGRAPH Video Review

Posted on October 16th, 2014

Princeton chapter of the Association for Computing Machinery

10/16/2014 Friend Center Auditorium, Room 101, Princeton University, Olden & William Street

The meeting was a highly entertaining look (see, for example, “Johnny Express“) at some of the most recent computer graphics shown at #ACM #SIGGRAPH in August in Vancouver. Some of the academic topics explored were rendering many characters in the same scene (Mr. Peabody playing multiple instruments) and moving toward a more fluid and dynamic interpretation of characters (such as Olive Oyl’s arms in Popeye cartoons).

The presenters also talked about Blender, the open-source #animation software. Audience members were encouraged to download the demo videos, but especially to view the production files showing the inner workings of the demos.

posted in:  ACM, Animation