How to build a #MixedReality experience for #Hololens
Posted on April 14th, 2017
4/14/2017 @MicrosoftReactorAtGrandCentral, 335 Madison Ave, NY, 4th floor
Mike Pell and John gave a roadmap for creating #MixedReality content. They started with general rules for content creation and how these rules apply to building MR content.
- Know your audience
- Role of emotion in design – we want to believe in what is shown in a hologram.
- Think situation – where am I? At home you are comfortable doing certain things; in public the needs differ, as do the things you are comfortable doing
- Think spatially – the design differs if you can walk around the object
- Think inclusive – widen your audience
- Know Your medium
- For now you look ridiculous wearing a VR headset – but maybe this eventually becomes like a welder's shield, which you wear only when doing something specialized
- Breakthrough experience – stagecraft – so one can see what the hololens user is seeing
- Know Your palette
Interactive Story Design – a fast way to generate MR content
- Who is your “spect-actor” (normally someone who observes)? Have a sense of who the individual is for this moment; avoid blind spots by picking a specific person.
- Who are your “interactors”? They will change as a result of the interaction and can be objects, text, or people
- This creates a story
- Location – design depends on where this occurs
- Journey – how does participant change
How to bring the idea to life: how to develop the script for the MR experience
3-step micro sprints – 3 to 6 minute segments – so you don’t get attached to something that doesn’t work. Set a 1 to 2 minute time limit for each step.
- Parameters – limited resources help creative development
- Personify everything – even text has a POV, feelings, etc.
- 3 emotional responses – what is the emotional response of a chair when you sit in it?
- 3 conduits
- Facial expression – everything has a face including interfaces and objects
- Body language
- Playtest – do something with it
- 3 perspectives
- Interactors – changes in personality over time
- Audience – who is watching
- PMI – evaluative process – write on index cards (not as a feedback session) so everyone shares their perspective. Next loop back to the parameters (step 1)
- Plus – this is interesting
- Minus – this is weak
- Interesting – neither of the above: simply “this is interesting”
How to envision and go fast:
- Filming on location – randomly take pictures – look for things that speak to you as creating an interesting experience.
- Understand the experience – look at the people (i.e. people viewing art)
- Visualize it – put people into the scene (vector silhouette in different poses) put artwork into scene along with viewers.
- Build a prototype using Unity. Put on the Hololens and see how it feels
They then walked through an example session in which a child inside the building looks at a T-Rex on the MoMA outdoor patio. The first building block was generating three emotional responses for the T-Rex:
- Positive – joy looking at a potential meal: the child
- Negative – too bad the glass barrier is here
- Neutral – let me look around to see what is around me
To see where we should be going, look at what children want to do with the technology
#AR / #VR and the sense of place, AR #documentaries
Posted on March 18th, 2017
3/17/2017 @Hunter College, 68th & Lexington Ave, New York, Lang Theater
Four talks on the use of AR/VR were given by
Samara Smith & Sarah Wright – Revealing Here: Using AR and VR to Transform Sense of Place
Ed Johnston – Augmented Asbury Park
Sam Topiary – Exit Zero: San Francisco Freeway to the Future
Samara Smith spoke about three installations and the role of the documentary
- Commotion for the #QueensMuseum in NY – a tablet lets users superimpose individual commuters’ routes to work over the physical 3-D panorama of New York.
- A walking tour of #HamburgerSquare Park in Greensboro, NC. Walkers take pictures of historical objects in the park and are told the story of the park.
- Walking tour of Central Park in NY which uses GPS to activate sounds relevant to the location in the park.
Next, Sarah Nelson Wright spoke about VR projects to give people new perspectives on specific locations
- #HiddenVistas showed #HuntersPointSouth as a wild space in NY after industry in the early 20th century abandoned the location and prior to new construction there.
- An installation at the Queens museum used a VR mask shaped as a pair of binoculars to give users the impression that they were looking at distant scenes from within the museum
- A walking tour of #Soho called #InvisibleSeams superimposed, over advertising billboards, images of the conditions of garment workers and the pollution from garment production.
Ed Johnston talked about #AugmentedAsburyPark, an AR app which superimposes lost buildings and items on the #AsburyPark, NJ boardwalk. Using images from postcards, a visual- and geolocation-indexed interactive map was created of the carousel, the wreck of the Morro Castle, etc. The app can also display these images by scanning historical posters from Asbury Park.
They are currently exploring the use of Argon.js to revive the geolocation features of the app.
Finally, Sam Topiary talked about the challenges of documenting the history of Hayes Valley, a site in downtown San Francisco. The challenge is how to tell the story of a part of the city that has seen transformations over time, and whose transformations reflect the changing fortunes of the city and of the people who live nearby. She concentrated on specific periods:
- Grass-roots organizations that blocked and eventually removed freeways through San Francisco
- Temporary conversion of the site to an urban farm for a 3 year period
- Construction on the site of luxury apartments as the neighborhood gentrifies
The periods correspond to the ups and downs of the local economy.
She asked the questions of how to communicate the nuances of the history and location without overwhelming the viewer.
She and other panel members talked about how to make an experience immersive without making it so immersive that the audience becomes passive. Where does narrative fit into this continuum? How does one find an appropriate level of audience interaction?
#Neurobiology of #ComputerInterfaces
Posted on March 18th, 2017
3/17/2017 @Hunter College, 68th & Lexington Ave, New York, Lang Theater
Three talks on the neurobiology of computer interfaces were given by
Ellen Pearlman – Utopic or Dystopic
Ruben Van de Ven – Emotional Hero/We Know How You Feel
Greg Garvey – Split Brain
Ellen Pearlman talked about brain-computer interfaces. The most available commercial devices are from Emotiv, Muse, and OpenBCI. The devices need smoothing and feature-extraction algorithms to find signal in the noise. Devices take 8 seconds to calibrate and 150 milliseconds for a signal to be detected and transmitted. One of the key signals is the P300 MERMER, which indicates that you recognize someone or something.
She talked about the recent large increase in brain research funding by DARPA, NIH and NSF, with the IARPA (Intelligence Advanced Research Projects Activity) program soliciting proposals.
Ellen next talked about
- Semantic brain maps, which highlight the locations associated with particular stimuli
- Optogenetics, which can implant false memories that can be turned on and off with blue and orange light
- Cortical modems, which can transmit images directly to the brain bypassing the senses (Cubic Corp is working on this)
- Brain data stored in the cloud (Cloudbrain and Qusp have been uploading brain data)
Finally, she showed her performance piece, Noor, a brain opera, in which a performer’s brain waves trigger visual patterns as the performer interacts with the audience
Next, Ruben Van de Ven talked about the challenges faced by machine-learning methods that claim to determine one’s emotions from facial images. Applications using this technology include ‘Emotion Hero’, a game you can download from the Google Play Store, and HireVue, which evaluates people during job interviews.
Paul Ekman developed the ‘Facial Action Coding System’, which is the classification scheme used. But, Ruben notes, context affects how we interpret an expression. Also, the validity of the face classification relies on subjective human coding dating back to a 19th-century French asylum. Both place the science behind these methods on shaky ground.
In addition, the software is often marketed both as objective and as a tool for training individuals to mislead it. But how can the software work if one can learn to manipulate it?
Greg Garvey talked about how his art installations take advantage of the modularized brain, which is split into left and right processing. His installations show different images to the left and right eyes (and therefore to the left and right brain hemispheres) to create internal cognitive conflict over the image being viewed.
#Visualization Metaphors: unraveling the big picture
Posted on May 19th, 2016
05/18/2016 @TheGraduateCenter CUNY, 365 5th Ave, NY
Manuel Lima ( @mslima ) @Parsons gave examples of #data representations. He first looked back 800 years and talked about Ars Memorativa, the art of memory, a set of mnemonic principles for organizing information: e.g., spatial orientation, order of things on paper, chunking, association (to reinforce relations), affect, repetition. (These are also foundational principles of #Gestalt psychology.)
Of the many metaphors, trees are the most used: e.g., the tree of life and the tree of good and evil; genealogy, evolution, laws, …
Manuel then talked about how #trees work well for hierarchical systems, but we are looking more frequently at more complex systems. In science, for instance:
- 17th–19th century – single-variable relationships
- 20th century – systems of relationships (trees)
- 21st century – organized complexity (networks)
Even the tree of life can be seen as a network once bacteria’s interaction with organisms is overlaid on the tree.
He then showed 15 distinct typologies for mapping networks and showed works of art inspired by networks (the new networkism): 2-D: Emma McNally; 3-D: Tomas Saraceno and Chiharu Shiota.
The following authors were suggested as references on network visualization: Edward Tufte, Jacques Bertin (French cartographer), and Pat Hanrahan (a computer science professor at Stanford who extended Bertin’s work and is also one of the founders of Tableau).
A Panel Discussion on The Future of #DigitalPerformance
Posted on November 21st, 2015
11/20/2015 @Westbeth Gallery, 155 Bank St, NY
The Westbeth Gallery, which displays art incorporating digital technology, hosted a discussion on the future of images and other digital media in the performing arts. The panelists had a range of backgrounds and current uses of digital presentations.
Mark Coniglio – Creator of Isadora Software, Co-Founder, Troika Ranch
Wendall K. Harrington – theatre and production design, Assistant Professor, Yale School of Drama
Jared Mezzocchi – Award-winning multi-media theatre director and designer, Assistant Professor, University of Maryland
Maya Ciarrocchi – Interdisciplinary Artist
Kevin Cunningham – Director 3 Legged Dog Theatre
Moderator: Andrew Scoville – Brooklyn based theater director focusing on developing new work that merges science and performance
Several themes were explored by the panelists.
- The inclusion of alternative or digital content in performances should push the main ideas forward and needs to be consistent with the other parts of the program, as if it were just another performer/actor/musician in the ensemble
- One needs to express the idea before thinking of technology. Make sure that technology is not attached at the end. It should be given time within rehearsals to grow. If possible avoid the word “digital” as it artificially divides you and the rest of the creative team.
- The artist needs to intuit the director’s vision and present what is needed not what is requested.
- Flash may be easier with digital media, but the goal is still to give the audience a new experience so they have a chance to grow. The goal is still to tell a story and give a memorable experience.
- Creative tension is important as it is in all artistic ventures.
The panelists also mentioned digital works that they considered unusually immersive or interesting.
- Daito Manabe (Rhizomatiks) puts his experimental videos online. He has a series in which he electrically stimulates his face. He also has videos with dancers interacting with lights and drones and robots.
- Tod Machover and the MIT media lab worked on a glove and other interactive instruments.
- Luke DuBois drew a map of the U.S. with cities identified by the most used words on the dating sites for people in those cities.
- Audience participation at ”danger parties”.
- On the commercial side there is a call to make work immersive, which can only be done with digital technology: e.g., “Charlie Victor Romeo”.
Hardwired: Smart Air Vents, Art Displayed Electronically, #Fencing, #Logistics for #Startups
Posted on October 28th, 2015
10/17/2015 @Wework, 115 W 18th Street, NY
The four speakers were
- Tim Morehouse, Olympic Medalist, US Olympic Fencing Team
- Renee DiResta, Co-Author of “The Hardware Startup: Building Your Product, Business, and Brand” and VP Business Development at Haven (online platform for ocean freight)
- Ryan Fant, Founder and COO of Keen Home (smart vents for the connected home)
- Vladimir Vukicevic, CEO of Meural (art streamed to your wall via Connected Canvas)
Ryan Fant @Keen Home (which will start selling a smart vent at Lowe’s next week) talked about the challenges they have faced over the last few months getting their product into stores. Ryan walked the audience through the requirements for placing an item in Lowe’s stores.
- Break packs containing two units
- Master cartons which hold several break packs
- UPC and faceplate labels so Lowe’s can track stock
- Serial numbers on all items and matching numbers on the break pack and back side of each retail box
- Pallet labels
- Overseas logistics – need to book product from Shenzhen to Hong Kong
- Customs forms
- They also need a partner such as FedEx, Flexport, or DHL
- Their warehousing in Denver
- Preparation of in-store displays to bring visibility. These include physical displays, screens showing how to configure your home, and a video commercial to run on those screens.
Their rollout at Lowe’s was further complicated because they were unable to get product to Lowe’s 6 distribution centers prior to the rollout, so they needed to ship product directly to the 900 Lowe’s POs for next Monday’s rollout.
In the next presentation, Vlad @Meural spoke about their dedicated hardware device that displays high resolution art images on a wall display. Meural will start shipping in a month.
Vlad talked about the main driving influences as the product was being developed
- Start with the creators. The product was incubated in an art studio on the Lower East Side.
- What they create will influence people. Portrait orientation is important to distinguish it from a TV. There is a haze cover that absorbs light and a wood frame. Images are algorithmically optimized.
- The product needs to be updatable even after it is delivered, so it can be updated over the air.
The display runs on a system-on-chip that is more powerful than a Raspberry Pi. The display has a sensor that adjusts colors for the ambient light. It can also be set to turn off when the ambient lights are turned off.
They were initially self-funded and then had a 500k seed round.
They currently control the entire system, so they can control the scarcity of images.
Their team consists of Vlad (COO), chief designer, CTO, operations (supply chain/manufacturers), head of partnerships who gets content.
Now, if you buy it, you get access to the full library of content. In the future you will have the opportunity to subscribe to specific artists or galleries.
They have a utility patent on the full package
Next, Tim Morehouse @XGenFencing talked about developing a new technology for refereeing fencing competitions. As an Olympic silver medalist, he wanted to bring fencing to schools, but faced the barrier of high equipment costs to monitor touches. The current scoring system technology remains virtually unchanged since its introduction in 1932 and is expensive, requires a lot of setup, and breaks easily.
Tim’s first prototype was a chest plate that registered strikes.
Tim demonstrated his technology, which puts sensors in the foils and lights on the epee guard so you know when there is a touch. The system also locks out so that only the first touch is recognized, and it uses an accelerometer to track when a fencer lunges (it also tracks dangerous sword movements by beginners). This system could eventually eliminate any discretion by judges.
Audience comments revealed interest in tracking performance and moving fencing into the virtual world.
Renee @Haven talked about the complexity of logistics faced by startups.
She first talked about the complexity of tariffs and how even slight modifications to the imported product affect taxes. Renee then talked about many of the practical considerations when bringing your product onshore from the offshore manufacturer:
- Air vs. ocean – a 5x cost difference, which can be higher during a holiday rush or labor issues. One might ship 25% by air to guarantee the initial batch is delivered on time.
- LCL vs. FCL (less than container load vs. full container load) – it is often cheaper to book an entire container, since it is delivered directly to the destination. Also, if there is a problem with the shipment in the other half of a shared container, you may be stuck at the dock.
- Have someone who can negotiate. Get the price broken down, not all-in.
- If trucking to a port in China, it might be better to let the local person handle it, since they are most familiar with the shipping (a selective exception to the previous point).
- Deciding whether to take ownership at factory, or at the seaport, or on shore,…
- Warehousing vs. fulfillment – in-house or outsourced. Amazon and Shipwire are the most used by startups.
Haven is a market place to connect buyers and sellers of shipping capacity and helps startups understand the supply chain.
Night Café by Mac Cauley: lessons learned in creating a #VR experience
Posted on August 28th, 2015
08/27/2015 @Samsung, 130 Prince Street, NY
Mac Cauley talked about the inspiration and development of his award winning VR app, Night Café. Night Café is an interactive VR experience based on the paintings of Vincent #vanGogh and specifically his painting #NightCafé.
Mac was originally working on a live-action film of a fictional painter inspired by van Gogh. The project evolved into a virtual reality application. He was also inspired by the works of Alexa Meade, who paints subjects and photographs them.
Night Café (van Gogh, 1888) was his starting point for its expressionistic colors and absorbing perspective. Converting a single point of view into an immersive world required 3-D modeling of subjects and items, along with a reimagining of the corner that is not visible in the painting. For these, Mac studied the fixtures of that period and modeled the individual subjects in three dimensions.
He adhered to certain design rules. These included making each object unique and emphasizing the textures, which are one of the distinguishing characteristics of van Gogh’s paintings.
He used many tools to create objects, object skins, animation, etc.
Some of his main tools were: Maya, Unity, Mudbox, RapidRig.
He also talked about the challenges and dead ends as he developed the application
When animating, he initially used Kinect mocap, which he considered cool, but felt that the quality was inadequate. He switched to keyframing and found the process slow but worth it.
He tried shading using particles, but found they did not give sufficient detail and ran too slowly. He ultimately used flat shading with no lighting, since he could take colors directly from the painting and it performed well.
He initially designed a complex set of controls to move through the space, but eventually realized that simple controls (such as tap and hold to move forward) using a touch pad worked best.
The VR experience was enhanced by optimizations such as texture atlasing, mesh batching, texture/audio compression, reduced particle counts, etc.
He summarized his main lessons learned:
- characters are very interesting to see in 3d
- particles are awesome
- movement is tricky – slow down movement, simplify controls, eliminate acceleration (stay or move only)
- Note 4 is powerful enough
- Stylized worlds can still be immersive.
Tomas Laurenzo on #art and Tom Ritchford on #Git
Posted on June 11th, 2015
06/10/2015 @Thoughtworks, 99 Madison Ave, 15th floor, New York
The two speakers talked about the two aspects of this meetup: art and technology.
In the first presentation, #TomasLaurenzo (@SCM CityU Hong Kong) talked about his art: visual, politically motivated, and whimsical. Drawing on his training as an engineer, he uses technology to create interactive displays. Installations he described include
- “Poem race” compares classic writers by encoding a selection of their writings in Morse code. The dots and dashes control motors that vibrate a ramp. The first item down the ramp indicates the winning author.
- Viewers interact with a Kinect which controls colors illuminating a cluster of balloons. The color changes based on sounds from music and people in the audience and inputs from smart phones.
- “Nadia” is a remembrance of the disappearances in Latin America. The viewer moves a physical lighter to control how the image is “burned”
- “two systems” The viewer turns two knobs that control a “fire” which shows pictures of an image being burned.
- Wearable cinema done in conjunction with Alba, a designer (alba.uy), in which materials move and react to heart beats.
- A musical instrument controlled by the movement of your head.
- Actuators to deform the shape of a rectangular canvas on which an image is projected
- The empathy extension, which highlights the differences in press coverage between terrorist acts in Kenya and in France: it swaps in “Kenya” when you search for “France” on Google.
In the second presentation, #TomRitchford gave a brief summary of Github along with a history of version control/change management software systems.
Source Code Control Systems (SCCS) were developed in the 1970s to create a central repository holding the official version of the software source code.
Later, a quick way was developed to determine whether your code matches that of the central repository: a hash code, a compact fingerprint identifier (a SHA-1 hash is 40 hexadecimal characters, i.e. 20 bytes), quickly shows whether two versions of the code are the same or different.
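The fingerprint idea can be sketched on the command line (file names here are hypothetical):

```shell
# Identical content yields an identical 40-character SHA-1,
# so comparing hashes is enough to detect any change.
echo 'hello world' > a.txt
cp a.txt b.txt
sha1sum a.txt b.txt     # both lines show the same 40-hex-character hash

# Git computes the same kind of fingerprint, with a "blob" header prepended
# to the file contents before hashing:
git hash-object a.txt
```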
When Linux was being created, there were thousands of people working on the system, so methods were needed to identify which branches were modified. Git was developed to organize the project. Git hashes the entire project and the subparts within the project. Development proceeds along branches, which are sequences of commits, each with its own hash. In this way, program changes by individuals are sequences of hash codes. Functions include
- Push and pull only work between ancestors and descendants.
- You can cherry-pick to get changes which are not from an ancestor.
- Rebasing – pull a series of changes and replant them on top of another series of changes.
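These operations can be sketched in a throwaway repository (repo and branch names are hypothetical; assumes git 2.28+ for `init -b`):

```shell
git init -b main demo && cd demo
git config user.name "Demo" && git config user.email "demo@example.com"

echo one > file.txt
git add file.txt && git commit -m "first commit"   # each commit gets its own hash

git checkout -b feature                            # a branch: a new sequence of commits
echo two >> file.txt
git commit -am "feature work"

git checkout main
git cherry-pick feature                            # copy a change that is not an ancestor of main

git checkout feature
git rebase main                                    # replant this branch on top of main
git log --oneline                                  # the chain of commit hashes
```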
Benton C. #Bainbridge, #video #artist
Posted on April 15th, 2015
Volumetric Society, Hardware Hack Lab
04/15/2015 @ThoughtWorks, 99 Madison, Ave, NY
Benton C. Bainbridge talked about his current projects and his development as an artist.
Benton’s father was an engineer at NASA and his mother was an artist, so he grew up playing with discarded NASA electronics hardware in his basement while exploring filmmaking on physical film. He became interested in collaborative filmmaking, which eventually led him to experiment with manipulating the images on cathode-ray tube TVs.
He next talked about early explorations of TV images, including plugging an audio input into the video input jack. Other effects, many pioneered by Nam June Paik and others, included using magnets to change the path of the electron beam and modifying the video controls to adjust the position, size, brightness of the image, etc. One of his favorite tools was the Rutt/Etra video synthesizer.
Benton next talked about more recent works. These include the RGBD toolkit, which manipulates the image based on inputs from the Kinect. He has used delays on TiVo playback as part of his portrait series. He also talked about an installation with two iPads used as controllers for visual filters and two screens: you stand in front of one iPad manipulating the image of the person standing in front of the other iPad, while the other person manipulates an image of you.
The best sources for his earlier materials are videos on YouTube.
The best source for current material is his web site.