Hardwired: #VirtualReality Devices and #Drones
Posted on January 12th, 2016
01/12/2016 @ WeWork, 115 W 18th St, NY
Speakers from four companies talked about virtual reality devices and flying drones:
• Nicholas Horbaczewski and Ryan Gury, CEO and Director of Product at The Drone Racing League (drone racing sports league)
• Amir Rubin, Founder and CEO of Paracosm (cloud-based 3D mapping)
• Andre Lorenceau, Founder & CEO of LiveLike (VR sports broadcasting)
• Jan Goetgeluk, Founder and CEO of Virtuix (immersive virtual reality system)
In the first presentation, Andre Lorenceau & Jeremie Lasnier @LiveLike spoke about developing a system that lets sports fans at home view the game as if they were in the stadium. Their system will eventually allow fans to use a virtual reality headset to look around the space and access other views of the game on demand.
Their system is designed to use current in-stadium video streams (the ability to see different parts of the playing field will initially be based on a single feed from a wide-angle camera at midfield), but to be upgradable as new in-stadium streaming technology is rolled out. (This differentiates them from NextVR, which is building a streaming platform from the ground up.)
Andre and Jeremie talked about the challenges in getting VR right. One challenge is the lack of a standard controller, with different controllers assigning different gestures to similar functions. Interacting with a physical touchpad may not be appropriate in some circumstances, while pressing a virtual touchpad may be hard to do. Monitoring one’s gaze may work in some cases, but they need to avoid triggering unwanted changes in the video when the viewer is simply staring at part of the action.
On the upside, there are many new opportunities to enhance as well as monetize the experience. Electronic placement of ads and user services is a possibility. These could be similar to the lines superimposed on the field during football broadcasts, or they could be virtual objects moving through the space (they showed a video in which a Star Wars ship flew over the playing field).
In the second presentation, Nicholas Horbaczewski and Ryan Gury @DroneRacingLeague spoke about their soon-to-launch drone racing series. In these competitions, professional drone pilots will fly standardized drones over a race course at more than 80 miles per hour, wearing goggles that show a live video feed from the drone. Each course takes less than two minutes to fly, with winners determined by their performance over a series of heats.
Nicholas and Ryan spoke about their tuneup races held in Yonkers and in Sun Life Stadium in Miami. They prepare 80 to 100 rigs prior to each competition, all built with the same hardware but tuned to the liking of each competitor. They talked about five keys to a successful race:
- Performance – drones are constructed from the highest-quality parts, with multiple identical rigs provided to each competitor for the four-day event (practice, preliminary heats, finals)
- Visibility – as a spectator sport, the drones need to be visible from the ground. To do this, each drone is covered with high-intensity lights, with each participant identified by a different color.
- Pit crew – drones must be able to withstand crashes with minimal damage, so they can be repaired quickly. Everything needs to be tested.
- Stability – Due to the number of drones needed, each needs to be assembled and tested quickly. This necessitates maximal use of circuit boards and minimal wiring. Drones also have a large number of customizable settings.
- Radios – the pilots’ video feeds are analog (digital video adds latency and drops frames). A robust communications network must be installed throughout the course, even as it snakes through tunnels and around obstacles.
Returning to VR, Amir Rubin @Paracosm talked about the software his company produces to take a point cloud extracted from the physical world and reconstruct the surfaces, which can then be used to build a virtual world.
He first talked about why an accurate picture of the world is needed to create a truly immersive game, ensuring the following are true of the experience:
- Correct perspective & occlusion
- World locking – persistence in the physical world, no jitter
- Interactions – shadows, physics, path planning
To solve these issues, one needs to:
- Know the shape and geometry of the world
- Know where you are in the world
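As a rough illustration of why both pieces are needed (this is my own sketch, not Paracosm code): a virtual object can only be drawn with correct perspective and occluded correctly if its world coordinates can be transformed into the tracked camera's frame, which requires knowing both the geometry and the camera's pose.

```java
// Hypothetical sketch (not Paracosm's code): transform a world-space point
// into the camera's frame. Without the camera pose, correct perspective and
// occlusion are impossible. Works in the horizontal (x, z) plane for brevity.
public class CameraTransform {
    // cx, cz: camera position; yaw: rotation about the vertical axis, radians.
    // Returns {right, depth}: the point in camera-relative coordinates.
    public static double[] worldToCamera(double px, double pz,
                                         double cx, double cz, double yaw) {
        double dx = px - cx, dz = pz - cz;        // translate to camera origin
        double fwdX = Math.sin(yaw), fwdZ = Math.cos(yaw); // camera forward
        double right = dx * fwdZ - dz * fwdX;     // sideways offset
        double depth = dx * fwdX + dz * fwdZ;     // distance along the gaze
        return new double[] { right, depth };
    }

    public static void main(String[] args) {
        // Camera at the origin facing +z: a point 2m ahead is at depth 2.
        double[] p = worldToCamera(0, 2, 0, 0, 0);
        System.out.println(p[0] + " " + p[1]); // prints "0.0 2.0"
    }
}
```

The same transform run with a stale or jittery pose is what produces the "world locking" failures listed above.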
Paracosm takes coordinates of objects extracted from the world using remote-sensing devices such as the Kinect or RealSense (Project Tango) and sends them offline for processing to create a model of a space detailing walls, furniture, and other interior content. The offline processing removes the extra mesh points that explode memory and slow down processing (for instance, the Project Tango by itself cannot retain all the points it sees, so it culls them, thereby limiting the scope of any game played on it). The refined set of points is then returned to the device (in this case the Project Tango) to serve as the framework for the augmented reality world.
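One standard way to bound memory like this is voxel-grid downsampling: collapse the raw cloud onto a coarse grid and keep one representative per occupied cell. This is a generic sketch of that technique, not Paracosm's actual pipeline:

```java
// Hedged sketch of point-cloud decimation via a voxel grid (a standard
// technique; Paracosm's real processing is more sophisticated). However dense
// the scan, at most one point survives per cubic voxel, bounding memory.
import java.util.LinkedHashSet;
import java.util.Set;

public class VoxelDownsample {
    // Count the points that survive when keeping one per voxel of the given
    // edge length (meters). points[i] = {x, y, z}.
    public static int downsample(double[][] points, double voxelSize) {
        Set<String> occupied = new LinkedHashSet<>();
        for (double[] p : points) {
            long ix = (long) Math.floor(p[0] / voxelSize);
            long iy = (long) Math.floor(p[1] / voxelSize);
            long iz = (long) Math.floor(p[2] / voxelSize);
            occupied.add(ix + "," + iy + "," + iz); // voxel key
        }
        return occupied.size();
    }

    public static void main(String[] args) {
        // 1000 near-duplicate points all fall in one 10cm voxel.
        double[][] cloud = new double[1000][3];
        for (int i = 0; i < 1000; i++) cloud[i][0] = 0.0001 * i;
        System.out.println(downsample(cloud, 0.1)); // prints "1"
    }
}
```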
Once this is done, the device can create characters that appear to interact correctly with the real world or superimpose data or views on the real world.
This offline processing makes for a more true-to-life augmented reality game, but it cannot react quickly if objects move in the space. This means that a consumer product is still in the future. For this reason, Paracosm is concentrating on commercial engineering applications, which involve only static items.
In the final presentation, Jan Goetgeluk @Virtuix talked about the Omni, which his company produces. The Omni lets you walk around in the virtual world inside a device that looks like a toddler’s walker. The device holds you in place but allows you to walk in any direction (an omni-directional treadmill). It goes on sale this month for $699 and uses no moving parts to track your footsteps.
Jan talked about his personal journey from Belgium, to the U.S., to Rice University, to investment banking, to entrepreneurship. It has taken him eight years to reach production, including three rounds of fundraising totaling $8 million. He talked about the delays and challenges even for a product that had early, enthusiastic support from the gaming community, Kickstarter, and Shark Tank.
Will #Minecraft create the first software mega hit in VR?
Posted on July 5th, 2015
07/05/2015 from Minecon July 4-5, 2015, London
When #Oculus was bought by #Facebook, I know that I was not alone in thinking about the virtual worlds that #VR would allow us to access. The system that immediately came to mind was Minecraft. Here, one already has a vast library of worlds built in 3-d and the millions of game players who have the expertise and desire to build new worlds. Once a world is built, gamers explore it on a computer screen, but being able to easily look around a space would make the experience even more compelling.
#Microsoft’s acquisition of Minecraft in 2014 also fits the story, since #Hololens was announced earlier this year and Minecraft demonstrated how the two products work together at the E3 conference. This doesn’t guarantee that Hololens will not suffer the same fate as #GoogleGlass, but it at least shows that Hololens has a more compelling argument for a killer app than Google Glass had at its introduction.
The presence of Minecraft could have a major effect on the evolution and acceptance of VR. At first look, established computer game makers appear to have a large advantage in creating VR software. These companies have the software tools to visualize worlds in three dimensions, many experienced programmers used to coding in a 3-d world, an understanding of what the consumer wants, and the hardware and systems to efficiently render the graphics and optimize the system. However, as I noted in a previous post, Minecraft has worked to create the tools and educate its players on creating games beyond block building. Its users all know how to create objects in a 3-d virtual world and its ecosystem includes many experts in adapting the vanilla system to create new user experiences. This possibly leads to the following scenario.
In this scenario, large numbers of Minecraft players create 3-d games that can be accessed as VR. Some of these games, while crude graphically, become popular. The most popular games are adapted by shops that specialize in smoothing the contours and movements of items while retaining the underlying software logic. Specialists also create procedures to move code running on the CPU to run on a GPU.
It will be interesting to see if companies such as #Unity3d, #InfinityWard, #GameMaker, #EAsports will need to respond to this challenge.
Bringing Droidcon Montreal to NYC: #VR, #noSQL, Surface #Textures
Posted on April 17th, 2015
04.16.2015 @Spotify, 45 W 18th St, NY
Three speakers presented excerpts from their Montreal presentations:
Dario Laverde spoke about VR on Android. He first outlined the hardware capabilities of the HTC Vive that create an immersive VR experience. These requirements are designed to ensure “presence” (viewing without getting sick) and include a 90Hz refresh rate, two screens at 1080p resolution, and the ability to track your location.
Last summer, Google introduced Google Cardboard, which allows you to place a phone in a holder to experience VR. Today Google announced that it will officially certify some commercially available viewers, allowing the software to be fitted to the lenses on each viewer.
Dario also talked about the need for a good pair of headphones for positional audio. Android developers can create surround sound using the OpenSL ES spec. He mentioned that a magnet on the viewer interacts with the phone’s magnetometer to provide a single-click input to the software. He noted that some phones lack this capability, in which case the viewer can be controlled using a Bluetooth clicker.
Dario walked through some code snippets:
- Intent filter – identifies the app as a cardboard app within Google play: g.co/cardboardapps
- onNewFrame – get the location of your head
- onCardboardTrigger – monitor whether the viewer’s trigger was invoked
He noted that when adapting tilt-sensitive games, the viewer-movement assumptions are different. For instance, tilting the device in a racing game moves the car left or right, while a left or right head movement should not alter the direction of the car, only your view within it.
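A minimal sketch of that separation (the class and method names here are mine, not from the talk): tilt input keeps driving the vehicle, while head yaw only re-aims the in-car camera.

```java
// Hypothetical sketch: in a tilt-steered racing game ported to a head-mounted
// viewer, device tilt must keep steering the car while head yaw only
// re-aims the camera inside the cockpit.
public class RacingInput {
    double carHeading = 0; // world heading of the car, in degrees
    double cameraYaw = 0;  // where the player is looking, relative to the car

    // Device tilt still steers the vehicle, as in the handheld version.
    void onTilt(double tiltDegrees) { carHeading += tiltDegrees; }

    // Head rotation changes only the view, never the steering.
    void onHeadYaw(double yawDegrees) { cameraYaw = yawDegrees; }

    // The rendered view direction combines both.
    double viewDirection() { return carHeading + cameraYaw; }

    public static void main(String[] args) {
        RacingInput input = new RacingInput();
        input.onHeadYaw(90); // look left: the view turns...
        System.out.println(input.viewDirection() + " " + input.carHeading);
        // ...but the car's heading is unchanged (prints "90.0 0.0")
    }
}
```

Looking 90 degrees to the left shows the side window, but the car keeps driving straight until the device itself is tilted.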
He concluded by talking about how the limitations on computation power are the main barrier to adding other inputs (beyond head motion and NFC).
Next, Will Hoang @Couchbase talked about Couchbase’s noSQL database for local storage on Android. Their database gives apps offline capabilities, with automatic, real-time syncing to the cloud and other connected mobile devices when connections are available.
The third speaker was Lisa Neigut @ElectricObjects. Electric Objects has created an Android-powered display for works of art, funded through a Kickstarter campaign in April 2014. The display is a large Android panel similar to a tablet but without touch sensitivity.
The display was originally powered by a Raspberry Pi, but it now runs Android KitKat 4.4 (API 19). They expect to upgrade the OS once Freescale upgrades the iMX6D board.
Artists create materials in one of three formats:
- Static images – jpg, png
- Html – rendered in a Chromium WebView (using Crosswalk)
- Animated Gifs
Lisa then talked extensively about the challenges of displaying dynamic Gif files.
Since Android does not have a native Gif decoder, they first needed to understand the structure of Gif files: each Gif contains a main frame and a series of subframes. Each subframe is specified by a size and the coordinates of that frame within the main picture. Frames are presented in sequence to give the impression of an image in motion.
Gifs in which only a small part of the image is updated by each frame make fewer demands on the processors and therefore render as a smooth set of motions. Gifs with larger per-frame changes can cause frames to be dropped or mangled on output.
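The frame structure described above can be sketched as a compositing loop (my simplification; real Gif frames also carry delay times and disposal methods):

```java
// Sketch of Gif subframe compositing: each subframe is a small pixel patch
// pasted at its (x, y) offset into the retained canvas, so Gifs whose
// subframes are small require little work per tick.
public class GifCanvas {
    final int[][] canvas; // the retained "main frame", canvas[row][col]

    GifCanvas(int width, int height) { canvas = new int[height][width]; }

    // Apply one subframe: a patch of pixel values placed at (x, y).
    void applyFrame(int x, int y, int[][] patch) {
        for (int row = 0; row < patch.length; row++)
            System.arraycopy(patch[row], 0, canvas[y + row], x, patch[row].length);
    }

    public static void main(String[] args) {
        GifCanvas screen = new GifCanvas(4, 4);
        screen.applyFrame(1, 1, new int[][] {{7, 8}, {9, 6}}); // a 2x2 update
        System.out.println(screen.canvas[1][1]); // prints "7"
    }
}
```

A subframe covering the whole canvas touches every pixel every tick, which is exactly the "more changes" case that dropped frames.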
They explored several methods for displaying Gif files:
- MediaPlayer – inefficient; images did not look good
- Chrome (Gifs packaged in an html file) – a long initial delay (several minutes) when loading the display, but ran well afterwards. This was a problem since users would want to browse the artworks before deciding to display a piece.
- Glide – does not drop frames, but displays only 3 to 4 frames/sec
- Native – gifDrawable; better, but they still needed more performance
To improve performance they used the Systrace utility to visualize the bottlenecks in the refresh cycle when processing each frame. Specifically, they used this to better understand the Android Graphics pipeline: SurfaceFlinger & BufferQueue.
Eventually they were able to attain 30 frames-per-second performance on a 720p display by directly accessing the surface textures using the TextureView pipeline. This is consistent with recommendations made by some game-engine makers: modify textures when possible to make gameplay fluid.