New York Tech Journal
Tech news from the Big Apple

Hardwired: product #design and delivering #magic

Posted on June 11th, 2016


06/07/2016 @ WeWork, 115 West 18th Street, NY, 4th floor


New Lab and Techstars talked briefly before the four speakers:

In the first presentation, Bob Coyne @Wordseye talked about his utility that takes a text description of a scene and creates an image matching that description. This allows users to create 3-D images without complicated #3-d graphics programs.

They parse sentences to create a semantic map which can include commands to place items, change the lighting, reorient objects, etc. They see uses in education, gaming, and image search.
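The pipeline described above can be illustrated with a toy sketch in Python. WordsEye’s actual parser is far more sophisticated; the sentence pattern and the semantic-map fields here are invented purely for illustration:

```python
import re

def parse_scene(text):
    """Toy text-to-scene parser: turn simple placement sentences such as
    'The cup is on the table.' into a semantic map of place commands."""
    commands = []
    for sentence in re.split(r"[.;]\s*", text.strip(". ")):
        m = re.match(
            r"(?:the|a)\s+(\w+)\s+is\s+(on|under|next to)\s+(?:the|a)\s+(\w+)",
            sentence, re.IGNORECASE)
        if m:
            commands.append({"action": "place", "object": m.group(1),
                             "relation": m.group(2), "reference": m.group(3)})
    return commands

print(parse_scene("The cup is on the table. The cat is under the chair."))
```

A renderer would then walk the command list, placing each object relative to its reference; commands for lighting or reorientation would extend the same map.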

[Graphics are currently primitive and the manipulations are rough, but they are only 7 months old. It has promise for creating avatars and scenes for game prototypes. Text lacks the subtlety of gestures, so text may need to be supplemented by gestures or other inputs.]

In the second presentation, Chris Allen @iDevices – developers of connected home products and software – talked about the evolution of the company from its initial product in 2009, a connected grill.

Since then they have raised $20 million, were asked by Apple to develop products for HomeKit, and currently market seven HomeKit-enabled products.

Experiences he communicated:

  1. Do your own research (don’t rely on conventional wisdom): despite being told that $99 was too high a price, they discovered that reducing the price to $75 did not increase sales.
  2. Resist pivoting away from your vision, especially when you have no intellectual property advantage: a waterproof case for phones failed.
  3. Create a great work environment and give your workers equity
  4. Build products that are compatible across platforms, but concentrate on just the three main platforms: Siri, Google, and Amazon.

Next, Josh Clark @BigMedium talked about his vision of the future of interfaces: they will leap off the screen, combining #speech and #gestures. They will be as magical as the devices in the world of Harry Potter. Unlike Google Glass, which was always an engineering project, we should be asking how we can make any object (even a coffee cup) do more: design for the thing’s essential ‘thingness’.

Technology should be invisible, but magical:

  1. You can stand in front of a mirror with memory and see how you look in a different color dress, replay a video of how you look when you turn around, or do a side-by-side comparison with a previously worn dress.
  2. Asthmapolis site – when you have an asthma attack, you tap an app. Over time you can see, across individuals, where users are when they have an attack.
  3. A hackathon app using the Kinect in which one gestures to grab an image off a video so a still image from that moment appears on the phone.

It’s a challenge of imagination.

If the magic fails, we need to make sure the analogue device still works.

[In some cases, magic may not be enough. For instance, Asthmapolis pivoted away from asthma alone and now concentrates on a broader range of symptoms.]

In the last presentation, Martin Brioen @Pepsi talked about how his design team uses #prototyping to lead the development of new ideas.

Different groups within Pepsi have different perspectives and different priorities, so each views ideas differently, but to reach a consensus they all need to interact with the new product so they can see it, touch it, …

At each phase of development you use different tools, concentrating on the look of it, the feel of it, the functionality, etc. At each stage people need to interact with it to test it out. Don’t wait until you have a finished product. Don’t skip steps. Consider the full journey of the consumer.

Employ the least expensive way to try it out.

They are not selling a product, they are selling experiences: they created a test kitchen for the road.

posted in:  Apple, applications, hardware, Hardwired NYC, Internet of Things, psychology, startup

How to Build a Bulletproof #SDK

Posted on June 3rd, 2016


06/02/2016  @Yahoo, 229 West 43rd Street, NY, 10th floor


The speaker from @Flurry emphasized four main themes on the way to making happy developers using your SDK:

  1. Respect users (hardware, not people in this case)
  2. Respect developers (people)
  3. Clarify assumptions (more about developers)
  4. Things you can’t control

Within each theme:

  1. Respect users.
    1. Be considerate of battery life. Actions include
      1. Limit network calls
      2. Illuminate the screen only when necessary
    2. Network time is expensive – keep it to a minimum by downloading once and keeping the download in memory
    3. Phone space is limited – when you are done with the data you have downloaded, delete it.
    4. Minimize startup time
      1. Use techniques to keep startup time to below 2 seconds
      2. Don’t block the main thread
      3. If possible defer loading until after startup
  2. Respect developers
    1. Don’t do anything that causes the app to be rejected from the store, such as renaming system variables in iOS or using IDs in Android that you should not reference
      1. Don’t violate any store policies
      2. Don’t request information that is off limits
      3. Don’t call private APIs
    2. Don’t put all your good ideas in a single SDK
      1. Bloatware is not welcome (see phone space and startup time above)
      2. It’s often better to have several small SDKs
    3. Create slim SDKs that don’t leak
  3. Clarify assumptions
    1. Document all your assumptions
    2. Even better, design the API’s so developers can’t violate assumptions
    3. If the SDK fails, complain LOUDly in the debug logs
  4. Things you can’t control
    1. You need to be vigilant for system changes
    2. There is nothing you can do to prevent them, but react quickly
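The first three themes can be combined in one sketch. This is a generic Python illustration of the patterns above – defer work past startup, download once and keep it in memory, and complain loudly when an assumption is violated – not Flurry’s actual SDK; all names are hypothetical:

```python
import logging
import threading

log = logging.getLogger("examplesdk")  # hypothetical SDK, not Flurry's

class ExampleSDK:
    """Sketch of the advice above: defer heavy work past startup, download
    once and keep the result in memory, and fail loudly in debug logs."""

    def __init__(self, api_key):
        if not api_key:
            # Clarify assumptions: complain loudly instead of failing silently.
            log.error("ExampleSDK: api_key is empty -- events will be dropped")
        self._api_key = api_key
        self._config = None            # downloaded once, then kept in memory
        self._lock = threading.Lock()
        self.network_calls = 0         # exposed only to illustrate the caching

    def start(self):
        # Don't block the main thread: push the network work to the background.
        threading.Thread(target=self._ensure_config, daemon=True).start()

    def _ensure_config(self):
        with self._lock:
            if self._config is None:   # limit network calls: download once
                self.network_calls += 1
                self._config = {"sample_rate": 1.0}  # stand-in for a real fetch
        return self._config

    def config(self):
        return self._ensure_config()
```

Calling `config()` repeatedly triggers only one “network call”; on a real device the stand-in dictionary would be a download whose on-disk copy is deleted once it is no longer needed.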

There are differences between #iOS and #Android that require some modifications in the SDK. One example is in the speed of the exit from an app. Apple devices tend to have less memory, so they are more aggressive in terminating apps quickly and reclaiming memory. This is less so in Android.

posted in:  Android, Apple, applications, iOS, Yahoo Tech Talks

Apple Watch Development

Posted on May 18th, 2015

NYC Mobile .Net Developers

05/18/2015 @Microsoft, 11 Times Square, NY


Mike Bluestein @mikebluestein talked about his experiences developing apps, including a Golf Watch app, that run on the #AppleWatch. He described the features, capabilities and limitations of the watch and then showed apps and how the user interacts with the app.

Mike started by saying that the simulator can be used to develop prototype apps, but apps often run differently on a physical watch. Lessons learned include:

  1. You want the app to launch quickly – so avoid immediate database access. Create a splash screen instead.
  2. All code runs on the iPhone. Later this year, Apple might allow code to run on the watch.
  3. 2 screen sizes, but same aspect ratio: 38mm and 42mm
  4. Has sensors, but 3rd party developers do not have access to them now. Only Apple apps can access them.
  5. Bluetooth connection between watch and phone
  6. Animation is image-based: create animations by cycling through a set of images.
  7. Watch is a UI presentation layer only
  8. Limited watch SDK for now. Full SDK later this year.
  9. No auto-layout for building watch UI
  10. Can do tap, slider, table selection, button, and force touch
  11. Voice recognition: Audio input is passed to the phone for processing.
  12. Present notifications and glances.
  13. Need to be connected to an iPhone.
  14. Sensor inputs are captured and sent to HealthKit
  15. Can animate the background – could be used as a progress indicator with text in the middle

Layout: Two ways to navigate – but cannot mix them

  1. Page-based – swipe to get to next screen. No hierarchy
  2. Hierarchical – drill down

The watch talks to the watch extension, which runs on the phone. But the watch extension cannot launch another foreground app. It can only launch a background app.

Menus

  1. Open with force touch gesture (press and hold)
  2. Handle selection in WKInterfaceController
  3. Max 4 buttons

Give the impression of button changes by hiding and revealing buttons – but you cannot dynamically create the interface. It can only be done in a storyboard, not programmatically.

Watch Extension

  1. Runs on the phone, but in a separate sandbox from the iOS app. It can call an iOS app, but that app will run in the background.
  2. Handles interactions raised by watch app
  3. Has its own life cycle
  4. Performs shorter running tasks
  5. Accesses shared data

If the watch loses connectivity and you try to enter something, it “goes into spin mode”. The app needs to be restarted for connectivity to be restored.

App Groups – used to share data between iPhone and watch extension

Communication with the Watch

  1. App groups – cannot launch anything, just move data – but can have a monitor loop
  2. OpenParentApplication – call from extension to launch the iPhone app in the background
  3. HandleWatchKitExtensionRequest – only way to get into the app delegate
  4. Darwin notifications – the watch notifies the extension that something is done – a bridge between the iOS app and the watch extension using only app groups. The iOS app loops, looking for data to change in the shared area.
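The monitor loop in the last bullet can be sketched generically. The real bridge uses app groups and Darwin notifications in Objective-C or Swift; this Python sketch only illustrates the shape of the pattern – one side writes to a shared area, while the other loops watching for the data to change:

```python
import threading
import time

# A dict stands in for the app-group shared container.
shared = {"version": 0, "payload": None}

def ios_app_monitor(results, stop):
    """The 'iOS app' side: loop, watching the shared area for changes."""
    seen = 0
    while not stop.is_set():
        if shared["version"] != seen:      # data changed in the shared area
            seen = shared["version"]
            results.append(shared["payload"])
        time.sleep(0.01)

results, stop = [], threading.Event()
monitor = threading.Thread(target=ios_app_monitor, args=(results, stop))
monitor.start()

shared["payload"] = "hole-3-score"         # the 'watch extension' writes...
shared["version"] += 1                     # ...then signals by bumping a version

time.sleep(0.1)                            # give the monitor time to notice
stop.set()
monitor.join()
print(results)
```

In the real setup the Darwin notification replaces the version check, waking the iOS app instead of making it poll.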

Images on the Watch

  1. SetImage works with UIImage
  2. SetImageData – sends bitmap data – faster than regular SetImage
  3. SetImageNamed – sends only the name to watch to reference a previously loaded image
  4. Can create animated image if you send a set of images
  5. 20MB cache
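A hypothetical sketch of the trade-off behind the last three items: once an image is in the watch’s 20MB cache, sending just its name is far cheaper than resending the bitmap. The class and the eviction policy are invented for illustration; they are not WatchKit API:

```python
CACHE_LIMIT = 20 * 1024 * 1024  # the watch's 20MB image cache

class WatchImageCache:
    """Illustrative model: send bitmap data for new images, just the name
    for images already cached on the watch."""

    def __init__(self):
        self.cached = {}     # image name -> size in bytes
        self.bytes_sent = 0  # traffic over the Bluetooth link

    def show(self, name, data):
        if name in self.cached:
            self.bytes_sent += len(name)   # like SetImageNamed: name only
        else:
            self.bytes_sent += len(data)   # like SetImageData: full bitmap
            self._cache(name, len(data))

    def _cache(self, name, size):
        # Naive eviction when the 20MB budget would be exceeded.
        while self.cached and sum(self.cached.values()) + size > CACHE_LIMIT:
            self.cached.pop(next(iter(self.cached)))
        self.cached[name] = size

cache = WatchImageCache()
cache.show("badge", b"\x00" * 2048)   # first time: full bitmap (2048 bytes)
cache.show("badge", b"\x00" * 2048)   # second time: just the name (5 bytes)
print(cache.bytes_sent)               # 2053
```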

Notifications – the short-look notification is the initial display; the long-look is displayed after the user continues to look at the short-look. The long-look is also scrollable and customizable.

Glances – swipe up on the watch face to access. The only interaction is tapping a glance to open the watch app. Glances can be used as an app launcher.

posted in:  Apple, NYC Mobile, Watch