New York Tech Journal
Tech news from the Big Apple

How to Build a Bulletproof #SDK

Posted on June 3rd, 2016

#YahooTechTalksNYC

06/02/2016  @Yahoo, 229 West 43rd Street, NY, 10th floor


The speaker from @Flurry emphasized four main themes on the way to making happy developers using your SDK:

  1. Respect users (hardware, not people in this case)
  2. Respect developers (people)
  3. Clarify assumptions (more about developers)
  4. Things you can’t control

Within each theme:

  1. Respect users.
    1. Be considerate of battery life. Actions include
      1. Limit network calls
      2. Illuminate the screen only when necessary
    2. Network time is expensive – keep it to a minimum by downloading once and keeping the download in memory
    3. Phone space is limited – when you are done with the data you have downloaded, delete it.
    4. Minimize startup time (see the sketch after this list)
      1. Use techniques to keep startup time below 2 seconds
      2. Don’t block the main thread
      3. If possible, defer loading until after startup
  2. Respect developers
Don’t do anything that causes the app to be rejected from the store, such as renaming system variables in iOS or using IDs in Android that you should not reference
      1. Don’t violate any store policies
      2. Don’t request information that is off limits
      3. Don’t call private APIs
    2. Don’t put all your good ideas in a single SDK
      1. Bloatware is not welcome (see phone space and startup time above)
      2. It’s often better to have several small SDKs
Create slim SDKs that don’t leak memory
3. Clarify assumptions
    1. Document all your assumptions
2. Even better, design the APIs so developers can’t violate assumptions
    3. If the SDK fails, complain LOUDly in the debug logs
4. Things you can’t control
    1. You need to be vigilant for system changes
You cannot prevent system changes, but you can react to them quickly
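
A minimal sketch of the startup advice in theme 1 (the AnalyticsSdk class and its names are hypothetical, not from the talk): the SDK returns from init() immediately and defers its expensive setup to a background thread.

    import android.content.Context;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public final class AnalyticsSdk {
        private static final ExecutorService EXECUTOR = Executors.newSingleThreadExecutor();

        private AnalyticsSdk() {}

        // Called from Application.onCreate(); returns immediately so the host
        // app's startup stays under its time budget.
        public static void init(final Context context) {
            EXECUTOR.execute(new Runnable() {
                @Override
                public void run() {
                    // Expensive work (disk I/O, config parsing) happens off the
                    // main thread, deferred until after startup.
                    loadCachedConfig(context.getApplicationContext());
                }
            });
        }

        private static void loadCachedConfig(Context context) {
            // Reuse the previously downloaded config instead of hitting the
            // network on every launch (network time is expensive).
            context.getSharedPreferences("analytics_sdk", Context.MODE_PRIVATE)
                   .getString("config", null);
        }
    }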

There are differences between #iOS and #Android that require some modifications in the SDK. One example is how quickly an app is terminated on exit. Apple devices tend to have less memory, so iOS is more aggressive about terminating apps quickly and reclaiming memory; Android is less so.

posted in:  Android, Apple, applications, iOS, Yahoo Tech Talks

#RxAndroid: replace anonymous #callback functions

Posted on November 11th, 2015

New York #Android Developers

11/10/2015 @AmericanExpress, 200 Vesey Street, NY


Trevor John and Alex Matchneer spoke about the advantages of using RxAndroid, an Android adaptation of #RxJava, to simplify the handling of asynchronous events.

Trevor John @PlayDraft introduced the Rx concepts of observables and subscribers. Observables are functions that handle externally triggered events, such as button presses or updates from external devices. Observables can filter and manipulate the results, but are not executed until a subscriber subscribes to them. The observable-subscriber structure, along with the lambda style of function calls, removes the need for the anonymous callback classes that were previously the basis for event handling.
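
A hedged illustration of that pattern in RxJava (apiClient, nameView, and the log tag are hypothetical stand-ins): nothing runs until subscribe() is called.

    Observable<String> userName = Observable
            .fromCallable(() -> apiClient.fetchUser())   // deferred; not executed yet
            .map(user -> user.getName())                 // filter/manipulate the result
            .subscribeOn(Schedulers.io())                // run the work off the main thread
            .observeOn(AndroidSchedulers.mainThread());  // deliver on the UI thread

    userName.subscribe(
            name -> nameView.setText(name),              // onNext replaces an anonymous callback
            error -> Log.e("Rx", "load failed", error)); // onError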

Trevor then spoke about the new requirements for creating unit tests with RxAndroid. The challenge is to create test functions without the extended syntax needed to specify mock events. He talked about some methods to simplify the test code and concluded with general advice on testing:

1. Name tests so each name says what it tests, e.g. methodName_context_assertion()
  2. Keep tests short
  3. Limit assertions per test

In the second presentation, Alex Matchneer @FutureProofRetail spoke about lessons learned in extending the functionality of FutureProofRetail’s mobile app, which allows customers to purchase grocery items in the store and skip the checkout line. The initial app has been extended to handle items that need to be weighed. To do this they interfaced a tablet with a scale (using USB), so that customers can weigh items and see a bar code displayed on the tablet. Customers scan the bar code with their mobile phone to add the item to their shopping basket.

To streamline the programming logic Alex used Rx.

He created an observable pipeline which

  1. Receives character strings showing weights from the USB interface with the scale
  2. Filters out strings missing units, decimal points, etc.
  3. Eliminates spurious weights
  4. Waits for the weight to stabilize before displaying the bar code
  5. Displays the “how to use” video on the screen if the scale is unused

Alex showed how specific Rx operators implement these steps (a sketch combining them follows the list):

  1. A series of operators specify when a value is received, whether values are repeated, etc.
  2. The filter operator removes strings that do not match a regular expression
  3. The buffer operator groups individual observations into overlapping or non-overlapping arrays so the code can easily remove the max and min values and average the remaining values
  4. The switchMap operator shows the bar code if the weight is stable for ½ second
  5. switchMap can also be used to trigger the video if the scale is currently unused
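
A minimal sketch of such a pipeline (scaleReadings, trimmedMean(), and showBarcode() are hypothetical; the units and window size are illustrative):

    Observable<Double> weights = scaleReadings
            .filter(s -> s.matches("\\d+\\.\\d+ lb"))       // drop malformed strings
            .map(s -> Double.parseDouble(s.split(" ")[0]))  // parse the numeric weight
            .buffer(5, 1)                                   // sliding window of 5 readings
            .map(window -> trimmedMean(window));            // drop max/min, average the rest

    // switchMap unsubscribes the previous inner observable whenever a new
    // distinct weight arrives, so the bar code appears only after the weight
    // has held steady for half a second.
    weights.distinctUntilChanged()
           .switchMap(w -> Observable.just(w).delay(500, TimeUnit.MILLISECONDS))
           .subscribe(w -> showBarcode(w));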

The observable-subscriber separation simplifies the organization of the code and makes it robust to events triggered by inputs from the scale and user actions.

posted in:  Android, New York Android Developers, Programming

Android apps: #SQL and #Java, #Designers and #Developers, #Information hierarchies, #Translation

Posted on October 23rd, 2015

New York Android Developers

10/19/2015 @Facebook, 770 Broadway, NY


Four speakers talked about different applications and challenges of implementing apps for Android.

Kevin Galligan @Touchlab spoke about accessing SQL databases using Java while retaining the best characteristics of object-oriented programming with relational databases.

Kevin talked about the family of Object Relational Mapping (ORM) utilities that perform this linkage. The offerings can be compared on performance and structural features. Performance issues revolve around the following:

  1. Handling hibernation
  2. Understanding the db structure
  3. Foreign references (parent and child)

However, amongst Android ORMs there is not much difference on simple tables. For more complicated queries, source-generated ORMs are measurably faster than reflection-based ones. Even so, Kevin warned the audience not to trust the published benchmark performances.

He then offered a high-level comparison of the main ORMs (a sketch of the annotated-POJO style most of them share follows the list):

  1. ORMLite – lots of apps run it, however it is slow-ish
  2. ActiveAndroid – looks like support is dropping off
  3. GreenDAO – external model intrusion, fast (source-gen), single primary key
  4. DBFlow – source-gen, fast, poor foreign relations, supports multiple primary keys
  5. Squeaky – APT port of ORMLite, source-gen, single primary key
  6. Realm – not like SQLite, column oriented, single-field primary key, queries tied to a thread, fast, simple foreign calls, but it’s under development and so not entirely stable
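
As a rough illustration of the annotated-POJO style several of these ORMs share (this sketch uses ORMLite’s annotations; the Event/Venue model is made up):

    import com.j256.ormlite.field.DatabaseField;
    import com.j256.ormlite.table.DatabaseTable;

    @DatabaseTable(tableName = "events")
    public class Event {
        @DatabaseField(generatedId = true)  // single auto-generated primary key
        long id;

        @DatabaseField(index = true)
        String name;

        @DatabaseField(foreign = true)      // parent reference (another annotated class)
        Venue venue;

        Event() {}                          // ORMLite requires a no-arg constructor
    }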

The second speaker was Max Ignatyev @Sympli, whose Sympli.io is a collaboration tool to improve communication between designers and developers. It offers a single platform for design changes, eliminating confusion over whether communication happens via Dropbox, email, text, etc.

The designer uses Sketch or Photoshop, and the developer sees specs in dp and a common color palette, integrated as ready-to-use assets in the IDE.

Sympli.io is currently a free offering in beta test.

Next, Liam Spradlin @Touchlab spoke about how users navigate through applications and how to make interfaces so users know what to do next as they complete tasks. He proposed an information hierarchy in which users see what is to be done immediately, in an inverted pyramid:

  1. Primary information and actions
  2. Important details
  3. Background information

The last speaker, Mike Castleman @Meetup spoke about the challenges of making Meetup.com more accessible to non-English speakers.

The first challenge is to translate all strings, with sentence-level translation being especially important to avoid problems with gender and plurals across multiple words in a sentence. They considered third-party translators such as Google, but decided to use in-house native speakers as translators since they know the product. They also provide context to translators by uploading screen shots, and they use PhraseApp as their management tool to organize the translations.

Once translated, the layout needs to be altered as strings often become longer in other languages, such as German.

Dates and times have different forms and punctuation, so they use tools such as CLDR (Common Locale Data Repository) and libcore.icu.ICU to get the conventions correct.
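
As a small illustration of why such tools matter (standard Java, not Meetup’s code), the same date renders differently in each locale:

    import java.text.DateFormat;
    import java.util.Date;
    import java.util.Locale;

    public class LocaleDates {
        public static void main(String[] args) {
            Date now = new Date();
            // Each locale carries its own date conventions (order, month names,
            // punctuation), so never assemble dates by string concatenation.
            for (Locale locale : new Locale[] {Locale.US, Locale.GERMANY, Locale.JAPAN}) {
                DateFormat df = DateFormat.getDateInstance(DateFormat.LONG, locale);
                System.out.println(locale + ": " + df.format(now));
            }
        }
    }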

Sorting strings and search can also be challenging, as some languages, such as Japanese, are sorted by the sound of the words, not their lexical representation.

posted in:  Android, databases, New York Android Developers

Review of Google I/O 2015

Posted on June 26th, 2015

NYC GDG

06/24/2015 @Google, Chelsea Market, 9th & 15th St, NY


Speakers presented their impressions of the #Google I/O meeting that was held in San Francisco, May 28 to 29.

First, Nitya Narasimhan gave a rapid overview of the directions of Google’s research over the past year. The initiatives include

  1. Advanced Tech and Products (ATAP)
  2. Chrome browser update
  3. Polymer & web components
  4. Cardboard
  5. Internet of Things

1. ATAP – advanced tech & products – includes

  1. Jacquard – use fabric as a touch sensor
  2. Ara – modular smart phone with hot-swappable components
  3. Soli – use radar for gestural interactions – interactive interfaces can be on anything
  4. Abacus – replace passwords with a series of gestures and your unique actions
  5. Vault – security archive
  6. Tango – indoor location sensing

2. Chrome browser update – focus on R.A.I.L. performance standards (see Paul Irish’s keynote for a full presentation), which are performance goals to improve the UI, along with other things to improve the user experience.

3. Polymer & modern web APIs (web components):

  1. you can create custom HTML tags
  2. you can import HTML components into other HTML files
  3. templates to create Polymer elements
  4. shady DOM – faster than shadow DOM but less capable – for older browsers

Polymer has families of custom components that make it easier to create custom HTML tags:

  1. Iron – core web components
  2. Paper – material design components
  3. Google web components – Google services
  4. Platinum
  5. Gold – targets ecommerce

Polymer starter kit is available with app templates

Material design is now available for both web and mobile.

4. Cardboard VR

  1. Cardboard 2.0 is now available for iPhone; it can handle bigger phones and the device is easier to assemble
  2. Expeditions is an initiative to create content for Cardboard for education
  3. Jump – a GoPro hardware rig which captures a 360 degree view, with software to stitch it together; output to Cardboard and YouTube
  4. SDK for Cardboard – examples include a walkthrough for building with Unity
  5. Design – designing for virtual reality; see the online demo and the Cardboard Design Lab for guidelines
  6. Tango – 3D motion tracking and depth scanning

5. Internet of Things

  1. Nest – thermostat
  2. Thread Group – an industry consortium looking to standardize web and communication links between objects
  3. Brillo – the smallest version of the Android OS, able to run on almost anything; a tool for building with Brillo is expected out soon
  4. Weave is the messaging system
  5. Speech and neural networks

Next, Dario Laverde covered Android M, the upcoming version of the OS. It’s also called MNC, short for ‘Macadamia Nut Cookie’, and has an expected release date in the 3rd quarter.

  1. Google Now can be invoked from any app. This means one can ask for the lead singer of a song as an app is playing that song
  2. App permissions: at run time you are prompted to grant access to the camera, etc. – similar to the iPhone (see the sketch after this list). Developers need to check their apps on the emulator to see if this affects older apps. Users can also view permissions by app or by capability (e.g. camera)
  3. Voice Interactor allows for confirmation of actions by voice
  4. Fingerprints for authentication
  5. Android backup – backs up all data by default
  6. Google Play Services 7.5 – one can build deep links to features within your app
  7. Can cast the screen image to a remote display
  8. Smart Lock, so passwords entered on one device do not require repeated password entries across web and mobile devices
  9. New exercise types added to the Google Fit database
  10. Apps on untouched devices become “inactive”; however, developers can whitelist an app so it does not go to sleep
  11. Android design support library: a notification icon can now be a resource id or a bitmap (not just jpg and png files)
  12. New version of Android Studio, including real-time step-by-step debugging of C++ in the Android NDK
  13. Styluses are now supported on tablets
  14. Tools: Systrace is a tool to locate problems. Also a new compiler optimization
  15. External storage such as USB devices supported by adb
  16. Graphics – separate torch light control from the camera controls
  17. Audio – MIDI interface
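
A minimal sketch of the runtime-permission flow from item 2, using the support library’s compatibility calls (startCameraPreview() is a hypothetical helper inside an Activity):

    private static final int REQUEST_CAMERA = 1;

    void openCamera() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED) {
            // On M the user is prompted at run time, not at install time.
            ActivityCompat.requestPermissions(
                    this, new String[] {Manifest.permission.CAMERA}, REQUEST_CAMERA);
        } else {
            startCameraPreview();
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions,
                                           int[] grantResults) {
        // The user's choice arrives asynchronously in this callback.
        if (requestCode == REQUEST_CAMERA && grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            startCameraPreview();
        }
    }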

Other developments related to Android include

  1. New Android Developer Guide
2. Android Wear now allows maps to be displayed on a watch face – see Github/googlemaps…
  3. Project Tango
  4. Tango tablet contains the sensors for motion tracking, area learning and depth perception.
  5. Google announced 3 Contests to build Tango-powered apps in utility, VR or entertainment
  6. Tablet sells for $512

The third speaker, Ralph Yoozo, spoke about Firebase, an online, real-time database. Firebase is easy to set up since it requires no server-side code to set up security. It can receive data using web sockets or virtual infinite web pages. He has built two applications using Firebase:

  1. A web page to show runners their times (adjusted for their start time) as they crossed the finish line
2. The bills in discussion in the NY state senate: see https://nysenate.firebaseapp.com/#

Ralph also noted

  1. A Firebase is open by default, so everyone can read and write to it, but this can be adjusted
  2. You can run a curl command from the command line to test the app
  3. There is a simulator page to debug the code

Ralph also briefly talked about Universal Second Factor (U2F), which promises better security than a password alone. It is a small device (it can attach to a key chain) that provides a second layer of security in addition to your password. It uses the FIDO protocol.

The meeting was concluded by two brief talks

In the first, Howard Goldstein @NYTimes talked about Smart Lock, which integrates Chrome’s password manager and extends it to Android. Howard said it was very easy for the New York Times to integrate Smart Lock into their applications (a sketch of the credential request follows the list):

  1. It needs a GoogleApiClient
  2. Request credentials (the password)
  3. If the request succeeds, the app can auto-login
  4. If it fails, some credentials may not have passwords
  5. If it fails, there might be multiple accounts on the device – have the user select an account
  6. The app can push credentials to Google so the user does not need to enter them again
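
A rough sketch of that request flow, assuming the Play Services credentials API of the time (signInWith() and the fallback branch are placeholders):

    CredentialRequest request = new CredentialRequest.Builder()
            .setPasswordLoginSupported(true)
            .build();

    Auth.CredentialsApi.request(googleApiClient, request).setResultCallback(
            new ResultCallback<CredentialRequestResult>() {
                @Override
                public void onResult(CredentialRequestResult result) {
                    if (result.getStatus().isSuccess()) {
                        signInWith(result.getCredential());  // auto-login
                    } else {
                        // No saved password, or multiple accounts on the device:
                        // resolve via a picker or fall back to manual sign-in.
                    }
                }
            });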

The second brief talk was by Anna Yan, a first-time visitor to I/O. She spoke about two devices demonstrated there:

  1. Exiii makes robotic arms costing $200. They send captured motions to an Android phone and can be customized using 3D-printed parts
  2. Neosensory maps sound patterns to different locations on a vest, making use of the sense of touch. The vest can be used as a sensory substitute for the deaf or in extremely noisy environments.

posted in:  Android, GDG, Internet of Things, NYC GDG, Programming, Wearables

#Android Development: #Java #Threads, #Video Frame Timing, #Tango, training programmers

Posted on June 19th, 2015

New York Android Developers

06/18/2015 @Grubhub, 5 Bryant Park, 15th Floor, NY

Four speakers talked about different aspects of programming/development.


In the first presentation, Jamie McDonald @SoundCloud talked about how #RxJava uses a new paradigm in Android for managing asynchronous processing threads.

Android runs all UI processing on the main thread, so processes requiring large amounts of resources are better executed on separate threads so as not to impact user interactions. There are several methods for starting threads and broadcasting the output. These include:

  1. Runnable classes
  2. IntentService for background processes.
3. AsyncTask, which performs tasks in doInBackground with a callback.

Of these, AsyncTask may have the most straightforward syntax, but it lacks error handling and good ways to schedule tasks. For instance, a series of ordered tasks must be coordinated in nested async callbacks. Generally, with these methods the execution order of tasks may be hard to control, as the operating system has evolved from serial execution to thread pools and back to serial as phones have moved from single to multiple processors.

RxJava (and RxAndroid) adds to this toolkit using a reactive programming framework developed originally by Netflix. It includes a library for observable abstractions with some functional programming. Everything is represented as a stream receiving observables and outputting to subscribers.

  1. An observable receives and operates on objects.
2. A subscriber states an interest in receiving events and determines what to do with the outputs or errors
  3. A scheduler links observables and subscribers and specifies which threads handle which tasks.

Jamie presented code snippets illustrating an observer, a subscriber and a scheduler. He also talked about some of the most commonly used stream operators (a sketch using them follows the list):

  1. flatMap – takes a source and maps it to another output
  2. onErrorResumeNext – recover if something goes wrong
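
A hedged sketch of how these pieces combine (api.fetchTrack(), adapter, and showError() are hypothetical stand-ins):

    Subscription sub = Observable.from(trackIds)
            .flatMap(id -> api.fetchTrack(id))          // map each id into another stream
            .onErrorResumeNext(Observable.empty())      // recover if something goes wrong
            .subscribeOn(Schedulers.io())               // scheduler: do the work off the UI thread
            .observeOn(AndroidSchedulers.mainThread())  // scheduler: deliver on the UI thread
            .subscribe(track -> adapter.add(track),     // subscriber: handle outputs
                       error -> showError(error));      // subscriber: handle errors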

The scheduler can also be used to create a single event bus which puts items in a queue and handles errors and outputs. This proves especially useful when the main screen changes: the new screen merely subscribes to events on the scheduler’s queue.

But RxJava is new, so there are still some issues: unit-test debugging may be challenging for long call chains, and there is a steep learning curve with new concepts (e.g. backpressure, when the observable emits more outputs than the subscriber can consume). He noted that RxJava still requires code to handle the case when the orientation of the phone changes from portrait to landscape or vice versa.


In the second talk, Jason Sendros @Facebook talked about synchronizing animation frame updates with the display refresh rate. The standard for screen refresh is 60 frames per second (about 17 ms per frame), so the software should render each image in less than 17 ms.

So, what happens when you miss a frame? Nothing initially. The previous frame is redisplayed and the screen starts its next refresh delay. What happens if you miss a second frame? The display appears jerky or does not scroll smoothly.

How do you detect problems and fix them? There are several tools/methods to diagnose and fix the problem:

  1. adb shell dumpsys gfxinfo: information on a per-frame basis, but it cannot be used in production
  2. Choreographer (available since the Jelly Bean OS) gives you the vsync time stamp, so if you go over 33 ms you can use Choreographer to disable the frame output (see the sketch after this list)
  3. Max frame time is a good diagnostic metric that measures the percent of time spent in buckets: % of time with 0 dropped frames, 1 dropped frame, etc.
  4. Systrace allows you to locate the most expensive worker in each frame. It is superlight, so it can run on a user’s device without impacting performance.
  5. Automated testing can be used to look inside high-level categories.
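
A minimal sketch of item 2 (API 16+; the 33 ms threshold and log tag are illustrative): watch the vsync timestamps and flag whenever more than one 17 ms frame budget elapses.

    private long lastFrameNanos;

    private final Choreographer.FrameCallback frameCallback =
            new Choreographer.FrameCallback() {
        @Override
        public void doFrame(long frameTimeNanos) {
            if (lastFrameNanos != 0) {
                long elapsedMs = (frameTimeNanos - lastFrameNanos) / 1000000;
                if (elapsedMs > 33) {
                    // Two frame budgets missed: log it, or skip this frame's
                    // output rather than rendering late.
                    Log.w("Frames", elapsedMs + " ms since last vsync");
                }
            }
            lastFrameNanos = frameTimeNanos;
            Choreographer.getInstance().postFrameCallback(this);  // re-arm for the next vsync
        }
    };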

He recommended frequent use of microbenchmarks (at Facebook, they closely monitor the cost of notifications for dataset changes) to prevent performance from regressing as software upgrades are made.

Testing is challenging since one must balance stability of test results (replicability) and performance in the real-world (accuracy).

The meetup was concluded by two lightning talks. In the first, Matthew Davis talked about Project Tango, which is a stack of sensors supplementing the standard functionality of Android devices. The sensors include

  1. A higher-resolution camera
  2. A motion-tracking camera
  3. An IR depth sensor
  4. More RAM

The sensors are designed to allow geolocation indoors, using motion tracking, area learning, and depth perception to create a visual inertial odometer based on key visual markers.

Sensors collect data which is stored in a database, so the system learns about your environment over time. Its database learns whether you have been in the area before and updates the information. Data files can also be transmitted so you can coordinate activities with others (an example is a community cube-stacker game in which several players build a common object a la Minecraft).

Qualcomm is designing a hardware set for phones.

Amy Quispe @Coalition4Queens talked about their project, #AccessCode, which teaches local, disadvantaged students how to code. Students generally do not have programming backgrounds.

She presented two apps made by students during their 8-week Android course:

  1. A four-function plus scientific calculator
  2. Meme-ify – make memes from pictures and background info with text markup, which can be shared on social media.

She encouraged audience members to be instructors in the project.

posted in:  Android, applications, New York Android Developers, Programming

Bringing Droidcon Montreal to NYC: #VR, #noSQL, Surface #Textures

Posted on April 17th, 2015

New York Android Developers

04.16.2015 @Spotify, 45 W 18th St, NY

Three speakers presented excerpts from their presentations in Montreal


Dario Laverde spoke about VR on Android. He first outlined the hardware capabilities of the HTC Vive needed to create an immersive VR experience. These requirements are designed to ensure “presence” (viewing without getting sick). They include a 90 Hz refresh rate, 2 screens with 1080 resolution, and the ability to track your location.

Last summer, Google introduced Google Cardboard, which allows you to place a phone in a holder to experience VR. Today Google announced that it will officially certify some commercially available viewers. This allows the software to be fitted to the lenses on each viewer.

Dario also talked about the need for a good pair of headphones for positional audio. Android developers can create surround sound using the OpenSL ES spec. He mentioned that a magnet on the viewer interfaces with NFC to provide a single-click input to the software. He noted that some phones do not have NFC, in which case they can be controlled using a Bluetooth clicker.

Dario walked through some code snippets:

  1. Intent filter – identifies the app as a cardboard app within Google play: g.co/cardboardapps
  2. onNewFrame – get the location of your head
  3. onCardboardTrigger – monitor if NFC invoked

He noted that when adapting tilt-sensitive games, the movement assumptions are different. For instance, tilting a phone in a racing game moves the car left or right, while a left or right head movement does not alter the direction of the car, only your view within the car.

He concluded by talking about how the limitations on computation power are the main barrier to adding other inputs (beyond head motion and NFC).

Next, Will Hoang @Couchbase talked about Couchbase’s noSQL database for local storage on Android. He talked about how their database gives apps off-line capabilities and automatic, real-time syncing to the cloud and other connected mobile devices when connections are available.


The third speaker was Lisa Neigut @ElectricObjects. Electric Objects has created an Android-powered display for works of art. It received its funding in a Kickstarter in April 2014. They have created a large Android display panel similar to a tablet but without touch sensitivity.

The display was originally powered by a Raspberry Pi, but they have moved to an Android board running KitKat 4.4 (API 19). They expect to upgrade the OS once Freescale upgrades the iMX6D board.

Artists create materials in one of three formats

  1. Static images: jpg, png
  2. Html – Chromium WebView – use Crosswalk
  3. GIF

Lisa then talked extensively about the challenges of displaying animated GIF files.

Since Android does not have a native GIF decoder, they first needed to understand the structure of GIF files: each GIF contains a main frame and a series of subframes. Each frame is specified by a size and the coordinates of that frame within the main picture. Frames are presented in sequence to give the impression of an image in motion.

GIFs in which only a small part of the image is updated by each frame make fewer demands on the processors and therefore appear as a smooth set of motions. Frames with more changes can cause frames to be dropped or mangled on output.

They explored several methods for displaying GIF files:

  1. Media player was inefficient: images did not look good
  2. Chrome (GIFs were packaged in an html file) had a long initial delay (several minutes) upon first loading, but ran well afterwards. This was a problem since users would want to browse the artworks before deciding to display a piece.
  3. Glide – does not drop frames, but displays only 3 to 4 frames/sec
  4. Native – gifDrawable, but they still needed better performance

To improve performance they used the Systrace utility to visualize the bottlenecks in the refresh cycle when processing each frame. Specifically, they used this to better understand the Android Graphics pipeline: SurfaceFlinger & BufferQueue.

Eventually they were able to attain 30 frames-per-second performance on a 720p display by directly accessing the surface textures through the TextureView pipeline. This is consistent with recommendations made by some game-engine makers: modify textures when possible to keep gameplay fluid.
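
A rough sketch of rendering through a TextureView’s SurfaceTexture (nextGifFrame() is a hypothetical decoder call; the real pipeline would drive this per frame from a background thread):

    TextureView view = new TextureView(context);
    view.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture texture, int w, int h) {
            // Draw directly onto the view's backing surface.
            Surface surface = new Surface(texture);
            Canvas canvas = surface.lockCanvas(null);
            try {
                canvas.drawBitmap(nextGifFrame(), 0, 0, null);
            } finally {
                surface.unlockCanvasAndPost(canvas);  // hand the frame to the compositor
            }
        }
        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture t, int w, int h) {}
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture t) { return true; }
        @Override public void onSurfaceTextureUpdated(SurfaceTexture t) {}
    });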

posted in:  Android, databases, Virtual Reality

Mar GDG Meetup: #ServiceWorkers, Advanced #TextViews and ng-conf Recap

Posted on March 19th, 2015

NYC GDG

03/18/2015 @Google, Chelsea Market, 75 Ninth Ave, NY

Presentations on #TextView and #ServiceWorkers were supplemented by four shorter presentations.

Chiu-Ki Chen spoke about how Android TextView in the UI can be used to create a wide variety of exciting visual effects.

She first concentrated on how the contents of a TextView can create text shadows, glowing text, custom fonts, color-gradient fonts, and text pattern coloring.

She next showed how spannable strings can be used to create strings of characters with a variety of adjustments in font height, color, clickability, etc. within a single TextView (a sketch follows). A single TextView can therefore create the look and feel of tagged segments in an Html document.
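
For illustration (the span range and showDetails() call are hypothetical), one TextView can carry several styles at once:

    SpannableString text = new SpannableString("Tap here for details");
    // Style characters 4-8 ("here") independently of the rest of the string.
    text.setSpan(new ForegroundColorSpan(Color.BLUE), 4, 8, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
    text.setSpan(new RelativeSizeSpan(1.5f), 4, 8, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
    text.setSpan(new ClickableSpan() {
        @Override
        public void onClick(View widget) { showDetails(); }
    }, 4, 8, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
    textView.setText(text);
    textView.setMovementMethod(LinkMovementMethod.getInstance());  // enables the click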

As a final touch, Chiu-Ki showed how to create text on a lined page by combining text and graphics on a canvas. She also showed how emoji Unicode codepoints can be remapped to images which can be resized and positioned like characters.


As a follow-up to last month’s meeting, Jeff Posnick spoke about Service Workers. Service Workers are #javaScript code that gives native-app capabilities to web pages. These include local responses to events, off-line capabilities, and push notifications in Chrome:

  1. JavaScript runs outside of the main page => no DOM access
  2. Responds to events
  3. Applies to all pages under a scope and persists when tabs are closed
  4. A cache is exposed to the service worker, so pages can be used off-line.
  5. Can modify CSS.
  6. For security reasons, it only runs on https locations

Programming. Service Workers are based on Promises (a syntax which is an alternative to callbacks for asynchronous code). In the code, an asynchronous process is called and returns immediately with a promise that will eventually be fulfilled. Once the promise is fulfilled, the code starts the next process. Debugging can be done in Chrome using the page chrome://serviceworker-internals

Implementation: when a web page is first loaded, the service worker is not invoked (but this can be overridden). This is done so the page still runs on browsers that may not support Service Workers. In addition, the JavaScript should include code like

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');  // the registration path here is illustrative
}

so the Service Worker only runs when the browser can handle it.

The service worker should live at the root of your domain, since it only has control over network requests within that part of the tree.

Things you can do: if you are offline and the video you requested is not available, it can return the previous video you downloaded. It can serve a custom off-line page. It can trigger a push notification so long as Chrome is running, even if the page is not active. It can send notifications to a wearable watch.

In the shorter presentations

  1. Alex Qin – @CoalitionForQueens – www.c4q.nyc talked about their program to teach adults to code. They are looking for instructors.
  2. Elcin Erkin – updated the audience on the latest developments in AngularJS from the ng-conf 2015 in Salt Lake City Utah. The main discussion was about migrating to Angular 2.0. The upgrade will feature
    1. Better performance and design.
    2. New TypeScript created jointly by Microsoft and Angular
    3. Speed up using unidirectional bindings.
    4. Changes in syntax and semantics.
    5. A new router to better serve complex applications.
    6. The Angular team has taken over the Material Design component.
    7. Incorporate web components.
3. Ray Tsang – in honor of Pi Day (3/14), he calculated pi to 100B digits in 5½ hours on a multicore processor. (I wonder what the distribution of digits looks like.)
4. Dario Laverde – showed how easy it is to create Wearable watch faces using Android Studio. In less than a minute he created a face without writing a single line of code.

posted in:  Android, GDG, Programming

Building mobile apps at Yahoo

Posted on March 4th, 2015

Yahoo Tech Talks

03/04/2015 @Yahoo, 229 West 43rd St., NY


Pierre Maloka moderated presentations on #MobileApps development by six designers, developers and testers at #Yahoo. The speakers described their process from setting goals to product delivery.

The mobile development team at Yahoo was assembled two years ago and all speakers emphasized the importance of differentiating their product from those of their competitors.

Differentiation starts with a user-friendly design that emphasizes images, links to easily explore the news, and is responsive to user requests. The speakers talked about tools, organization, internal communication and testing.

Tools:

  1. Keynote PDF – can extract images for manipulation
  2. Flinto – make prototypes easily for early testing
  3. Pixate – tool to build animations and interactions in native apps
4. NodeJS and Express – a web application framework
  5. Redis – data structure server to speed data from server to client
6. Flurry – captures sessions, session length, active users, new users, retention, and frequency of use. Also used to monitor events (matched to Key Performance Indicators)
  7. Slack – instant messaging which can be grouped into ad hoc channels
  8. Vidyo – video conferencing.
9. Yo/yo – link shortening; links to documentation and dashboards.

Organization:

  1. Quarterly development and review cycle
  2. Planning stage sets the goals and prototypes ideas along with initial user testing.
  3. Development stage iterates on regression and integration testing and testing by all Yahoo employees (dogfooding). It culminates in a technical and design review before being released to the public
4. Client-server tasks can be changed without reinstalling the app, so the server side holds features that are being evaluated or functions that will change frequently
  5. Tasks are broken into a presentation layer and a data layer for ease of maintenance and debugging

Internal Communications:

  1. Developers collaborate with design to figure out what to do and what is easy
  2. Get feedback as early as possible
  3. Yahoo has a pod structure so all decision makers sit together.
  4. Solicited feedback throughout all of Yahoo in the dogfood phase.

Testing:

  1. Conduct testing early and often
  2. Major updates are reviewed and certified for best practices
  3. Instrument all products to monitor what users are doing

Additional comments included

  1. Most testing is manual and uses internal tools.
  2. Use Jenkins for testing on iOS. Don’t use UI testing built into XCode.
  3. Virtually all development is native on both Android and iOS, but the Android group might take advantage of web login when security is updated.
  4. The UI sticks to the format for the platform (iOS or Android). Visuals stick to the Yahoo web style.
5. On the iOS side, they have tested Swift, but will stay an Objective-C shop for now. The main issues involve the need for more development of the IDE tooling. Also, the compiled files tend to be large, as Apple wants to ensure backward compatibility across new versions of Swift.

posted in:  Android, iOS, Yahoo Tech Talks

Almost all Kevins: #Android development ideas for #ContentProviders, #ColorAdjustment, #transitions, #CodeTesting

Posted on February 27th, 2015

New York Android Developers

02/25/2015 @Tumblr 35 E 21st St., NY

Five speakers gave lightning talks about a diverse set of topics.

Kevin Grant –  Developing with Motion in Mind: Strategies for Motion-First development in your Android applications.

Kevin Coughlin – Working with ORMs: Differences between Realm and Cupboard, and discusses his experience with both.

Kevin Schultz – Building Maintainable Android Applications (and maybe a quick talk on the new Android testing tools that were just released!)

Eric Leong – OpenGL Basic on Android: How to use OpenGL in your apps when working with images and videos.

Zack Parness – Hiccup: an open source library that tries to simplify the way ContentProviders are used for data access.

In the first talk, Zack Parness talked about the advantages of using #Hiccup, which layers a #RESTful interface over #ContentProviders for data access. This separates the UI from the content source and simplifies the program logic, making it easier to maintain and upgrade each side without changing the other.

Next, Kevin Coughlin introduced two ORMs (Object-relational mapping – converting data between incompatible type systems).

#Cupboard is a SQLite wrapper that uses POJOs (plain old Java objects) and supports the SQL-style operations put, get, update, delete, and query.

#Realm takes a full database approach: a clean builder-type API, and it is cross-platform. But it is in beta: it requires getters/setters, its performance is unproven, and it has no ContentProvider support, no migration support, and occasional crashes.

The third speaker, Eric Leong, talked about performing color correction using #OpenGL on Android. Color correction processes each pixel independently of other pixels, thereby allowing parallel processing of the image on the GPU. Eric talked about the importance of minimizing the movement of data between the CPU and the GPU, and how OpenGL converts polygons in the image to fragments and then puts a texture on each fragment. The color correction is then done on the texture (a shader sketch follows).
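
A hedged sketch of the per-pixel idea (a trivial brightness shader, not Eric’s code), compiled from Java with GLES20:

    // The fragment shader runs once per pixel, in parallel, on the GPU.
    private static final String COLOR_SHADER =
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform sampler2D uTexture;\n" +
            "uniform float uBrightness;\n" +
            "void main() {\n" +
            "  vec4 color = texture2D(uTexture, vTexCoord);\n" +
            "  gl_FragColor = vec4(color.rgb * uBrightness, color.a);\n" +
            "}\n";

    int shader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
    GLES20.glShaderSource(shader, COLOR_SHADER);
    GLES20.glCompileShader(shader);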

Kevin Grant next talked about how screen transitions improve the user experience and how to implement them in the Lollipop version of the operating system.

He first showed how an interface without transitions can be hard to navigate. In such an interface, the user presses a button and the screen immediately changes. This provides no visual hints on which button was pressed and how the button-press relates to the new screen. This increases the memory load on the user and can make it hard for the user to remember where they are when they click through multiple screens. He used Instagram and Twitch as examples of mobile interfaces which have this problem.

Kevin next walked the audience through an interaction on Google Play Games. It uses fades and gradual scaling to make the site much easier to navigate.

Kevin then talked about how to implement transitions. SDK 21 / Lollipop is the first Android OS that provides the tools to easily implement transitions. Lollipop can render two images, parts of which are both displayed during the transition, and SDK 21 has the functions in the toolkit to take advantage of this.

In SDK 21 there are two types of transitions:

  1. Hero transitions – one screen to the next – default to move
  2. Content transitions – can enter or leave: fade, explode, slide – default to fade

The toolkit includes routines to enter and exit the transitions, callbacks to create complex transitions and listeners executed once the transition is completed. Further information is available from Alex Lockwood.

In a previous post, material design (for example, Polymer) was presented; it is closely allied with transitions.

Aliya Merali from Coalition for Queens is looking for volunteers to teach coding to 30 students enrolled in a 20-hours/week course. The program seeks to give a diverse set of students the tools to become professional programmers.

In the final presentation, Kevin Schultz @Gilt spoke about the importance of testing strategies during code development. He emphasized that every developer has a testing strategy; the highest-cost strategy is no formalized testing strategy, which means that the customer does the testing. Besides being customer-unfriendly, this strategy also slows the development process.

  1. He likes unit tests as the backbone of your test suite: they are narrow in scope and quick to run. His main recommendation was to properly structure the code. One standard recommendation is separating the computation and the user interface. In terms of typical Java code for Android apps, this means separating the Fragment (or Activity) from the viewModel (a sketch follows this list). Some ways to do this include:
    1. The fragment is responsible for loading models, networking, finding views, threading
    2. The viewModel is responsible for business logic
    3. Keep references to views entirely in the ViewModel
    4. Do not pass the context to the viewModel; pass in resources
    5. Don’t have methods for each view.
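
A small sketch of that split (the class and string resource are hypothetical): the viewModel takes Resources rather than a Context, so its logic can be unit-tested without an Android environment.

    public class PriceViewModel {
        private final Resources resources;  // resources passed in, not a Context

        public PriceViewModel(Resources resources) {
            this.resources = resources;
        }

        // Pure business logic: trivially testable with a stubbed Resources.
        public String formattedPrice(int cents) {
            return resources.getString(R.string.price_format, cents / 100.0);
        }
    }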

posted in:  Android

Beacons & Geolocation

Posted on November 7th, 2014

11/7/2014 @HelenMills space, 137 W 26th St, NY

Event sponsored by www.gimbal.com #beaconday14


Gimbal is an SDK and analytics platform for location-based mobile apps. Sessions were divided into technical walkthroughs for creating iOS apps and talks by vendors using Gimbal’s services or providing infrastructure for targeting ads based on mobile location.

The two technical sessions showed how to use Xcode to construct apps with the Gimbal SDK. Geolocation apps are based on the following:


  1. Location hardware, which can be GPS (or other global systems) combined with finer-grained services such as BTLE (Bluetooth low energy) transponders.
    1. The S-20 transponder is palm size, takes regular batteries, and transmits for up to two years without a battery change. It transmits in Gimbal format and also beacon format.
    2. The S-10 (see picture) is thumb size and works only for a few months on its battery. It transmits in Gimbal format, but can be reprogrammed to transmit in beacon format.
  2. BTLE hardware to receive the signal. As of now this is only available on iPhone, but it will be coming to Android phones with BTLE in early 2015.
  3. Software: iOS Gimbal version 1 is released, but will be replaced by version 2, which is in beta. The v2 interface requires fewer software calls to the SDK. There is an Android SDK, but it is being revamped for OS 5.x, which will be released in 2015. My understanding is that the current Gimbal Android SDK is primarily for use with geofences and does not work for geolocation. The software needs to smooth the GPS and beacon inputs, create triggers for arrivals/departures, etc. The software maintains a list of beacon locations.
  4. Database software to create ad content, match to location information and determine the actions when users arrive or depart from locations.
  5. Analysis software to refine your use of the geolocation data.

Gary Damm and Sidd Panigrahi also described geofencing, which is triggered by someone being within a boundary. They contrasted it with geolocation, which emphasizes proximity to a point in space.

Other presenters emphasized applications of geolocation.


Karen Pattani-Hanson & Megan Barry @UrbanAirship presented a case study from the US Open tennis tournament, where 20 beacons covered the tennis center and were able to provide notifications:

  1. Welcome first-time visitors
  2. Feature activities – live streaming, US Open radio, live prediction challenge
  3. Sponsorship monetization – e.g. Esurance used iBeacon to identify people coming to its booth
  4. Sell last-minute tickets – segment the audience according to location; those on the grounds had a 32% click-through rate

Dan Maxwell @VerveMobile talked about how Verve looks outside the store to drive traffic into the store: locate customers near the store, push inducements to visit, and customize offers depending on location within the store. Their technology verifies that the customer entered the store and went to a particular part of it, pops up a coupon once the customer is at that location, and matches the coupon to redemptions.

Antonio Tomarchio @Beintoo talked about building networks of beacons across different venues and locations and building a network of advertisers. He emphasized the importance of providing a trustworthy opt-in or opt-out mechanism.

Anthony Dorment @Phigital talked about a tool that creates cards to be displayed when triggered by an arrival at a location. Cards can include messages, media, video, user interaction, Google maps, etc. The goal is to make it easy to link content with the service that initiates it at a location.

Lior Nir @ShopAdvisor talked about a service across multiple vendors to alert customers to offers when they are near certain locations.

Brian Spracklen @SparkCompass talked about two case studies where geolocation data spurred user participation:

Case study #1 – Ole Miss tries to encourage attendance at sports other than football. In the past, students needed to check in to get points; beacons now replace the check-in and encourage attendance once a program is announced. The system has also been distributed to local merchants. In the future it will integrate with mobile payments and wearables to cut concession lines: place an order, and as you approach the concession you are sensed and asked for your payment method.

Case study #2 – San Diego convention center – Comicon

Spark Compass will be coming out with a Spark app in the next few weeks to program your own beacons and your own ads.

Toby @ControlGroup talked about some out-of-the-box uses for geolocation, culminating in a demonstration of a companion robot: a mobile chair that can follow you (or rather, your smart phone) using location sensors on the left and right arms.

posted in:  Android, applications, Geolocation, iOS