
The Siri API

Samuel Iglesias has written an excellent post detailing the (possible) challenges developers will have to cope with if Apple decides to release a Siri API.

The second half of Siri integration, Semantics, is the tricky part: something that most iOS developers have never dealt with. Semantics will attempt to capture the various ways a user can ask for something, and, more importantly, the ways Siri, in turn, can ask for more information should that be required. This means that developers will need to imagine and provide “hints” about the numerous ways a user can ask for something. Sure, machine learning can cover some of that, but at this early stage Siri will need human supervision to work seamlessly.

This is exactly what I have been wondering since speculation about a Siri API started last year. How will an app tell Siri the kinds of input (read: natural language) it accepts? Will developers have to specify every phrasing manually? Will Apple provide automated tools to associate specific features (say, creating a task in OmniFocus) with common expressions and words? And how will Apple review the natural language processing developers implement in their apps?
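
For the sake of illustration, here is a minimal Swift sketch of what such developer-provided hints could look like. To be clear, this is pure speculation: `SiriActionHint`, `SiriHintRegistry`, and everything else below are invented names, not part of any Apple SDK, and the naive string matching at the end stands in for exactly the natural language problem Apple would have to solve.

```swift
import Foundation

// Hypothetical sketch only: none of these types exist in any Apple SDK.
// They illustrate what developer-provided "hints" might have to express.

/// One action an app exposes to Siri: the phrasings a user might employ,
/// the parameters Siri must collect, and the follow-up questions Siri
/// can ask when a parameter is missing.
struct SiriActionHint {
    let actionIdentifier: String          // the app's internal command
    let samplePhrases: [String]           // ways a user might ask for it
    let requiredParameters: [String]      // data needed before the action runs
    let followUpPrompts: [String: String] // Siri's question for each missing parameter
}

// The example from above: creating a task in a to-do app like OmniFocus.
let createTask = SiriActionHint(
    actionIdentifier: "com.example.tasks.create",
    samplePhrases: [
        "add a task",
        "remind me to",
        "create a to-do"
    ],
    requiredParameters: ["taskTitle"],
    followUpPrompts: ["taskTitle": "What should the task say?"]
)

// A registry the app might hand to the system so Siri can route
// matching utterances back to it.
struct SiriHintRegistry {
    var hints: [SiriActionHint] = []
    mutating func register(_ hint: SiriActionHint) { hints.append(hint) }
}

var registry = SiriHintRegistry()
registry.register(createTask)

// Naive routing by substring match. This crude stand-in is precisely the
// part that needs real natural language processing, machine learning, and
// the human supervision the quoted post talks about.
func hint(for utterance: String, in registry: SiriHintRegistry) -> SiriActionHint? {
    let lowered = utterance.lowercased()
    return registry.hints.first { hint in
        hint.samplePhrases.contains { lowered.contains($0) }
    }
}

print(hint(for: "Remind me to buy milk", in: registry)?.actionIdentifier ?? "no match")
```

Even in this toy form, the scaling problem is obvious: no finite list of sample phrases covers every way a person can ask for something, which is why the quoted post expects machine learning plus human supervision.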

Of course, the Siri API is still at the speculation stage, but it does make sense to expand Siri into an assistant capable of working with any app. The TBA sessions at WWDC are intriguing, and Tim Cook said we'll be pleased with the direction Apple is taking with Siri. Right now, I'd say integration with third-party software would be a fantastic direction.