It’s iPhone 4S day, and one of the headline features of the new handset is its ability to interpret natural speech from the user and provide surprisingly accurate responses.  It’s called Siri, and it’s based on technology purchased by Apple in 2010.

Why is it only rolling out on Apple’s newest phone, and not offered through a software update to the similarly equipped iPad 2 or last year’s iPhone 4?  Reactions have varied.  Some see it as a ploy by Apple to keep sales of its newest products high, while others speculate that the service has hardware requirements older devices can’t meet.  Chris Foresman at Ars Technica leans toward the latter argument:

Right now Siri is limited to the iPhone 4S. Presumably much of the reason for that limitation is that Siri requires a lot of computing power to work. Siri co-founder Norman Winarsky told 9to5 Mac that the Siri app, originally released in early 2010, required a number of workarounds and optimizations to work well on the then-current iPhone 3GS’s 600MHz processor. Even with the significant processing boost gained from the iPhone 4S’s dual-core A5 processor, however, Apple is still calling the tech a “beta” nearly two years after its first public release.

On this week’s Talk Show podcast, John Gruber looks at it a different way.  He argues that since most of the processing and interpretation of speech happens on Apple’s servers (Siri only works with an active network connection), the company is hesitant to roll it out to every capable device on day one for fear of serious performance problems when the servers are swamped with requests from tens of millions of new users giving it a spin.  By offering it only to the 3 million or so new iPhone 4S owners, Apple can scale the service up gradually and tweak things as it goes.

I think Apple stands to gain more by bringing Siri to more of its devices in the coming year.  I used the previous version of Siri on my old iPhone 3GS and it performed flawlessly.  It’s been widely speculated that Siri’s current voice recognition is powered by Nuance on the server side, so at least on the surface the iPhone appears to spend more time sending and receiving Siri data than actually processing requests.
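
To make that division of labor concrete, here’s a minimal, hypothetical sketch: the handset just records and uploads the audio, and a remote server does the recognition and sends back the result.  The endpoint, audio codec, and response format below are assumptions for illustration, not Apple’s or Nuance’s actual protocol.

```python
# Hypothetical client for a server-side speech service -- illustration only.
# The phone's work is limited to capturing audio and making a network request;
# the recognition and interpretation happen on the server.
import json
import urllib.request

def ask_assistant(audio_bytes: bytes, server_url: str) -> dict:
    """POST compressed speech to a recognition server and return its parsed
    reply, e.g. {"transcript": "...", "answer": "..."} (shape is assumed)."""
    request = urllib.request.Request(
        server_url,
        data=audio_bytes,                           # the recorded query
        headers={"Content-Type": "audio/x-speex"},  # codec is an assumption
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response)                  # server did the heavy lifting

# Usage, with a made-up endpoint:
# reply = ask_assistant(open("query.spx", "rb").read(),
#                       "https://speech.example.com/recognize")
# print(reply["transcript"])
```

If that picture is roughly right, the bottleneck is server capacity and network latency, not the difference between the A5 and the 3GS’s 600MHz processor.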

Furthermore, Apple now has the ability to reduce its dependency on Google by steering search requests elsewhere.  Siri pulls answers from Wolfram Alpha when possible, and restaurant queries are handled by Yelp.  Google results still appear, but seemingly only when the other sources haven’t panned out.  Considering how much traffic iOS has driven to Google over the years, the long-term implications are huge.
