When Apple bought Siri, the already fairly popular voice-search-based artificially intelligent assistant app, it integrated the technology into its own iOS platform and revealed it to the world (a second time) as if it were brand new. Much of the world sat in awe of Siri's cunning and wit. But more so, people were amazed that a quick and painless voice search client could return such rich and relevant information with little effort from the end user.
I, however, wasn't immediately convinced voice input and search were the future. I have always felt it was a side-step rather than a lunge toward the future. Voice input is becoming far less coarse as hundreds of thousands – maybe even millions – stare into their phones to ask them questions as if their pocket computers hold the answers to the universe. But let's be honest, voice input isn't exactly new or revolutionary. And in a lot of cases, it's no more useful than taking the time to tap out a message with your fingers. (Go ahead, try it. It won't kill you, I promise.)
Instead, both Qualcomm and Google are paving the path straight to the future. (For anyone who doesn't know, the future arrives October 21, 2015, and you better believe I have my calendar marked.) But how is that, exactly?
By creating a true artificially intelligent digital assistant. With Siri, the user has to constantly consult the artificial assistant with a question or command. If you never long-press the home button on your iPhone (or your new iPad once iOS 6 arrives), you might forget Siri exists. It's hidden, tucked away, never to disturb you if you don't want or need its services.
Just two weeks ago at the Google I/O 2012 developers conference, Google unveiled its rather beefy digital assistant, Google Now. The more you use it and the more you Google Search, the more Now knows about you. While a bit creepy at first, I've learned to ignore how eerily accurate it is at times and embrace the benefits.
With time, it begins offering suggestions and recommendations based on time, location, upcoming events and your travel plans. Without any user interaction whatsoever, Google Now will suggest restaurants or other popular places to visit if it knows you're away from home. If you travel out of the country, it will automatically provide currency converters and translators. And all it takes to teach Google Now your favorite sports team is to run a few Google searches about your team of choice; over time, you will start receiving live-updating scores during games, and so on.
All of this happens automatically, in the background, with absolutely zero required input from the user.
Google Now's list of services isn't extensive just yet, but I'm willing to bet El Goog is working around the clock to expand its reach and to get it in the hands of as many Android users as possible.
Popular tech evangelist Robert Scoble weighed in with his opinion on the contextual movement, which he dubs "Mobile 3.0", yesterday on his blog Scobleizer. But his focus was on Gimbal, a platform from Qualcomm Labs that allows Android and iOS developers to equip their apps with context awareness, effectively making almost any application artificially intelligent.
Gimbal will make use of all the sensors in your smartphone (or tablet), collect the data and "make sense of it" as Scoble explains. "Developers will have a single data pool on your cell phone to talk with (Qualcomm was very smart about privacy — none of this data leaves your own cell phone unless you give it permission to)," says Scoble, and they can use that information to appeal to you in new and previously unseen ways.
The example of Gimbal given on its site is of a mother who likes pizza and movies. Gimbal knows this based on her recent pizza purchases and movie watching habits. After picking up her three children from school (they also happen to like pizza and movies), the mother's phone alerts her as she's within two minutes of a new pizza place and offers a deal to her for a free movie rental with her pizza order.
Geofencing, which is what is going on when the mother's phone alerts her once she's within a certain distance of the pizza shop, isn't exactly a new concept either. But the combination of the geofence with "Interest Sensing", as Gimbal calls it, and other contextual awareness (such as time, speed and any other information a sensor in your phone can give off) makes it unbelievably useful and seamless.
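At its core, a geofence is just a proximity test: compare the device's current coordinates against a fence's center and radius, and fire an alert when the device crosses inside. Here's a minimal sketch in Python with made-up coordinates for the pizza shop (real platforms like Gimbal do this through OS-level location APIs with battery-friendly monitoring, but the underlying math looks like this):

```python
import math

# Hypothetical fence: a pizza shop's coordinates and a 200-meter trigger radius.
PIZZA_SHOP = (40.7580, -73.9855)  # (latitude, longitude)
FENCE_RADIUS_M = 200

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, center=PIZZA_SHOP, radius_m=FENCE_RADIUS_M):
    """True when the device is within radius_m of the fence center."""
    return haversine_m(lat, lon, center[0], center[1]) <= radius_m

# A position across the street trips the fence; one a kilometer away does not.
print(inside_geofence(40.7585, -73.9850))  # close by -> True
print(inside_geofence(40.7680, -73.9850))  # ~1 km north -> False
```

A real service would run this check on a stream of location updates and combine the result with other signals (time of day, speed, inferred interests) before deciding whether an offer is worth interrupting you for.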
There are two immediate downsides to this context-aware movement, however. First, it's an extremely easy way for advertisers to blast your phone with ads based on your location and preferences. Second, with all of your smartphone's sensors constantly firing and collecting data, your phone's battery, which probably isn't great to begin with, will take a hit. However, Roland Ligtenberg, product developer at Qualcomm Labs, told Scoble that "if you did all this in hardware there would be a lot less battery cost."
Either way, I'm excited and ready for this Mobile 3.0 movement to start. I'm ready to throw caution to the wind and allow my smartphone to know everything about me and tell me what I like and where I want to go. (Okay, maybe that's a bit extreme. But you get the idea.)
To be completely honest, I wasn't all that excited for Google Now. Sure, I wanted to try it out myself, just to see if it was worth all the fuss Google stirred up. And, initially, I wasn't all that impressed. It just seemed like a beefed-up voice or text search app. But after a few days (and now over a week) of using it, I'm beginning to understand what a context-aware service is truly capable of. I'm with Scoble 100 percent (which is not exactly common) in thinking that this context-aware movement could be the next big phase of mobile, and it could easily stretch to other devices and gadgets. And the thought of that is truly amazing.
What do you think about context-aware digital assistant services? Are they too creepy for your liking? Or will you let them infiltrate every part of your life and learn seemingly everything about you and your preferences? Is Google Now or Gimbal the future?