Horace Dediu, a former Nokia employee turned Apple analyst, was one of the first people we can remember suggesting that the invention of new and different input methods is directly responsible for the user interface revolutions we've experienced over the past few decades. Apple changed things with the mouse, then with the multitouch screen, and now with Siri. Senseye thinks they're one step ahead of everyone else, and that eye tracking is going to be the next major user input. They put together a video demoing eye-controlled scrolling in a web browser, steering a spaceship in a game, and automatically dimming a smartphone's screen when you're not looking at it.

So how does it work? Besides their custom software, all that's needed is a device with a front-facing camera, something practically every smartphone on the market already has, and an infrared LED. The folks at Senseye say that the first Senseye-enabled smartphones will start shipping in the summer of 2013, but they do plan on releasing an application in 2012 that works, albeit crudely, on existing Android smartphones and tablets.
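Senseye hasn't published its algorithm, but the screen-dimming trick from the demo can be crudely approximated with off-the-shelf computer vision: grab frames from the front camera, check whether a pair of eyes is visible, and dim the backlight after a couple of seconds without one. Here's a minimal Python sketch using OpenCV's stock Haar cascades; the webcam index and the dim_screen() stub are stand-ins for a real phone's camera and backlight API, not anything Senseye ships.

# Rough sketch of "dim the screen when nobody is looking."
# Assumptions: opencv-python with its bundled Haar cascades, a webcam
# at index 0 standing in for a phone's front camera, and a dim_screen()
# placeholder where a real app would call the platform backlight API.
import time
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_visible(frame):
    """Return True if at least one eye is detected inside a face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]          # search for eyes only inside the face
        if len(eye_cascade.detectMultiScale(roi)) > 0:
            return True
    return False

def dim_screen(dimmed):
    # Placeholder: a real implementation would hit the OS backlight API.
    print("screen dimmed" if dimmed else "screen bright")

cap = cv2.VideoCapture(0)
last_seen = time.time()
dimmed = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if eyes_visible(frame):
        last_seen = time.time()
        if dimmed:
            dim_screen(False)                 # viewer is back: brighten
            dimmed = False
    elif time.time() - last_seen > 2.0 and not dimmed:
        dim_screen(True)                      # no eyes for ~2 s: dim
        dimmed = True

Note that this only answers "is anyone looking?", not "where are they looking?". Actual gaze tracking needs that infrared LED: its reflection off the cornea, paired with the detected pupil center, gives the geometry to compute a gaze point on screen, which is presumably what Senseye's custom software is for.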
Now call us crazy, but until we see more compelling demos, we're not that impressed. What Siri did with voice was let us ask our devices a question, and while it'll take a few months, we're confident that Google and Microsoft will come out with competitive solutions. Eye tracking, on the other hand: what exactly is it going to do that isn't already easy enough with touch? We've also yet to see where touch can go with haptics and bendable screens, which will add extra dimensions to an already well-established paradigm and could make the whole concept of touch radically different.
That being said, we're open to new things, as long as they're genuinely useful. Remember, Microsoft "revolutionized" gaming with the Kinect's body tracking, but has it really hit the mainstream? Nope.
