It appears Google did decide to skip the 13th floor. What could we be waiting for?
A scenario-friendly Bluetooth proximity API. With the freedom of motion Glass enables, it seems its APIs could point you toward the nearest beacon by combining signal strength with the accelerometer and compass (a rough sketch of the idea follows). But even without that, plain API support for proximity would be great for a project I’m working on.
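In the meantime, here is a minimal sketch of the kind of logic such an API might wrap. It is plain Java, nothing from the actual Glass GDK; the class name, the log-distance path-loss constants, and the "strongest-signal heading" heuristic are all my own assumptions, not anything Glass exposes today.

```java
// Hypothetical sketch: estimate range and a rough bearing to a beacon by
// pairing RSSI samples with the compass heading at which each was taken.
import java.util.ArrayList;
import java.util.List;

public class BeaconDirectionEstimator {
    // Assumed calibration values, not real Glass/GDK constants.
    private static final double TX_POWER_AT_1M = -59.0;   // beacon RSSI at 1 m (dBm)
    private static final double PATH_LOSS_EXPONENT = 2.0; // roughly free-space

    private static class Sample {
        final double rssiDbm;
        final double headingDeg; // compass heading when the reading was taken
        Sample(double rssiDbm, double headingDeg) {
            this.rssiDbm = rssiDbm;
            this.headingDeg = headingDeg;
        }
    }

    private final List<Sample> samples = new ArrayList<>();

    /** Record one RSSI reading together with the current compass heading. */
    public void addSample(double rssiDbm, double headingDeg) {
        samples.add(new Sample(rssiDbm, headingDeg));
    }

    /** Log-distance path-loss model: rough distance in meters from one RSSI reading. */
    public static double estimateDistanceMeters(double rssiDbm) {
        return Math.pow(10.0, (TX_POWER_AT_1M - rssiDbm) / (10.0 * PATH_LOSS_EXPONENT));
    }

    /** Heading at which the signal was strongest -- a crude proxy for beacon direction. */
    public double estimateBearingDeg() {
        if (samples.isEmpty()) throw new IllegalStateException("no samples recorded");
        Sample best = samples.get(0);
        for (Sample s : samples) {
            if (s.rssiDbm > best.rssiDbm) best = s;
        }
        return best.headingDeg;
    }

    public static void main(String[] args) {
        BeaconDirectionEstimator estimator = new BeaconDirectionEstimator();
        // Pretend the wearer slowly turned in place while a beacon sat to the east (90 deg).
        estimator.addSample(-78, 0);
        estimator.addSample(-70, 45);
        estimator.addSample(-64, 90);
        estimator.addSample(-72, 135);
        System.out.printf("Nearest beacon roughly %.1f m away, toward %.0f deg%n",
                estimateDistanceMeters(-64), estimator.estimateBearingDeg());
    }
}
```

Nothing fancy: the wearer's natural head motion supplies the heading sweep, and the RSSI-versus-heading pairs are enough for a "warmer/colder" style answer without any extra hardware.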
Less picky head-on. I’ve resorted to turning off head detection every time I have people try scenarios, because Glass simply won’t turn on for them. With head-on off, demos work great, but I accumulate a fabulous collection of upside-down pictures & videos of my desk. There should be a middle ground.
Eye-based hands-free. Google’s recent “don’t be a glasshole” PR work reminds us they designed Glass to be used with voice. But there are times when blurting out a command to no one in particular is akin to throwing a grenade, and of course not everything can be done through voice in XE12 anyway, so why throw grenades? Sure, more should be possible with voice, but I’d like to see experiments similar to “wink” (to take a photo) that use the inward-facing sensor. I’m not exactly sure what; I just think that sensor is way underutilized.
Preparing for the alternative hardware story. I believe wearables’ future is in software and services, not the hardware itself. People don’t wear the same things all day, and the singular Glass hardware kills facial symmetry. But the software seems well thought-out and, like Android proper, should work on many devices that target particular industries, situations, and fashions. Bolting the XE onto four designer frames is not what I mean. Let’s see some real alternatives that share the same software and services.