It appears Google did decide to skip the 13th floor. What could we be waiting for?
A scenario-friendly Bluetooth proximity API. Given freedom of motion, Glass's APIs could plausibly point you toward the nearest beacon by combining signal strength with accelerometer/compass readings. But in lieu of that, Android 4.4's support for BLE proximity would be great for a project I'm working on.
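The "signal strength → proximity" half of that idea is well understood: you can get a rough distance estimate from RSSI with the standard log-distance path-loss model. A minimal sketch (this is generic math, not a Glass or Android API; the `txPower` calibration value and path-loss exponent are assumptions that would need per-beacon tuning):

```java
// Sketch: rough distance to a BLE beacon from observed RSSI, using the
// standard log-distance path-loss model. Not a Glass/Android API; the
// txPower calibration value and environment exponent n are assumptions.
public class BeaconProximity {
    /**
     * @param rssi    observed signal strength in dBm
     * @param txPower calibrated RSSI at 1 meter in dBm (e.g. -59, assumed)
     * @param n       path-loss exponent (~2.0 in free space, higher indoors)
     * @return estimated distance in meters
     */
    static double estimateDistance(int rssi, int txPower, double n) {
        return Math.pow(10.0, (txPower - rssi) / (10.0 * n));
    }

    public static void main(String[] args) {
        // At the 1 m calibration point, the estimate is exactly 1 meter.
        System.out.println(estimateDistance(-59, -59, 2.0)); // → 1.0
        // 20 dB weaker in free space ≈ 10x farther away.
        System.out.println(estimateDistance(-79, -59, 2.0)); // → 10.0
    }
}
```

The "direction" half is where real sensor fusion with the compass/accelerometer would come in; RSSI alone is too noisy for bearing, which is why an API-level solution would be so welcome.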
Less picky on-head detection. I've resorted to turning off on-head detection every time I have people try scenarios, because Glass simply won't turn on for them. And if I forget to turn it back on, I accumulate a fabulous collection of upside-down pictures & videos of my desk. There should be a middle ground.
Hands-based, voice-free input. Google's recent "don't be a glasshole" PR work reminds us that they designed Glass to be used with voice. But there are times when barking a command just doesn't work (like when I wanted to live-tweet at Hackfort), and of course, as of XE12, not everything can be done by voice anyway.
I’d like Glass to ship with a small pocket keypad, like a screenless Blackberry, for touch-typing into the timeline.
That’s a good segue to my final point:
Preparing for the alternative hardware story. I believe wearables' future is in software and services, not this early hardware that has poor battery life, is uncomfortable for all-day use, can't be used with your own sunglasses, kills facial symmetry, and has been legitimately made fun of on SNL.
But the software is visionary, and, like Android proper, could work on devices that target particular industries, situations, fashions, and needs, starting yesterday. I think we'll start to see this affect the software (i.e., builds of the OS that run against a spec/VM instead of XE hardware specifically) sooner rather than later.