InsightaaS: In the most recent post on his Rough Type blog, Internet/IT philosopher Nicholas Carr looks at the implications of Google’s broad patent for “hand gestures to signify what is important.” He believes that “Glass turns the human body into a computer input device more fully and more publicly than anything we’ve seen before,” with the result that “Glass turns the real into a simulation of itself.”
Do you wear Google Glass, or does Google Glass wear you?
That question came to the fore on October 15, when the U.S. government granted Google a broad patent for “hand gestures to signify what is important.” Now, don’t panic. You’re not going to be required to ask Google’s permission before pointing or clapping or high-fiving or fist-pumping. The gestures governed by the patent are limited to those “used to provide user input to a wearable computing device.” They are, in particular, hand motions that the company envisions will help people use Glass and other head-mounted computers.
One of the challenges presented by devices like Glass is the lack of flexible input devices. Desktops have keyboards and mice. Laptops have touchpads. Smartphones, tablets, and other touchscreen devices have the user’s fingers. How do you send instructions to a computer that takes the form of a pair of glasses? How do you operate its apps? You can move your head around – Glass has a motion sensor – but that accomplishes only so much. There aren’t all that many ways you can waggle your noggin, and none of them are particularly precise. But Glass does have a camera, and the camera can be programmed to recognize particular hand gestures and translate them into instructions for software applications…
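To make the idea concrete, the camera-driven input scheme Carr describes amounts to a mapping from recognized gestures to application commands. The sketch below is purely illustrative: the gesture names and command strings are hypothetical and do not come from Google's patent or any Glass API; a real system would sit downstream of a computer-vision pipeline doing the actual recognition.

```python
# Hypothetical sketch of gesture-to-command dispatch for a head-mounted
# device. All names here are invented for illustration; they are not
# drawn from the patent or from any Google Glass SDK.

GESTURE_COMMANDS = {
    "frame_with_fingers": "mark_important",  # framing a scene to flag it
    "point": "select_object",
    "swipe_left": "previous_card",
    "swipe_right": "next_card",
}

def dispatch(gesture: str) -> str:
    """Translate a camera-recognized hand gesture into an app instruction.

    Unrecognized gestures are deliberately ignored rather than raising,
    since a camera will inevitably see hand motions that mean nothing.
    """
    return GESTURE_COMMANDS.get(gesture, "ignore")
```

The design point is the same one the paragraph makes: because the camera, unlike a keyboard or touchpad, sees everything, the software must decide which hand motions count as input and silently discard the rest.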
Read the entire post: http://www.roughtype.com/?p=3939