mLearnCon Reflections: The Affordances of Wearable Tech by Clark Quinn

We welcome Clark Quinn, Executive Director of Quinnovation, as a Guest Writer to TWIST to share his thoughts on last week’s mLearn Conference and Expo.

At the (great) mLearnCon conference this past week, one of the recurrent topics was wearables. As I’ve advocated in the past, we should be looking at the key affordances of new technologies to try to leapfrog the limits of our imagination. It occurred to me that I haven’t really looked at wearables from this perspective, so it’s clearly time.

Now, we should not be limited to the existing examples. Google Glass, for instance, is still in the experimental stage (as Karen McGrane asked, is Glass the wearable Segway?), so we should work from the conceptual opportunities, not the current versions.

To start with, we can review my previous attempt at characterizing mobile as a whole, and we see that the core capabilities – content, compute, communicate, and capture – are unchanged.  So what do we have that is unique?

First, what is different is that these devices are worn. That is, they’re on you, not near you in a holster, pocket, or pack. That means they are hands-free, which matters for many applications, such as when both hands are busy: they can still provide support even with no free appendages.

So what distinguishes the different wearables from one another? To be clear, at this point we are talking glasses or watches. How do they differ? Both have visual and audio output, processing capability, and connectivity. However, glasses lay visuals on top of your existing vision, augmenting your view. Watches, on the other hand, are on your wrist, maintaining tight contact with your skin.

So, while each shares much in common with your existing smartphone (a more apt reference point than a tablet), they offer unique opportunities. Phones provide a separate view experience inherently, and while the screen *can* be combined with a camera view to augment visual reality, that’s not their default mode. Glasses can augment as a natural outcome, as well as provide small separate visuals.

Watches, on the other hand, have the advantage of contact with the skin, and there are metrics that depend on physical contact. You can augment a phone with Bluetooth sensors on the body, but with a watch, biometrics such as pulse, oxygenation, glucose levels, and more are on tap.

For both, however, there is one unique element, and that is continuity. They are just *on* for most of your travels. Your watch and your glasses (at least for those of us needing sight correction, in the case of glasses :) are likely to be on, so they have the opportunity for continual monitoring and augmenting. While they may need to draw our attention, they can be continually sampling the context – visual, auditory, and bio – and that will provide new opportunities to do important things.

While I think visual augmentation and bio monitoring are very cool, the continual, personally-connected nature may have the greatest long-term implications. And while initial social acceptance may require them to be somewhat innocuous as we wrestle with social issues like privacy, I reckon we will likely eventually treat them as fashion accessories as well as cognitive ones.
