We present this thought-provoking piece by Peter Hancock. Peter is a past President of HFES and is currently a professor at the University of Central Florida.
I am sitting here, writing this, my first blog, in the Florida Mall, just outside the two major computer stores, where I have been perusing and contemplating the various touch-activated devices that lie between the traditional laptop and the emerging smartphone. From my perspective, all of these offerings are lamentably slow in their response capacities. Not only do they stutter under my traveling fingers, they are abhorrently inhibitive of cross-interaction. Nevertheless, they are nominally the state of the art. What is it, then, that they do not do; and, more to the point, what is it they both could do and should do?
The interface as a touch-screen is all well and good by me. I like embodied cognition and believe in tactile interaction. But the primary instrument of this form of communication, my hand, is severely limited by this unifunctional touch-screen. The interaction is degraded and constrained; the imagination and possibilities even more so. Had I but one useful digit (and perhaps you know the one I mean), then the interface restriction might be barely tolerable. However, the human hand is a marvelous thing (only think of a surgeon's dexterity, or a concert pianist's versatility), and the touch-screen here is forcing me to play "chopsticks."
Touch-screens on these systems should be multiply, and dynamically, homeomorphically mapped to the functional utility of the whole of the hand (qua action end-effector). [If you don't understand this last sentence, break out a dictionary.] We can plot the generalized capacities of the hand with respect to the innervative (and innovative?) patterns of the sensory and motor homunculi. This begins to inform us as to the extensive patterns concerning how the hand can function; observations of highly dexterous individuals provide us with even further insight.
Now imagine an interface that permits multiple and changing touch points at will, so that opening, closing, and merging of active applications can be conducted severally and at the limits of the responsive motor system, not at the present sad, frustrating, and generally slow rate of the technical, tactile, touch-screen buffering capacity. Then users will be able to play information symphonies and knowledge concertos woven from vast data tapestries (surely blogs encourage mixed metaphors?). How different this will be from the single-channel, punctate activation of each sequential and non-convivial app! This facilitation is, of course, purely on the tactile input side. I say nothing here directly about the limits, and integration, of that other motor channel, voice input. All that I can say is that the current rate-limiter of interaction remains the interface, and by now it should not be.
The active mind is where interactions occur. Discrete apps may be sufficient for individual, purpose-directed tasks. However, the rate of, and the moral imperative which drives, the computational support system must go beyond any such restrictions. Apps, by their nature, tend to be closed-ended in their functionality. Indeed, apps may be described by their constraints. What is needed for our next step is an unbounding that cannot be realized in such a rate-limited environment. We need Paderewskis of the interface: virtuosos who will not merely play the instrument to its greatest unbound extent, but will act to extend our understanding of what that instrument (interface) can be. Interfaces need to be conduits of imagination, not portals of limited commonality, however bright, shiny, appealing, and financially profitable such packaged, modular elements might be. Conviviality is more than a mere concept; it is a commitment to an expanded, unburdened, and unbound horizon. Such devices are far more than high-street toys. They are the keys to civilization's survival, whose locks are getting harder and harder to open day by day.