“THIS IS SO GREAT! You gotta try this!” I’m standing at an exhibition booth in the Convention Center watching a man seated at a PC. There are 3-D goggles on his head, a kind of joystick in his hand, a charming Swedish assistant guiding his movements. This clearly isn’t some virtual reality shoot-’em-up; all he’s doing is slowly moving the little pen-sized device. When I finally get my chance at the contraption (the line’s been a half-dozen deep), I find out what all the fuss is about.
This Swedish company, called Reachin for reasons that soon become obvious, has developed software that integrates visual 3-D environments with what’s called “haptics,” or the experience of touch. By manipulating the little device in front of me, and staring through the goggles at a mirrored image of the monitor screen, I have the sensation of touching the pen against a set of objects and shapes—a rubber ball, a tube of toothpaste. I press on them, feel their different densities and textures, run the pen along their contours. The primary application for this technology, I’m told, will be medical training, and soon I am, nauseatingly enough, rooting around in the “soft-tissue simulator,” pressing on someone’s animated gall bladder, pulling on their liver, clipping off a piece of stomach. It’s as if I’ve reached directly into the computer and found someone’s innards.
Reachin, which is setting up its first U.S. sales office in Bellevue, was just one of several hundred companies represented at last week’s giant international conference on “Computer-Human Interaction,” or CHI 2001. The conference attracted thousands of participants. Bill Gates gave the opening address. And it covered an astonishing, often amusing, range of topics.
Sessions ran from basic stuff about user interfaces and mobile-phone displays to far more arcane studies, such as “Detecting Deception in Technologically Mediated Communication” (i.e., how to tell when someone’s lying in an email). Also prevalent was the sort of inflated intellectualizing of the mundane—“Social Navigation of Food Recipes”—worthy of any MLA conference.
BILL GATES’ WILLINGNESS to address the group represented something of a venture into hostile territory. Microsoft, after all, has taken plenty of flak over the years for the bloat and difficulty of its products, especially the Office suite. Gates said that Microsoft had done some good things in human-computer interface design “and also done its fair share of bad things.” (Gates made no mention of Clippy, the paper-clip help agent in Office, which came in for some abuse down in the conference poster room from a Georgia Institute of Technology researcher who wrote: “An unexpected cartoon character may cause confusion…[and be] silly and annoying.”)
Gates said that it was “a very tough problem” to develop software in which, “when the system is not working it’s understandable [to the user] what’s going on and what needs to be done.” But he said he expected things to improve, especially once Internet-based software provided a continuous connection between the company and its users.
The point was emphasized again at a follow-up luncheon, where one of Gates’ senior vice presidents, Craig Mundie, suggested that “persistent connectivity” would provide more ability to communicate with software users and give them automatic guidance. But he also complained that “every time we try to collect data [about user behavior] outside of a controlled environment, we get someone who wants to take us to task on the privacy side.”
Microsoft’s resolve didn’t seem to impress Ben Shneiderman of the University of Maryland, College Park, a longtime expert in human-computer interaction who received the CHI achievement award at the conference. “If a company is truly committed to usability, why is less than 1 percent of its employees devoted to this topic?” he asked at the luncheon. Mundie responded that in addition to the company’s 142 “usability engineers,” Microsoft also had “close to 2,000 people involved in work on the interface. I think it’s a mistake to suggest that usability should be separated from the basic design of the product.”
Shneiderman made a modest proposal that PC users get a nickel every time they’re confronted with a dialogue box they don’t understand and $1 every time the machine crashes. “There’s more time wasted on computers than on the highways,” he contends. He thinks there should be regular quality and performance reports for software and computer equipment, just as there are for the airlines. (Late out of Denver 30 percent of the time; screen freezes when copying into Word from IE 60 percent of the time; etc.)
WITH ANY LUCK, toolbars and dialogue boxes will soon be a thing of the past anyway. There was wide agreement at CHI 2001 that, in the not-too-distant future, “computing won’t be about the thing we call a computer anymore,” and that we’ll all be interacting with everyday devices that are, in essence, computerized. Indeed, one group of whimsical researchers from the IBM Almaden Research Center offered their vision at the conference Design Expo, proposing to dispense with separate devices—the cell phone, the Palm Pilot—and instead “embed computing power in what we’re already wearing”: an e-mood ring that changes color to notify you of a new email message, a video-display bracelet, earrings that serve as your digital earpiece, and a necklace you use for talking back. (No prototypes were available.)
A lot of discussion centered on ways in which the human-computer gap will start to disappear, both as computers become more humanized and as our interaction with them becomes more direct and lifelike (as with the Reachin technology). CHI 2001’s closing plenary speaker, Gregg Vanderheiden of the Trace Research & Development Center in Madison, who focuses on improving usability for the disabled, told me: “I think we’ll eventually be able to do a complete brain-machine interface”—a tremendous boon to those whose motor skills, and inputting ability, are limited.
A researcher from the University of Tokyo envisioned a time when computers will “pay attention to non-verbal signals (prosody, gestures)” and give the impression of “really listening, really caring.” One of the conference’s academic stars, Gavriel Salvendy of Purdue University, told me he thinks we’ll eventually have our personal qualities embedded in a chip, and “the computer will come up with a mode of behavior most appropriate to me.”
For example, he said, “If I’m an extrovert neurotic, the most effective way for the computer to display will be as a stable introvert. It’s the same as the way we choose partners”—opposites attract. Hmmm. A stable introvert computer. Maybe that’s why I always had that inexplicable attraction to HAL.