Betty Margolis makes a final selection — no computer can tell for sure what is meaningful to a person — and adds spoken commentary. With perhaps 10 key images selected, the whole process might take a few minutes, rather than the hours it would take to go through all photos. Caregivers review the electronic slideshow regularly, beginning soon after the event, Lee says, in the hope that important memories will be preserved even years later rather than allowed to slip away.
What if, instead of intensively exercising a failing memory, someone still living independently just wants a prompt now and then?
"If you ask older people what most disturbs them, it's not things like forgetting medication or getting lost," says Ronald M. Baecker, a University of Toronto computer scientist who heads the school's Technologies for Aging Gracefully (TAG) Laboratory. "Several studies show it's forgetting names."
To help people remember names, TAG Lab researchers developed Friend Forecaster, which runs on a cellphone and uses GPS to track your location. It predicts, based on information you've previously entered about your social network and places you go, who you're likely to meet. It's similar in some ways, Baecker says, to phone-based GPS systems teenagers use to keep tabs on networks of friends.
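The core prediction the article describes — who you're likely to meet, given where you are — can be sketched very simply. This is a hypothetical illustration, not the TAG Lab's actual implementation: it just ranks contacts by how often they've been encountered at the current place, using invented names and data.

```python
from collections import Counter

def likely_contacts(encounter_log, current_place, top_n=3):
    """Rank contacts by how often they've been met at this place.

    encounter_log: list of (place, person) pairs from past outings.
    Returns up to top_n names, most frequently met first.
    """
    counts = Counter(person for place, person in encounter_log
                     if place == current_place)
    return [person for person, _ in counts.most_common(top_n)]

# Hypothetical history: places a user visits and the people met there.
log = [
    ("community center", "Alice"),
    ("community center", "Alice"),
    ("community center", "Bob"),
    ("bakery", "Carol"),
]
print(likely_contacts(log, "community center"))  # ['Alice', 'Bob']
```

A real system would also weigh the day of the week and time of day, but the basic idea is the same: past co-occurrence at a place predicts future encounters there.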
The TAG Lab is working on a similar system called Marco Polo, which helps people with nominal aphasia. This condition, often caused by a stroke, can make it extremely difficult to recall words or names. But Marco Polo, an application for iPhones or Android devices, is designed to take up the linguistic slack. "Based on where you are, it tells you words or phrases that might be useful," he says. So if you frequently buy the same loaf of bread at the same bakery, the device infers what you're doing and gives you the words, if needed. "You can look at it yourself for a prompt, or show it to someone, or even have it talk to them."
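The "based on where you are" step Baecker describes amounts to matching the phone's GPS fix to a known place and retrieving phrases stored for it. Here is a minimal sketch of that lookup, with invented coordinates and a made-up phrase book; the real app's data model and inference are not described in detail.

```python
import math

# Hypothetical map: (lat, lon) of known places -> (place type, phrases
# the user has found useful there).
PLACES = {
    (43.6629, -79.3957): ("bakery", ["a loaf of rye, please", "sliced, thank you"]),
    (43.6677, -79.3948): ("pharmacy", ["I'm here to pick up a prescription"]),
}

def suggest_phrases(lat, lon):
    """Find the nearest known place and return its stored phrases."""
    def dist(coords):
        return math.hypot(coords[0] - lat, coords[1] - lon)
    nearest = min(PLACES, key=dist)
    name, phrases = PLACES[nearest]
    return name, phrases

name, phrases = suggest_phrases(43.6630, -79.3958)
print(name, phrases[0])  # bakery a loaf of rye, please
```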
Marco Polo, in its third iteration, is on the verge of being ready to market, Baecker says.
In the Robotics Institute at Carnegie Mellon, roboticist Takeo Kanade is developing a memory-support system that's less a device you carry with you to help out, à la the TAG Lab systems, and more of a computerized extension of your own body.
First Person Vision, as it's called, uses two tiny cameras mounted on eyeglasses. One camera looks at your eye to see where you're looking, and another looks at whatever you are observing. Then, based on what you or a helper has configured the system to do, it provides information. If you want people's names, for instance, it could use a face-recognition program to identify people you look at, providing details through a display projected in front of your eyes.
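The face-recognition step the article mentions typically works by converting a face image into a numeric "embedding" and finding the closest match among known people. The sketch below illustrates only that matching step, with tiny made-up 3-number embeddings and hypothetical names; real systems use much larger vectors produced by a trained model.

```python
import math

def identify(embedding, gallery, threshold=0.6):
    """Match a face embedding to the closest known person.

    gallery: dict mapping name -> stored embedding (list of floats).
    Returns the best-matching name, or None if nothing is close enough.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, best = min(((n, dist(embedding, e)) for n, e in gallery.items()),
                     key=lambda pair: pair[1])
    return name if best <= threshold else None

# Hypothetical gallery of known faces.
gallery = {"Betty": [0.9, 0.1, 0.3], "Ron": [0.2, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.31], gallery))  # Betty
print(identify([0.0, 0.0, 0.0], gallery))     # None (no close match)
```

The threshold is what keeps the system from guessing wildly at strangers — exactly the kind of judgment call Kanade argues still benefits from a human in the loop.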
First Person Vision isn't the first concept for a wearable device that provides information, says Kanade, director of Carnegie Mellon's Quality of Life Technology Center. In the past, however, developers were intent on making such systems autonomous, needing little or no input from users. They now understand that computers simply aren't yet capable of some things needed for interacting with people — even something as basic as distinguishing men from women, he says. "Then, we didn't realize people could help the computer do a better job," he says. "Now we see it as a symbiotic system, human and computer working together."
A monitor for daily routines
Not every example of intelligent assistive technology is about memory, however.
Anind K. Dey, a Carnegie Mellon professor of human-computer interaction, is collaborating on the development of a GPS-based system for older drivers that determines how they like to drive, and chooses appropriate routes.
"We know elder drivers typically don't like making unprotected left turns, and some will drive a long way and make three right turns to avoid a left turn," he says. So the system automatically chooses efficient routes drivers are likely to be comfortable with. It can even predict where a driver is headed and help them smoothly avoid road hazards and traffic jams.
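The "three right turns instead of a left" behavior Dey describes can be captured by scoring candidate routes with a penalty for each unprotected left turn, so a slightly longer route can still win. This is a hypothetical sketch of that scoring idea — the penalty value, route format, and data are all invented, not taken from the actual system.

```python
LEFT_TURN_PENALTY = 5.0  # hypothetical cost, in minutes of perceived effort

def route_cost(route, left_penalty=LEFT_TURN_PENALTY):
    """Score a route: drive time plus a penalty per unprotected left turn.

    route: list of (minutes, maneuver) steps, where maneuver is one of
    "straight", "right", or "left" ("left" meaning an unprotected left).
    """
    time = sum(minutes for minutes, _ in route)
    lefts = sum(1 for _, maneuver in route if maneuver == "left")
    return time + left_penalty * lefts

# Hypothetical alternatives: a direct route with one unprotected left
# vs. a longer "three rights" detour around the same intersection.
direct = [(4.0, "straight"), (1.0, "left")]
three_rights = [(4.0, "straight"), (1.5, "right"), (1.5, "right"), (1.5, "right")]

print(route_cost(direct), route_cost(three_rights))  # 10.0 8.5
best = min([direct, three_rights], key=route_cost)   # the detour wins
```

Tuning the penalty per driver is what would let the system learn how a particular person "likes to drive": a driver comfortable with left turns gets a penalty near zero and simply takes the direct route.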