Affective computing aims to detect the user's emotional state so that the system can adapt to the user.
This emotional state could be recorded alongside other data collected at the same time (such as when using wearable digital recorders). That would allow uses such as playing back the parts of a lecture or meeting you missed due to boredom, replaying the funny scenes of a recorded home movie, or simply monitoring stress through the day against time and (GPS) location.
Probably more interesting is adaptive information presentation: a system could provide more or less information as the user's interest varies. Games could respond to emotional feedback, so the avatar changes or the game gets easier as the user grows frustrated. The article even suggests automated music selection based on the listener's mood and environment.
All this said, emotion is very difficult to measure accurately, even in extremely controlled and artificial environments. Some measures commonly used include skin conductivity (galvanic skin response, or GSR), skin temperature, blood volume pressure (BVP), heart rate (derived from BVP), electromyogram (EMG, for muscular electrical activity), and respiration rate.
As a proof of concept, the authors of this article designed and tested the StartleCam. Equipped with a camera, a wearable computer, and skin conductivity sensors, the StartleCam would record from the camera but play back only those scenes during which the user was startled. (They startled the user by having someone come up behind them unexpectedly.) The startle response is quite easy to measure: skin conductivity changes within about 3 seconds of the startling stimulus.
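The detection idea can be sketched as watching for a sharp rise in skin conductance within a short window. This is a minimal illustration, not the paper's actual algorithm; the sample rate, window length, and rise threshold below are assumed values for demonstration.

```python
# Sketch: flag startle events as rapid rises in a skin-conductance (GSR) signal.
# All numeric parameters here are illustrative assumptions, not the paper's.

def detect_startle(gsr, sample_rate_hz=5, window_s=3.0, min_rise=0.05):
    """Return sample indices where conductance rises sharply.

    gsr: sequence of skin-conductance readings (e.g., microsiemens).
    A startle is flagged when the signal rises by at least `min_rise`
    within a `window_s`-second window (roughly the ~3 s response lag).
    """
    window = max(1, int(window_s * sample_rate_hz))
    events = []
    for i in range(window, len(gsr)):
        rise = gsr[i] - min(gsr[i - window:i])
        if rise >= min_rise:
            events.append(i)
    return events

# Flat baseline followed by a sudden rise: only the rise is flagged.
signal = [0.40] * 20 + [0.40, 0.44, 0.48, 0.52] + [0.52] * 5
print(detect_startle(signal))
```

In a real system the raw signal would first need smoothing, since (as the authors note below) movement and speech also perturb the readings.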
However, the authors also admitted that in an ambulatory setting, emotions were going to be very hard to detect over the other physiological "noise". The emotion readings can be thrown off by diet, physical exertion, and environmental context (such as whether the subject is at home or the office). Even talking or coughing can confuse the physiological readings.
To highlight these variations, the authors conducted a small experiment, collecting readings while users sat, walked, jogged, and coughed. Indeed, the data varied wildly across these conditions. Any emotion reader will have to filter this "ambulatory" noise from the data.
To this end, the authors suggested adding a foot pressure sensor, which would reveal whether the user is currently standing or walking. An audio detector could indicate when the user is talking or coughing. Also, while there is some individual variation, the feet work about as well as the hands for skin conductivity (GSR) readings, and using the feet would leave the hands free of sensors.
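The filtering idea amounts to gating the GSR readings on this auxiliary context: discard samples taken while the wearer is moving or speaking. The paper only proposes the extra sensors; the data layout and field names below are assumptions for illustration.

```python
# Sketch: keep only GSR samples taken while the wearer is still and silent.
# The "walking" flag would come from a foot-pressure sensor and the
# "talking" flag from an audio detector; both are hypothetical here.

def filter_ambulatory_noise(samples):
    """Return GSR values recorded without motion or speech artifacts.

    samples: list of dicts like
      {"gsr": 0.41, "walking": False, "talking": False}
    """
    return [s["gsr"] for s in samples if not (s["walking"] or s["talking"])]

readings = [
    {"gsr": 0.40, "walking": False, "talking": False},
    {"gsr": 0.90, "walking": True,  "talking": False},  # motion artifact
    {"gsr": 0.42, "walking": False, "talking": True},   # speech artifact
    {"gsr": 0.41, "walking": False, "talking": False},
]
print(filter_ambulatory_noise(readings))  # [0.4, 0.41]
```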
Other issues include how the components of the system would communicate without wires. One option is infrared; another is short-wave FM transmitted through the skin. Another major concern is privacy: who do you want to know about your mood? Your spouse? Your employer? A salesman? Though the authors do provide a proof-of-concept prototype, the challenges of producing a truly useful working system are complex.
Picard, R. W., and J. Healey. "TR#467: Affective Wearables." Personal Technologies 1 (1997): 231-240. <http://vismod.media.mit.edu/tech-reports/TR-467/index.html>
CIS: Enrichment Reading
|Last Edited: 03 Dec 2004|
©2004 by Z. Tomaszewski.