Drawing inspiration from psychology, computer vision
and machine learning, the team in the Computer Laboratory at the University of
Cambridge has developed mind-reading
machines -
computers that implement a computational model of mind-reading to infer mental
states of people from their facial signals. The goal is to enhance
human-computer interaction through empathic responses, to improve the
productivity of the user and to enable applications to initiate interactions with
and on behalf of the user, without waiting for explicit input from that user.
Meeting this goal poses difficult challenges. Using a digital video camera, the mind-reading
system analyzes a person's facial expressions in real time and infers that
person's underlying mental state, such as whether he or she is agreeing or
disagreeing, interested or bored, thinking or confused.
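As a rough sketch of this pipeline in Python, assuming OpenCV for video capture, with placeholder functions standing in for the proprietary tracking and inference stages (none of these function names come from the actual system):

    import cv2  # OpenCV, assumed here for camera capture

    def track_features(frame):
        """Placeholder for the proprietary tracker (24 facial feature points)."""
        return []

    def classify_gestures(points):
        """Placeholder: map feature-point movement, shape and colour to gestures."""
        return {"smile", "head_nod"}

    def infer_mental_state(gestures):
        """Placeholder: map gesture combinations to a mental state."""
        return "interested" if {"smile", "head_nod"} <= gestures else "neutral"

    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        state = infer_mental_state(classify_gestures(track_features(frame)))
        print(state)
    cap.release()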
Prior knowledge of how particular mental states are
expressed in the face is combined with analysis of facial expressions and head
gestures occurring in real time. The model represents these at different
granularities, starting with head and facial movements and combining them over
time and space to form a clearer picture of the mental state being expressed.
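As an illustration of these granularities, the representation can be thought of as three layers, from instantaneous movements up to sustained mental states (the labels below are invented for this sketch, not the system's actual vocabulary):

    # Level 1: individual head and facial movements detected frame by frame
    movements = ["head_pitch_up", "head_pitch_down", "lip_corner_pull", "eyebrow_raise"]

    # Level 2: displays, i.e. movements aggregated over short time windows
    displays = {
        "head_nod": ["head_pitch_up", "head_pitch_down"],
        "smile": ["lip_corner_pull"],
        "eyebrow_flash": ["eyebrow_raise"],
    }

    # Level 3: mental states inferred from combinations of displays over longer spans
    mental_states = {"interest": ["head_nod", "smile", "eyebrow_flash"]}

    print(mental_states["interest"])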
Software from Nevenvision identifies 24 feature points on the face and tracks
them in real time. Movement, shape and colour are then analyzed to identify
gestures such as a smile or raised eyebrows. Combinations of these gestures
occurring over time indicate mental states; for example, a head nod combined
with a smile and raised eyebrows might indicate interest. The relationship between
observable head and facial displays and the corresponding hidden mental states
over time is modeled using Dynamic Bayesian Networks.
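The inference over time can be illustrated with the simplest possible Dynamic Bayesian Network, a two-state hidden Markov model with invented probabilities; the forward filter below maintains a belief over the hidden mental state as display observations arrive:

    import numpy as np

    states = ["interested", "bored"]
    observations = ["head_nod+smile", "neutral"]

    # Made-up parameters for illustration only
    T = np.array([[0.8, 0.2],     # P(next state | interested)
                  [0.3, 0.7]])    # P(next state | bored)
    E = np.array([[0.7, 0.3],     # P(observation | interested)
                  [0.1, 0.9]])    # P(observation | bored)
    prior = np.array([0.5, 0.5])

    def filter_states(obs_indices):
        """Forward filtering: belief over the hidden mental state after each observation."""
        belief = prior
        beliefs = []
        for o in obs_indices:
            belief = belief @ T          # predict: propagate through transition model
            belief = belief * E[:, o]    # update: weight by observation likelihood
            belief = belief / belief.sum()
            beliefs.append(belief)
        return beliefs

    for b in filter_states([0, 0, 1]):
        print(dict(zip(states, np.round(b, 2))))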
Why mind reading?
The mind-reading
computer system presents
information about your mental state as easily as a keyboard and mouse present
text and commands. Imagine a future where we are surrounded with mobile phones,
cars and online services that can read our minds and react to our moods. How
would that change our use of technology and our lives? We are working with a
major car manufacturer to implement this system in cars to detect driver mental
states such as drowsiness, distraction and anger.
Current projects in Cambridge are considering further
inputs such as body posture and gestures to improve the inference. We can then
use the same models to control the animation of cartoon avatars. We are also
looking at the use of mind-reading to support online shopping and learning
systems.
The mind-reading
computer system may also
be used to monitor and suggest improvements in human-human interaction. The Affective
Computing Group at the MIT Media Laboratory is developing an emotional-social
intelligence prosthesis that explores new technologies to augment and improve
people's social interactions and communication skills.
How does it work?
This approach to mind reading involves measuring the volume and oxygen level of the
blood around the subject's brain, using a technology called functional
near-infrared spectroscopy (fNIRS).
The user wears a sort of futuristic headband that sends light in the near-infrared spectrum into the tissues of the head, where it is absorbed by active, blood-filled tissues. The headband then measures how much light was not absorbed, letting the computer gauge the metabolic demands that the brain is making.
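A sketch of the underlying calculation: fNIRS analyses commonly use the modified Beer-Lambert law to convert attenuation changes measured at two near-infrared wavelengths into changes in oxygenated and deoxygenated hemoglobin concentration. The coefficients below are illustrative placeholders, not calibrated values:

    import numpy as np

    # Modified Beer-Lambert law: dA(wavelength) = (eps_HbO2*dC_HbO2 + eps_HbR*dC_HbR) * d * DPF
    # Measuring dA at two wavelengths gives two equations in the two unknown
    # concentration changes. All numbers below are illustrative only.
    eps = np.array([[0.9, 1.7],    # wavelength 1 (~760 nm): [eps_HbO2, eps_HbR]
                    [1.2, 0.8]])   # wavelength 2 (~850 nm): [eps_HbO2, eps_HbR]
    d = 3.0     # source-detector separation (cm), illustrative
    dpf = 6.0   # differential pathlength factor, illustrative

    def hemoglobin_changes(dA):
        """Solve for [dC_HbO2, dC_HbR] from attenuation changes at two wavelengths."""
        return np.linalg.solve(eps * d * dpf, dA)

    dA = np.array([0.02, 0.05])    # measured attenuation changes (optical density)
    dHbO2, dHbR = hemoglobin_changes(dA)
    print(f"dHbO2 = {dHbO2:.4f}, dHbR = {dHbR:.4f}")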
The results are often compared to those of an MRI, but they can be gathered with lightweight, non-invasive equipment. Wearing the fNIRS sensor, experimental subjects were asked to count the number of squares on a rotating onscreen cube and to perform other tasks. The subjects were then asked to rate the difficulty of the tasks, and their ratings agreed with the workload intensity detected by the fNIRS system up to 83 percent of the time.