Close relations
Work is currently being carried out all over the world in the field of interactive computer agents: animated characters with 'personalities' who will one day become the 'face' of our computers. The latest idea on this front to come out of MIT's Media Lab in Cambridge, Massachusetts, is Gandalf, a character capable of face-to-face interaction with people in real time, able to perceive their gestures, speech and gaze. He can't do all of that just yet, but the project does show how control of a graphical face can produce some of the behaviour people exhibit in conversation. The eventual aim is to enable people to interact with computers in the same way they interact with other humans.
Gandalf was developed by MIT graduate Kristinn Thorisson, working with Justine Cassell, head of the MIT Media Lab's Gesture & Narrative Language Group. Currently, to interact with Gandalf, the user must wear a body-tracking suit, an eye tracker and a microphone, but this equipment should eventually become unnecessary as computer-vision systems become able to perceive the user's visual and auditory behaviour on their own. Thorisson explains that Gandalf is built on an architecture for psychosocial dialogue skills that allows the implementation of 'full-duplex' multimodal characters: they accept multimodal input, generate multimodal output in real time, and can be interrupted.
The architecture draws on three artificial-intelligence (AI) approaches: blackboards, schema theory and behaviour-based AI. Multimodal information streams in from the user and is processed at three different levels, with blackboards used to communicate both intermediate and final results. An action scheduler then composes particular motor commands and sends them to the agent's animation module.
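To make the idea concrete, here is a minimal sketch of a blackboard-style architecture with three processing layers feeding an action scheduler. All module, topic and message names are illustrative assumptions for this sketch, not Gandalf's actual internals.

```python
# Minimal blackboard sketch: layers post partial results to a shared store;
# an action scheduler composes motor commands from the accumulated requests.
# (Layer behaviour here is invented for illustration.)

class Blackboard:
    """Shared store where modules post and read topic-tagged messages."""
    def __init__(self):
        self.messages = []

    def post(self, topic, data):
        self.messages.append((topic, data))

    def read(self, topic):
        return [data for t, data in self.messages if t == topic]


def reactive_layer(bb):
    # Fast layer: immediate responses to raw input, e.g. returning gaze.
    for gaze in bb.read("user-gaze"):
        if gaze == "at-agent":
            bb.post("motor-request", "look-at-user")


def process_layer(bb):
    # Middle layer: interpret speech fragments into dialogue acts.
    for utterance in bb.read("user-speech"):
        if utterance.endswith("?"):
            bb.post("dialogue-act", ("question", utterance))


def content_layer(bb):
    # Slow layer: decide on a content-level response to the dialogue act.
    for act, utterance in bb.read("dialogue-act"):
        if act == "question":
            bb.post("motor-request", "answer:" + utterance)


def action_scheduler(bb):
    # Composes motor commands and hands them to the animation module.
    return ["animate(" + req + ")" for req in bb.read("motor-request")]


bb = Blackboard()
bb.post("user-gaze", "at-agent")
bb.post("user-speech", "Which planet is largest?")
for layer in (reactive_layer, process_layer, content_layer):
    layer(bb)
commands = action_scheduler(bb)
print(commands)
```

The point of the blackboard is that layers never call each other directly; each reads and posts to the shared store, so fast reactive behaviour and slower content-level reasoning can run at different rates against the same data.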
Part of the work involves generating interactive facial animation in a cartoon style. For this, Thorisson is developing ToonFace, a system based on an object-oriented approach to graphical faces which, he says, allows rapid, automatic construction of wacky-looking characters.
The animation scheme lets a controlling system address a single feature on the face, or any combination of features, and animate them smoothly from one position to the next. "Any conceivable configuration of any movable facial feature can be achieved instantly without having to add 'examples' into a constantly expanding database. The system employs the notion of 'motors' that operate on the facial features and move them in either one or two dimensions," says Thorisson.
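The 'motor' idea described above can be sketched as follows: each motor drives one facial feature in one or two dimensions and interpolates smoothly from its current position toward a target, with no stored example poses. Class and method names here are assumptions for illustration, not ToonFace's real interface.

```python
# Sketch of feature 'motors': one motor per movable facial feature,
# each interpolating toward a target in 1 or 2 dimensions.

class Motor:
    def __init__(self, feature, position):
        self.feature = feature           # e.g. "left-eyebrow"
        self.position = list(position)   # 1-D or 2-D coordinates
        self.target = list(position)

    def move_to(self, target):
        self.target = list(target)

    def step(self, fraction):
        # Linear interpolation toward the target; a real system might use
        # ease-in/ease-out curves for more natural-looking motion.
        self.position = [p + (t - p) * fraction
                         for p, t in zip(self.position, self.target)]
        return self.position


class Face:
    """Addresses any single feature, or combination, through its motors."""
    def __init__(self, motors):
        self.motors = {m.feature: m for m in motors}

    def animate(self, targets, steps=4):
        # Set new targets, then advance every addressed motor in lockstep.
        for feature, target in targets.items():
            self.motors[feature].move_to(target)
        for _ in range(steps):
            for feature in targets:
                self.motors[feature].step(0.5)


face = Face([Motor("left-eyebrow", (0.0, 0.0)), Motor("mouth-corner", (0.0,))])
face.animate({"left-eyebrow": (0.0, 1.0), "mouth-corner": (0.5,)})
print(face.motors["left-eyebrow"].position)  # closes in on the target
```

Because any configuration is just a set of target coordinates, no database of example expressions is needed: new poses cost nothing beyond the target values themselves, which matches the claim in the quote above.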
Gandalf can currently answer questions about the planets of the solar system. Future work includes adding more complex natural-language understanding and generation, and an increased ability to follow dialogue in real time.