Gesture Generation by Imitation: From Human Behavior to Computer Character Animation, by Michael Kipp
This manual pertains to the groundbreaking research detailed in "Gesture Generation by Imitation: From Human Behavior to Computer Character Animation," authored by Michael Kipp and published in 2005. This work explores the integration of embodied agents in human-computer interaction, focusing on the generation of natural and individual conversational gestures. By analyzing human gestural behavior from empirical data, specifically recordings from the TV talk show "Das Literarische Quartett," the research extracts key parameters to create believable and human-like movements for animated agents. The dissertation presents a novel three-stage process: observation, modeling, and generation, each supported by dedicated software modules.

The purpose of this document is to outline the methodology and findings of this research, serving as a comprehensive guide to the techniques developed for gesture generation. It details the ANVIL tool for video annotation, the NOVALIS module for computing individual gesture profiles based on statistical methods, and the NOVA generator for creating gestures from annotated text input. This manual is intended for researchers, developers, and anyone interested in the field of computer animation, artificial intelligence, and human-computer interaction, providing insights into creating more natural and engaging embodied agents through imitation of human behavior.

In an effort to extend traditional human-computer interfaces, research has introduced embodied agents that draw on the modalities of everyday human-human communication, such as facial expressions, gestures and body posture. However, giving computer agents a human-like body introduces new challenges. Since human users are highly sensitive and critical concerning bodily behavior, the agents must act naturally and individually in order to be believable. This dissertation focuses on conversational gestures. It shows how to generate conversational gestures for an animated embodied agent from annotated text input. The central idea is to imitate the gestural behavior of a human individual. Using TV show recordings as empirical data, key gestural parameters are extracted for the generation of natural and individual gestures.

The gesture generation task is solved in three stages: observation, modeling and generation, and a software module was developed for each stage. For observation, the video annotation research tool ANVIL was created. It allows the efficient transcription of gesture, speech and other modalities on multiple layers. ANVIL is application-independent, since users can define their own annotation schemes; it provides various import/export facilities and is extensible via its plug-in interface, making the tool suitable for a wide variety of research fields. For this work, selected clips of the TV talk show "Das Literarische Quartett" were transcribed and analyzed, yielding a total of 1,056 gestures.

For the modeling stage, the NOVALIS module was created to compute individual gesture profiles from these transcriptions with statistical methods. A gesture profile models the handedness, timing and function of gestures for a single human individual using estimated conditional probabilities. The profiles are based on a shared lexicon of 68 gestures assembled from the data.
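The idea of a gesture profile built from estimated conditional probabilities can be illustrated with a minimal sketch. This is not code from the dissertation: the record format, lexeme names and the `gesture_profile` helper are illustrative stand-ins for counts that would come from annotated transcriptions.

```python
from collections import Counter, defaultdict

# Hypothetical annotation records: (lexeme, handedness) pairs as they
# might be exported from a transcription tool such as ANVIL.
annotations = [
    ("cup", "2H"), ("cup", "2H"), ("cup", "RH"),
    ("pointing", "RH"), ("pointing", "RH"), ("pointing", "LH"),
]

def gesture_profile(records):
    """Estimate P(handedness | lexeme) from raw annotation counts."""
    counts = defaultdict(Counter)
    for lexeme, handedness in records:
        counts[lexeme][handedness] += 1
    profile = {}
    for lexeme, c in counts.items():
        total = sum(c.values())
        profile[lexeme] = {h: n / total for h, n in c.items()}
    return profile

profile = gesture_profile(annotations)
print(profile["cup"])  # 2H: 2/3, RH: 1/3 for this toy data
```

Real profiles would condition on more context (for example, gesture function or position in the utterance), but the principle is the same: relative frequencies in the annotated data serve as probability estimates for one individual.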
Finally, for generation, the NOVA generator was devised to create gestures based on gesture profiles in an overgenerate-and-filter approach. Annotated text input is processed in a graph-based representation in multiple stages where semantic data is added, the location of potential gestures is determined by heuristic rules, and gestures are added and filtered based on a gesture profile. NOVA outputs a linear, player-independent action script in XML.
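The overgenerate-and-filter pipeline can be sketched as follows. This is a toy reconstruction, not NOVA itself: the semantic tags, the `PROFILE` table and the XML element names are invented for illustration, and the real system works on a richer graph-based representation.

```python
# Toy gesture profile: P(lexeme | semantic tag), a stand-in for the
# conditional probabilities a NOVALIS-style profile would supply.
PROFILE = {
    "emphasis": {"beat": 0.7, "cup": 0.3},
    "negation": {"wipe": 0.8, "beat": 0.2},
}

def overgenerate(tags):
    """Propose every lexeme the profile allows at each tagged location."""
    return [(i, lex, p)
            for i, tag in enumerate(tags)
            for lex, p in PROFILE.get(tag, {}).items()]

def filter_candidates(candidates, threshold=0.5):
    """Keep only candidates whose profile probability clears the threshold."""
    return [c for c in candidates if c[2] >= threshold]

def to_script(gestures):
    """Emit a minimal linear, player-independent XML action script."""
    items = "".join(f'<gesture at="{i}" lexeme="{lex}"/>'
                    for i, lex, _ in gestures)
    return f"<script>{items}</script>"

tags = ["emphasis", "negation"]      # heuristic gesture locations in the text
chosen = filter_candidates(overgenerate(tags))
print(to_script(chosen))
```

The design point the sketch captures is the separation of concerns: candidate generation is deliberately permissive, and the individual gesture profile does the work of selecting gestures characteristic of one speaker.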

Author: Kipp, Michael
Publisher: Dissertation.Com
Illustration: N
Language: ENG
Title: Gesture Generation by Imitation: From Human Behavior to Computer Character Animation
Pages: 00277 (Encrypted PDF)
On Sale: 2005-10-31
SKU-13/ISBN: 9781581122558
Category: Computers : General
