Think-Then-React: Towards Better Action-to-Reaction Motion Generation

Anonymous ICLR 2025 submission

Given a human action as input, our model Think-Then-React (TTR) first thinks by generating an action description and reasoning about a reaction prompt. It then reacts to the input based on the results of this thinking process. TTR reacts in real time (at every timestep) and periodically re-thinks at a fixed interval (every 2 timesteps in this illustration) to mitigate accumulated errors.
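Conceptually, inference interleaves the two processes. Below is a minimal Python sketch of this loop, assuming hypothetical `think` and `react_step` functions and a `rethink_interval` parameter; these names are illustrative, not the paper's actual API.

```python
# Minimal sketch of TTR's think-then-react inference loop.
# `think` and `react_step` are hypothetical callables standing in for the
# language model's thinking and reacting modes.
def generate_reaction(action_tokens, think, react_step, rethink_interval=2):
    """Generate one reaction token per timestep, periodically re-thinking."""
    # Initial thinking: infer the action's intent and a reaction prompt.
    prompt = think(action_tokens[:1])
    reaction_tokens = []
    for t in range(len(action_tokens)):
        # Periodically re-think on the action observed so far to
        # mitigate accumulated prediction errors.
        if t > 0 and t % rethink_interval == 0:
            prompt = think(action_tokens[: t + 1])
        # React in real time: predict the next reaction token from the
        # inferred prompt, the observed action, and past reaction tokens.
        next_token = react_step(prompt, action_tokens[: t + 1], reaction_tokens)
        reaction_tokens.append(next_token)
    return reaction_tokens
```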

Abstract

Modeling human-like action-to-reaction generation has significant real-world applications, such as human-robot interaction and games. Despite recent advances in single-person motion generation, action-to-reaction generation remains challenging for two reasons: the difficulty of directly predicting a reaction from an action sequence without prompts, and the absence of a unified representation that effectively encodes multi-person motion. To address these challenges, we introduce Think-Then-React (TTR), a large language model-based framework designed to generate human-like reactions. First, with our fine-grained multimodal training strategy, TTR unifies two processes during inference: a thinking process that explicitly infers action intentions and reasons about the corresponding reaction description, which together serve as semantic prompts, and a reacting process that predicts reactions based on the input action and the inferred semantic prompts. Second, to effectively represent multi-person motion in language models, we propose a unified motion tokenizer that decouples egocentric pose features from absolute space features, so that action and reaction motions share the same encoding. Extensive experiments demonstrate that TTR outperforms existing baselines, achieving significant improvements on evaluation metrics, such as reducing FID from 3.988 to 1.942.

Overall method

(a) We propose a unified tokenizing process that encodes both human action and reaction while preserving absolute space features and egocentric motion features. (b) To obtain the space tokens of a motion, we first extract its initial space state, i.e., its 2D position and body orientation. We then normalize the motion so that the body center lies at the origin facing the positive z-axis, which makes the subsequent pose sequence easier to encode. (c) During inference, TTR first infers the action's intent and semantics, and then predicts the corresponding reaction based on both the input action and the inferred intent.
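The space/pose decoupling in (b) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical version assuming a `(T, J, 3)` joint-position array, root joint index 0, and hip joint indices used to estimate the facing direction; the paper's actual tokenizer details may differ.

```python
import numpy as np

def decouple_space_and_pose(joints, left_hip=1, right_hip=2):
    """Split a motion into its initial space state and a normalized pose sequence.

    joints: (T, J, 3) array of joint positions; index 0 is the root joint.
    The hip indices are illustrative assumptions for estimating facing.
    """
    # Initial space state: 2D (x, z) position of the root at frame 0.
    root_xz = joints[0, 0, [0, 2]].copy()
    # Estimate body orientation from the hip line crossed with the up axis.
    across = joints[0, right_hip] - joints[0, left_hip]
    facing = np.cross(across, np.array([0.0, 1.0, 0.0]))
    facing[1] = 0.0
    facing /= np.linalg.norm(facing)
    angle = np.arctan2(facing[0], facing[2])  # rotation about y needed to face +z
    space_state = (root_xz, angle)

    # Normalize: translate the body center to the origin, then rotate the
    # whole sequence about y so the initial facing direction is +z.
    cos_a, sin_a = np.cos(-angle), np.sin(-angle)
    rot_y = np.array([[cos_a, 0.0, sin_a],
                      [0.0,   1.0, 0.0],
                      [-sin_a, 0.0, cos_a]])
    normalized = joints.copy()
    normalized[..., [0, 2]] -= root_xz
    normalized = normalized @ rot_y.T
    return space_state, normalized
```

Decoupling in this way lets the same egocentric pose codebook represent both persons, while the initial space state is kept as separate tokens so absolute placement is not lost.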

Action-to-Reaction Cases

We visualize samples from the Inter-X test set with Blender. The blue avatar performs the ground-truth action, and the green avatar performs our predicted reaction. Slight jitter may be observed due to an unstable inverse kinematics (IK) process. Hand motions are not included.

Asymmetrical Cases

Asymmetrical Cases (Baselines)

Symmetrical Cases

Symmetrical Cases (Baselines)

Failure Cases

User study comparisons (left: TTR; right: ReGenNet)

*Thanks to nerfies for their webpage template.