LOME: Learning Human-Object Manipulation with Action-Conditioned Egocentric World Model

Quankai Gao1, Jiawei Yang1, Qiangeng Xu2, Le Chen3, Yue Wang1

1University of Southern California, 2Waymo, 3Max Planck Institute for Intelligent Systems

Overview


Figure 1: Overview. Given the reference image and text instruction shown in (a), LOME generates a temporally consistent egocentric hand–object interaction video, shown in (c), conditioned on the corresponding per-frame human actions in (b). Beyond accurate action adherence, LOME synthesizes realistic physical consequences of hand–object interactions, such as liquid dynamics when pouring from a bottle into a mug.

Abstract

Learning human-object manipulation presents significant challenges due to the fine-grained, contact-rich nature of the motions involved. Traditional physics-based animation requires extensive modeling and manual setup and, more importantly, neither generalizes well across diverse object morphologies nor scales effectively to real-world environments. To address these limitations, we introduce LOME, an egocentric world model that generates realistic human-object interaction videos conditioned on an input image, a text prompt, and per-frame human actions, including both body poses and hand gestures. LOME injects strong and precise action guidance into object manipulation by jointly estimating spatial human actions and the environment context during training. After finetuning a pretrained video generative model on videos of diverse egocentric human-object interactions, LOME demonstrates not only high action-following accuracy and strong generalization to unseen scenarios, but also realistic physical consequences of hand–object interactions, e.g., liquid flowing from a bottle into a mug after a "pouring" action is executed. Extensive experiments demonstrate that our video-based framework significantly outperforms state-of-the-art image-based and video-based action-conditioned methods, as well as Image/Text-to-Video (I/T2V) generative models, in terms of both temporal consistency and motion control. LOME paves the way for photorealistic AR/VR experiences and scalable robotic training, without being limited to simulated environments or relying on explicit 3D/4D modeling.

Training Pipeline


Training pipeline of LOME. A pretrained VAE encoder maps the reference image I, input video V, and rasterized 2D action maps to latent representations. A camera adapter encodes per-frame ray maps into camera features, which are added to the video latents. A Diffusion Transformer (DiT), conditioned on a text prompt, denoises the concatenated noisy action and video latents, and a pretrained decoder 𝒟 reconstructs the generated video.
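To make the data flow concrete, here is a minimal PyTorch-style sketch of one training step as described above. The module interfaces, tensor shapes, channel-wise concatenation, and DDPM-style noise schedule are our assumptions for illustration, not the released LOME implementation.

```python
# Minimal sketch of the LOME training step described in the caption above.
# Module names, shapes, and the noise schedule are assumptions.
import torch
import torch.nn.functional as F

def add_noise(z, noise, t, alphas_cumprod):
    """Standard DDPM forward process: z_t = sqrt(a_t) z + sqrt(1 - a_t) eps."""
    a = alphas_cumprod[t].view(-1, 1, 1, 1, 1)
    return a.sqrt() * z + (1.0 - a).sqrt() * noise

def training_step(vae_encode, cam_adapter, dit, text_encoder,
                  ref_image, video, action_maps, ray_maps, prompt,
                  alphas_cumprod):
    # Frozen pretrained VAE encoder maps image, video, and rasterized
    # 2D action maps into the latent space.
    with torch.no_grad():
        z_ref = vae_encode(ref_image)        # (B, 1, C, h, w), assumed shape
        z_vid = vae_encode(video)            # (B, T, C, h, w)
        z_act = vae_encode(action_maps)      # (B, T, C, h, w)
        text_emb = text_encoder(prompt)

    # Camera adapter: per-frame ray maps -> features added to video latents.
    z_vid = z_vid + cam_adapter(ray_maps)

    # Action and video latents are concatenated and denoised jointly.
    z = torch.cat([z_act, z_vid], dim=2)     # concat along channel dim
    t = torch.randint(0, len(alphas_cumprod), (z.shape[0],), device=z.device)
    noise = torch.randn_like(z)
    z_noisy = add_noise(z, noise, t, alphas_cumprod)

    # Text-conditioned DiT predicts the noise on the joint latent; at
    # inference, the pretrained decoder maps denoised video latents to pixels.
    pred = dit(z_noisy, t, context=text_emb, ref=z_ref)
    return F.mse_loss(pred, noise)
```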

Action Condition

Per-frame 3D human poses are projected into the egocentric image plane and rasterized into the 2D action maps that condition video generation (a sketch of this step follows below).

3D Human Pose

2D Projected Human Pose

Action Map
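Below is a hypothetical sketch of this conditioning path: per-frame 3D joints are projected with a pinhole camera and splatted into a 2D action map. The camera convention, joint layout, and disc-splatting encoding are assumptions, not the paper's exact rasterizer.

```python
# Hypothetical sketch: 3D human pose -> 2D projection -> action map.
import numpy as np

def project_joints(joints_3d, K, R, t):
    """Project (J, 3) world-space joints to (J, 2) pixel coordinates
    with a pinhole camera (intrinsics K, extrinsics R, t)."""
    x_cam = joints_3d @ R.T + t            # world -> camera frame
    x_img = x_cam @ K.T                    # pinhole projection
    return x_img[:, :2] / x_img[:, 2:3]    # perspective divide

def rasterize_action_map(joints_2d, height, width, radius=4):
    """Splat projected joints as small discs into a single-channel map."""
    amap = np.zeros((height, width), dtype=np.float32)
    yy, xx = np.mgrid[0:height, 0:width]   # per-pixel row/col coordinates
    for u, v in joints_2d:                 # u: pixel x, v: pixel y
        amap[(xx - u) ** 2 + (yy - v) ** 2 <= radius ** 2] = 1.0
    return amap
```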

Main Results

Note: Different methods produce videos at different resolutions; videos have been padded to a consistent size for comparison. All results are on the test set (unseen samples). LOME is our method; GwtF denotes Go-with-the-Flow.
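For reference, such padding can be done with a simple letterbox, as in the sketch below; the centered placement and black background are assumptions.

```python
# Minimal letterbox sketch for padding frames to a common size.
import numpy as np

def letterbox(frame, target_h, target_w):
    """Center an (h, w, 3) frame on a black (target_h, target_w, 3) canvas.
    Assumes the frame fits within the target canvas."""
    h, w = frame.shape[:2]
    canvas = np.zeros((target_h, target_w, 3), dtype=frame.dtype)
    top, left = (target_h - h) // 2, (target_w - w) // 2
    canvas[top:top + h, left:left + w] = frame
    return canvas
```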

Text condition: "Pick up the black case from the wooden table using the right hand."

GT

LOME

GwtF

Action Map

Wan2.1-I2V

CoSHAND


Text condition: "Stack three coffee cups on a wooden table while sitting against a white background."

GT

LOME

GwtF

Action Map

Wan2.1-I2V

CoSHAND


Text condition: "Pick the food item from the plate and place it back into the food basket."

GT

LOME

GwtF

Action Map

Wan2.1-I2V

CoSHAND


Text condition: "Zip a big plastic bag containing a block and a plush toy."

GT

LOME

GwtF

Action Map

Wan2.1-I2V

CoSHAND


Text condition: "Pour coke into the gray cup placed on the green tablecloth."

GT

LOME

GwtF

Action Map

Wan2.1-I2V

CoSHAND


Occluded-object Manipulation and Diverse Generation

We compare LOME (ours), CoSHAND, Wan2.1-I2V, and GwtF on a challenging task where some of the objects to be manipulated are not visible in the input image (e.g., behind the fridge door). Among the four methods, only LOME produces plausible human–object interactions in this setting. LOME 1–4 denote four inference runs under identical conditions, illustrating diverse generation.
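A minimal sketch of this repeated-sampling procedure is below; the model.generate interface and per-run seeding are hypothetical, but they illustrate that the diversity across LOME 1–4 comes only from the initial diffusion noise.

```python
# Hypothetical sketch of diverse generation under identical conditions.
import torch

def sample_diverse(model, ref_image, prompt, action_maps, n_samples=4):
    """Run the same conditional generation n_samples times; only the
    initial diffusion noise differs between runs."""
    videos = []
    for seed in range(n_samples):
        torch.manual_seed(seed)  # different initial noise per run
        videos.append(model.generate(ref_image, prompt, action_maps))
    return videos
```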

Text condition: "Open the fridge door, remove food items from the fridge, place food items onto the table."

GT

LOME

GwtF

Action Map

Wan2.1-I2V

CoSHAND


Text condition: "Open the fridge door, remove food items from the fridge, place food items onto the table."

GT

LOME 1

LOME 2

Action Map

LOME 3

LOME 4


In-the-Wild Results

We showcase LOME on real-world egocentric scenes recorded in our lab, featuring novel objects and environments, demonstrating its generalization ability.

Text condition: "Pick up the orange bag to the left."

3D Human Pose

LOME

GwtF

Action Map

Wan2.1-I2V

CoSHAND


Text condition: "Pick up the white Airpod case from the wooden table and place it back."

3D Human Pose

LOME

GwtF

Action Map

Wan2.1-I2V

CoSHAND


More Qualitative Results (LOME)