Overall Goal

overall

The overall goal of my research is to enable robots to acquire human-level manipulation skills. To achieve this, we begin by investigating human behavior and exploring how the brain controls the muscles during manipulation tasks. Our objective is to translate insights from expert human manipulation into robotic systems, enabling robots to learn like humans and improving their operational capabilities: higher success rates, broader generalization, and greater dexterity. Specifically, we have structured our research into two stages: first, we explain how humans perform manipulation; then, we address how robots can learn these skills. Below is a brief overview of our current progress.



Stage 1: How Humans Perform Operations.

Research 1: Multi-modal Data for Multi-scale Motor Behavior Modeling

MMBM

Human motor skills are characterized by sequences of motor movements governed by the brain's hierarchical control strategies, which naturally motivates us to model these sequences in a multi-scale manner. In our research, we model motor sequences from coarse to fine, leveraging the unique characteristics of different data modalities to capture motor behavior at multiple levels of granularity.
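As a toy illustration of the coarse-to-fine idea (not our actual model), the sketch below summarizes a 1-D motor signal at several temporal granularities via average pooling; the function name `multiscale_features` and the choice of scales are hypothetical.

```python
import numpy as np

def multiscale_features(signal: np.ndarray, scales=(1, 4, 16)) -> np.ndarray:
    """Summarize a 1-D motor signal at several temporal granularities.

    For each scale s, the signal is split into windows of length s and
    averaged, producing a fine-to-coarse pyramid of descriptors.
    """
    feats = []
    for s in scales:
        n = len(signal) // s * s          # trim so length is divisible by s
        pooled = signal[:n].reshape(-1, s).mean(axis=1)
        feats.append(pooled)
    return np.concatenate(feats)

# Example: a 64-sample synthetic trajectory
traj = np.sin(np.linspace(0, 2 * np.pi, 64))
f = multiscale_features(traj)
print(f.shape)  # 64 fine + 16 medium + 4 coarse descriptors
```

In a learned model, each scale would instead be produced by a modality-specific encoder, but the pooling pyramid conveys the granularity structure.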



Research 2: Cross-modal Alignment for Brain-Muscle Modulation Analysis

CABMA

To explicitly understand how the brain controls the muscles during manipulation, our research aligns representations of EEG-measured brain activity with the corresponding EMG-recorded muscular responses in a shared space. By analyzing the intrinsic properties of these signals and their impact on downstream tasks, we can quantitatively capture the modulation process between the brain and muscles. We have explored various methods to align these cross-modal signals, including Siamese learning (ACM MM 2023), disentangled representation learning (IJCNN 2024), and contrastive learning (IEEE TCYBER 2025).
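To make the contrastive-alignment idea concrete, here is a minimal numpy sketch of a symmetric InfoNCE objective over paired EEG/EMG embeddings. It is a generic illustration of contrastive alignment, not the published method; the function name `info_nce` and the temperature value are assumptions.

```python
import numpy as np

def info_nce(z_eeg: np.ndarray, z_emg: np.ndarray, temperature: float = 0.1) -> float:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Matched EEG/EMG trials (same row index) are positives; all other
    pairings within the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z_eeg = z_eeg / np.linalg.norm(z_eeg, axis=1, keepdims=True)
    z_emg = z_emg / np.linalg.norm(z_emg, axis=1, keepdims=True)
    logits = z_eeg @ z_emg.T / temperature            # (B, B) similarity matrix
    idx = np.arange(len(logits))
    # EEG -> EMG direction: cross-entropy with the diagonal as targets
    ls = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_e2m = -ls[idx, idx].mean()
    # EMG -> EEG direction: same, over the transposed similarity matrix
    ls_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_m2e = -ls_t[idx, idx].mean()
    return (loss_e2m + loss_m2e) / 2
```

Minimizing this loss pulls matched brain/muscle trials together in the shared space while pushing mismatched trials apart, which is the property the downstream analyses rely on.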

The downstream proxy tasks we designed validate the effectiveness of our shared representation learning, evidenced by an approximately 4% relative improvement in motor classification accuracy and improved fidelity of cross-modal signal generation (MMD: 0.027; DTW: 20.44). Furthermore, by perturbing the raw signals and evaluating the trained model's performance on downstream tasks, we examine how the brain modulates muscle activity during manipulation. Remarkably, our findings align with neuroscience results across the spatial, temporal, and frequency dimensions.
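For readers unfamiliar with the MMD fidelity metric reported above, the sketch below computes squared maximum mean discrepancy between two sample sets under a Gaussian (RBF) kernel. The function name `mmd_rbf` and the bandwidth are assumptions, and the biased estimator (diagonal terms included) is used for brevity.

```python
import numpy as np

def mmd_rbf(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Squared MMD between sample sets x and y under an RBF kernel.

    Biased estimator: diagonal kernel terms are included, which is
    adequate for a sketch. Lower values mean more similar distributions.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```

Applied to generated versus real signal windows, values near zero (such as the 0.027 reported) indicate that the generated distribution closely matches the real one.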



Stage 2: How Robots Learn Skills.

Research 1: Prior Knowledge Transfer for Enhanced Robotic Learning

RLPK

Enhancing the manipulation abilities of robots critically depends on enabling the learning model to acquire prior knowledge associated with specific tasks. We have investigated two strategies for integrating such prior knowledge:



In the future, we will continue this research, including, but not limited to, exploring how to enable robots to acquire manipulation skills as efficiently as humans and comparing how robots and humans differ in learning. New work is coming soon!