feat: Add atomic action abstraction layer for embodied AI motion generation #239

Open

yuecideng wants to merge 3 commits into main from yueci/atomic-action-init

Conversation

@yuecideng
Contributor

Description

This PR introduces an atomic action abstraction layer for embodied AI motion generation. The implementation provides a unified interface for atomic actions such as reach, grasp, and move, with support for semantic object understanding and extensible custom action registration.

Key Components

  1. Core Classes (core.py):

    • Affordance - Base class for affordance data (GraspPose, InteractionPoints)
    • ObjectSemantics - Semantic information about interaction targets
    • ActionCfg - Configuration class for atomic actions
    • AtomicAction - Abstract base class for all atomic actions
  2. Action Implementations (actions.py):

    • ReachAction - Reach to target pose or object
    • GraspAction - Execute grasp motion
    • ReleaseAction - Release grasp
    • MoveAction - Move to target pose
  3. Action Engine (engine.py):

    • AtomicActionEngine - Execution engine for atomic actions
    • Registry system for custom action registration
    • Support for action composition and chaining

Design Principles

  • Leverage existing infrastructure: Uses existing MotionGenerator, PlanResult, and IK/FK solvers
  • Object semantics: Actions consider semantic labels, affordance data, and geometry
  • Consistent interface: Unified API across all motion primitives with predictable return types
  • Extensibility: Registry-based motion primitive registration with plugin architecture

Design Document

See docs/superpowers/specs/2026-04-17-atomic-action-abstraction-design.md for detailed design specification.

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (non-breaking change which improves an existing functionality)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (existing functionality will not work without user modification)
  • Documentation update

Testing

Unit tests provided in tests/sim/atomic_actions/test_core.py covering:

  • Affordance base class and subclasses (GraspPose, InteractionPoints)
  • ObjectSemantics dataclass
  • ActionCfg configuration
  • Action registry functions

All 12 tests passing.

Checklist

  • I have run the black . command to format the code base.
  • I have made corresponding changes to the documentation (design spec included)
  • I have added tests that prove my fix is effective or that my feature works
  • Dependencies have been updated, if applicable.

🤖 Generated with Claude Code

yuecideng and others added 3 commits April 17, 2026 13:14
- Fix dataclass field ordering in ObjectSemantics (non-default follows default)
- Convert batch_size property to get_batch_size() method for consistency
- Add missing grasp_types field and get_grasp_by_type() method to GraspPose
- Add missing point_types field and get_points_by_type() method to InteractionPoints
- Add missing velocity_limit and acceleration_limit fields to ActionCfg

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 20, 2026 02:19
@yuecideng added labels: enhancement (New feature or request), motion gen (Things related to motion generation for robot), agent (Features related to agentic system) on Apr 20, 2026
Contributor

Copilot AI left a comment


Pull request overview

This PR introduces a new embodichain.lab.sim.atomic_actions module intended to provide a unified “atomic action” abstraction (reach/grasp/move/release) on top of the existing motion-planning stack, plus a small registry/engine and accompanying design spec + unit tests for core data types.

Changes:

  • Added core atomic-action data models (Affordance, ObjectSemantics, ActionCfg) and an AtomicAction base class.
  • Added default atomic actions (ReachAction, GraspAction, MoveAction, ReleaseAction) and an AtomicActionEngine with a global registry.
  • Added a design document and unit tests for core affordance/registry helpers.

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 14 comments.

Summary per file:

  • embodichain/lab/sim/atomic_actions/core.py - Defines affordance/semantics/config + base action class utilities.
  • embodichain/lab/sim/atomic_actions/actions.py - Implements reach/grasp/move/release actions using MotionGenerator/TOPPRA.
  • embodichain/lab/sim/atomic_actions/engine.py - Adds engine orchestration + global registry + placeholder semantic analyzer.
  • embodichain/lab/sim/atomic_actions/__init__.py - Exposes public API for atomic_actions package.
  • tests/sim/atomic_actions/test_core.py - Adds unit tests for affordances, ObjectSemantics, ActionCfg, registry helpers.
  • tests/sim/atomic_actions/__init__.py - Marks test package for atomic actions.
  • docs/superpowers/specs/2026-04-17-atomic-action-abstraction-design.md - Design specification for the new abstraction layer.


Comment on lines +239 to +242
    ):
        self.motion_generator = motion_generator
        self.robot = motion_generator.robot
        self.device = self.robot.device

Copilot AI Apr 20, 2026


AtomicAction.__init__ only accepts motion_generator, but other code in this PR (e.g., action implementations) calls super().__init__(motion_generator, robot, control_part, device) and later relies on self.control_part. Either update AtomicAction.__init__ to accept/store robot, control_part, and device (and keep it consistent with the concrete actions/engine), or update all actions/engine to match the existing constructor and set control_part elsewhere.

Suggested change:

    -    ):
    -        self.motion_generator = motion_generator
    -        self.robot = motion_generator.robot
    -        self.device = self.robot.device
    +        robot: Optional[Robot] = None,
    +        control_part: Optional[str] = None,
    +        device: Optional[torch.device] = None,
    +    ):
    +        self.motion_generator = motion_generator
    +        self.robot = robot if robot is not None else motion_generator.robot
    +        self.control_part = (
    +            control_part if control_part is not None else ActionCfg.control_part
    +        )
    +        self.device = device if device is not None else self.robot.device

        device: torch.device = torch.device("cuda"),
        interpolation_type: str = "linear",  # "linear", "cubic", "toppra"
    ):
        super().__init__(motion_generator, robot, control_part, device)

Copilot AI Apr 20, 2026


ReachAction.__init__ calls super().__init__(motion_generator, robot, control_part, device), but AtomicAction.__init__ currently only takes motion_generator. This mismatch will prevent instantiation of default actions in AtomicActionEngine.

Suggested change:

    -        super().__init__(motion_generator, robot, control_part, device)
    +        super().__init__(motion_generator)
    +        self.robot = robot
    +        self.control_part = control_part
    +        self.device = device

Comment on lines +87 to +101
        target_states = [
            PlanState(qpos=start_qpos, move_type=MoveType.JOINT_MOVE),
            PlanState(xpos=approach_pose, move_type=MoveType.EEF_MOVE),
        ]

        # Plan trajectory
        options = MotionGenOptions(
            control_part=self.control_part,
            is_interpolate=True,
            is_linear=self.interpolation_type == "linear",
            interpolate_position_step=0.002,
            plan_opts=ToppraPlanOptions(
                sample_interval=kwargs.get("sample_interval", 30),
            ),
        )

Copilot AI Apr 20, 2026


ReachAction.execute builds target_states mixing MoveType.JOINT_MOVE and MoveType.EEF_MOVE while also setting MotionGenOptions(is_interpolate=True). MotionGenerator.generate only supports pre-interpolation when all states share the same move_type, so this will error (or produce invalid stacks). Use options.start_qpos for the start state and make all target_states EEF_MOVE, or disable pre-interpolation / split planning into separate stages.

        )
        success, _ = self.robot.compute_ik(
            pose=target_pose.unsqueeze(0),
            qpos_seed=qpos_seed.unsqueeze(0),

Copilot AI Apr 20, 2026


ReachAction.validate passes qpos_seed= into Robot.compute_ik, but the API uses joint_seed=. As written, this will raise a TypeError when validation is called.

Suggested change:

    -            qpos_seed=qpos_seed.unsqueeze(0),
    +            joint_seed=qpos_seed.unsqueeze(0),

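
The keyword-name mismatch the reviewer flags can be shown with a standalone stand-in (the `compute_ik` below is a hypothetical mock, not the real `Robot.compute_ik`): Python rejects unexpected keyword arguments with a `TypeError` at call time.

```python
# Standalone illustration of the review finding: calling an API with a
# keyword it does not define raises TypeError. `compute_ik` here is a
# mock stand-in, not the real Robot API.
def compute_ik(pose, joint_seed=None):
    """Mock solver: only `joint_seed` is an accepted keyword."""
    return True, joint_seed

try:
    compute_ik(pose=[0.0, 0.0, 0.0], qpos_seed=[0.1])  # wrong keyword name
except TypeError:
    print("qpos_seed rejected")  # prints: qpos_seed rejected

success, seed = compute_ik(pose=[0.0, 0.0, 0.0], joint_seed=[0.1])
print(success)  # True
```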
Comment on lines +388 to +389
            velocity_limit=velocity_limit,
            acceleration_limit=acceleration_limit,

Copilot AI Apr 20, 2026


ToppraPlanOptions does not support velocity_limit / acceleration_limit keyword args (it uses a constraints dict instead). Passing these will raise a TypeError when constructing ToppraPlanOptions; map the limits into constraints={"velocity": ..., "acceleration": ...} or extend the planner options type if needed.

Suggested change:

    -            velocity_limit=velocity_limit,
    -            acceleration_limit=acceleration_limit,
    +            constraints={
    +                "velocity": velocity_limit,
    +                "acceleration": acceleration_limit,
    +            },

        )
        success, _ = self.robot.compute_ik(
            pose=grasp_pose.unsqueeze(0),
            qpos_seed=qpos_seed.unsqueeze(0),

Copilot AI Apr 20, 2026


GraspAction.validate passes qpos_seed= into Robot.compute_ik, but the API uses joint_seed=. This will raise a TypeError when validation is called.

Suggested change:

    -            qpos_seed=qpos_seed.unsqueeze(0),
    +            joint_seed=qpos_seed.unsqueeze(0),

            is_linear=is_linear,
            interpolate_position_step=0.002,
            plan_opts=ToppraPlanOptions(
                sample_interval=kwargs.get("sample_interval", 30),

Copilot AI Apr 20, 2026


MoveAction.execute references kwargs.get(...) when building ToppraPlanOptions, but execute() does not accept **kwargs. This will raise NameError: name 'kwargs' is not defined. Either add **kwargs to the signature or remove the kwargs usage.

Suggested change:

    -                sample_interval=kwargs.get("sample_interval", 30),
    +                sample_interval=30,

Comment on lines +114 to +131
        from .core import GraspPose, InteractionPoints

        # Generate default grasp poses based on object type
        default_poses = torch.eye(4).unsqueeze(0)
        default_poses[0, 2, 3] = 0.1  # Default offset

        grasp_affordance = GraspPose(
            object_label=label,
            poses=default_poses,
            grasp_types=["default"],
        )

        # Default interaction points
        interaction_affordance = InteractionPoints(
            object_label=label,
            points=torch.zeros(1, 3),
            point_types=["contact"],
        )

Copilot AI Apr 20, 2026


SemanticAnalyzer.analyze creates tensors (torch.eye, torch.zeros) on the default device (CPU). If the rest of the action stack operates on GPU, downstream ops like object_pose @ grasp_pose can fail with device mismatch. Consider constructing these tensors on the engine/analyzer device (or explicitly moving affordance tensors to self.device).

Comment on lines +201 to +206
from embodichain.lab.sim.atomic_actions import AtomicAction

class TestAction(AtomicAction):
    def execute(self, target, **kwargs):
        return PlanResult(success=True)

Copilot AI Apr 20, 2026


PlanResult is used in these test-only AtomicAction implementations, but it is never imported in this test module. This will raise NameError when the test class methods are executed; import PlanResult from embodichain.lab.sim.planners or reference it via the module where it is defined.

Comment on lines +82 to +85
        # Get current state if not provided
        if start_qpos is None:
            start_qpos = self._get_current_qpos()

Copilot AI Apr 20, 2026


ReachAction.execute calls self._get_current_qpos(), but AtomicAction in core.py does not define this helper (it exists in the design doc but not in the implementation). This will raise AttributeError at runtime unless all concrete actions re-implement it; consider adding _get_current_qpos() to AtomicAction (likely using robot.get_qpos(name=self.control_part)[0]).

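
The missing helper can be sketched in standalone form. `MockRobot` is a hypothetical stand-in, and the `[0]` batch indexing follows the reviewer's hint `robot.get_qpos(name=self.control_part)[0]`; none of this is the real embodichain API:

```python
# Minimal standalone sketch of the `_get_current_qpos` helper the review
# suggests adding to AtomicAction. MockRobot is a hypothetical stand-in.
class MockRobot:
    def get_qpos(self, name=None):
        # Stand-in for the real Robot API: returns a batch of joint positions.
        return [[0.0, 0.1, 0.2]]

class ActionBase:
    def __init__(self, robot, control_part):
        self.robot = robot
        self.control_part = control_part

    def _get_current_qpos(self):
        # Query the controlled part and take the first element of the batch,
        # mirroring the reviewer's suggested implementation.
        return self.robot.get_qpos(name=self.control_part)[0]

action = ActionBase(MockRobot(), "right_arm")
print(action._get_current_qpos())  # [0.0, 0.1, 0.2]
```

Defining this once on the base class avoids each concrete action re-implementing it (and avoids the `AttributeError` the review describes).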