feat: Add atomic action abstraction layer for embodied AI motion generation#239
feat: Add atomic action abstraction layer for embodied AI motion generation#239
Conversation
- Fix dataclass field ordering in ObjectSemantics (non-default follows default) - Convert batch_size property to get_batch_size() method for consistency - Add missing grasp_types field and get_grasp_by_type() method to GraspPose - Add missing point_types field and get_points_by_type() method to InteractionPoints - Add missing velocity_limit and acceleration_limit fields to ActionCfg Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
There was a problem hiding this comment.
Pull request overview
This PR introduces a new embodichain.lab.sim.atomic_actions module intended to provide a unified “atomic action” abstraction (reach/grasp/move/release) on top of the existing motion-planning stack, plus a small registry/engine and accompanying design spec + unit tests for core data types.
Changes:
- Added core atomic-action data models (
Affordance,ObjectSemantics,ActionCfg) and anAtomicActionbase class. - Added default atomic actions (
ReachAction,GraspAction,MoveAction,ReleaseAction) and anAtomicActionEnginewith a global registry. - Added a design document and unit tests for core affordance/registry helpers.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 14 comments.
Show a summary per file
| File | Description |
|---|---|
embodichain/lab/sim/atomic_actions/core.py |
Defines affordance/semantics/config + base action class utilities. |
embodichain/lab/sim/atomic_actions/actions.py |
Implements reach/grasp/move/release actions using MotionGenerator/TOPPRA. |
embodichain/lab/sim/atomic_actions/engine.py |
Adds engine orchestration + global registry + placeholder semantic analyzer. |
embodichain/lab/sim/atomic_actions/__init__.py |
Exposes public API for atomic_actions package. |
tests/sim/atomic_actions/test_core.py |
Adds unit tests for affordances, ObjectSemantics, ActionCfg, registry helpers. |
tests/sim/atomic_actions/__init__.py |
Marks test package for atomic actions. |
docs/superpowers/specs/2026-04-17-atomic-action-abstraction-design.md |
Design specification for the new abstraction layer. |
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
| ): | ||
| self.motion_generator = motion_generator | ||
| self.robot = motion_generator.robot | ||
| self.device = self.robot.device |
There was a problem hiding this comment.
AtomicAction.__init__ only accepts motion_generator, but other code in this PR (e.g., action implementations) calls super().__init__(motion_generator, robot, control_part, device) and later relies on self.control_part. Either update AtomicAction.__init__ to accept/store robot, control_part, and device (and keep it consistent with the concrete actions/engine), or update all actions/engine to match the existing constructor and set control_part elsewhere.
| ): | |
| self.motion_generator = motion_generator | |
| self.robot = motion_generator.robot | |
| self.device = self.robot.device | |
| robot: Optional[Robot] = None, | |
| control_part: Optional[str] = None, | |
| device: Optional[torch.device] = None, | |
| ): | |
| self.motion_generator = motion_generator | |
| self.robot = robot if robot is not None else motion_generator.robot | |
| self.control_part = ( | |
| control_part if control_part is not None else ActionCfg.control_part | |
| ) | |
| self.device = device if device is not None else self.robot.device |
| device: torch.device = torch.device("cuda"), | ||
| interpolation_type: str = "linear", # "linear", "cubic", "toppra" | ||
| ): | ||
| super().__init__(motion_generator, robot, control_part, device) |
There was a problem hiding this comment.
ReachAction.__init__ calls super().__init__(motion_generator, robot, control_part, device), but AtomicAction.__init__ currently only takes motion_generator. This mismatch will prevent instantiation of default actions in AtomicActionEngine.
| super().__init__(motion_generator, robot, control_part, device) | |
| super().__init__(motion_generator) | |
| self.robot = robot | |
| self.control_part = control_part | |
| self.device = device |
| target_states = [ | ||
| PlanState(qpos=start_qpos, move_type=MoveType.JOINT_MOVE), | ||
| PlanState(xpos=approach_pose, move_type=MoveType.EEF_MOVE), | ||
| ] | ||
|
|
||
| # Plan trajectory | ||
| options = MotionGenOptions( | ||
| control_part=self.control_part, | ||
| is_interpolate=True, | ||
| is_linear=self.interpolation_type == "linear", | ||
| interpolate_position_step=0.002, | ||
| plan_opts=ToppraPlanOptions( | ||
| sample_interval=kwargs.get("sample_interval", 30), | ||
| ), | ||
| ) |
There was a problem hiding this comment.
ReachAction.execute builds target_states mixing MoveType.JOINT_MOVE and MoveType.EEF_MOVE while also setting MotionGenOptions(is_interpolate=True). MotionGenerator.generate only supports pre-interpolation when all states share the same move_type, so this will error (or produce invalid stacks). Use options.start_qpos for the start state and make all target_states EEF_MOVE, or disable pre-interpolation / split planning into separate stages.
| ) | ||
| success, _ = self.robot.compute_ik( | ||
| pose=target_pose.unsqueeze(0), | ||
| qpos_seed=qpos_seed.unsqueeze(0), |
There was a problem hiding this comment.
ReachAction.validate passes qpos_seed= into Robot.compute_ik, but the API uses joint_seed=. As written, this will raise a TypeError when validation is called.
| qpos_seed=qpos_seed.unsqueeze(0), | |
| joint_seed=qpos_seed.unsqueeze(0), |
| velocity_limit=velocity_limit, | ||
| acceleration_limit=acceleration_limit, |
There was a problem hiding this comment.
ToppraPlanOptions does not support velocity_limit / acceleration_limit keyword args (it uses a constraints dict instead). Passing these will raise a TypeError when constructing ToppraPlanOptions; map the limits into constraints={"velocity": ..., "acceleration": ...} or extend the planner options type if needed.
| velocity_limit=velocity_limit, | |
| acceleration_limit=acceleration_limit, | |
| constraints={ | |
| "velocity": velocity_limit, | |
| "acceleration": acceleration_limit, | |
| }, |
| ) | ||
| success, _ = self.robot.compute_ik( | ||
| pose=grasp_pose.unsqueeze(0), | ||
| qpos_seed=qpos_seed.unsqueeze(0), |
There was a problem hiding this comment.
GraspAction.validate passes qpos_seed= into Robot.compute_ik, but the API uses joint_seed=. This will raise a TypeError when validation is called.
| qpos_seed=qpos_seed.unsqueeze(0), | |
| joint_seed=qpos_seed.unsqueeze(0), |
| is_linear=is_linear, | ||
| interpolate_position_step=0.002, | ||
| plan_opts=ToppraPlanOptions( | ||
| sample_interval=kwargs.get("sample_interval", 30), |
There was a problem hiding this comment.
MoveAction.execute references kwargs.get(...) when building ToppraPlanOptions, but execute() does not accept **kwargs. This will raise NameError: name 'kwargs' is not defined. Either add **kwargs to the signature or remove the kwargs usage.
| sample_interval=kwargs.get("sample_interval", 30), | |
| sample_interval=30, |
| from .core import GraspPose, InteractionPoints | ||
|
|
||
| # Generate default grasp poses based on object type | ||
| default_poses = torch.eye(4).unsqueeze(0) | ||
| default_poses[0, 2, 3] = 0.1 # Default offset | ||
|
|
||
| grasp_affordance = GraspPose( | ||
| object_label=label, | ||
| poses=default_poses, | ||
| grasp_types=["default"], | ||
| ) | ||
|
|
||
| # Default interaction points | ||
| interaction_affordance = InteractionPoints( | ||
| object_label=label, | ||
| points=torch.zeros(1, 3), | ||
| point_types=["contact"], | ||
| ) |
There was a problem hiding this comment.
SemanticAnalyzer.analyze creates tensors (torch.eye, torch.zeros) on the default device (CPU). If the rest of the action stack operates on GPU, downstream ops like object_pose @ grasp_pose can fail with device mismatch. Consider constructing these tensors on the engine/analyzer device (or explicitly moving affordance tensors to self.device).
| from embodichain.lab.sim.atomic_actions import AtomicAction | ||
|
|
||
| class TestAction(AtomicAction): | ||
| def execute(self, target, **kwargs): | ||
| return PlanResult(success=True) | ||
|
|
There was a problem hiding this comment.
PlanResult is used in these test-only AtomicAction implementations, but it is never imported in this test module. This will raise NameError when the test class methods are executed; import PlanResult from embodichain.lab.sim.planners or reference it via the module where it is defined.
| # Get current state if not provided | ||
| if start_qpos is None: | ||
| start_qpos = self._get_current_qpos() | ||
|
|
There was a problem hiding this comment.
ReachAction.execute calls self._get_current_qpos(), but AtomicAction in core.py does not define this helper (it exists in the design doc but not in the implementation). This will raise AttributeError at runtime unless all concrete actions re-implement it; consider adding _get_current_qpos() to AtomicAction (likely using robot.get_qpos(name=self.control_part)[0]).
Description
This PR introduces an atomic action abstraction layer for embodied AI motion generation. The implementation provides a unified interface for atomic actions like reach, grasp, move, etc., with support for semantic object understanding and extensible custom action registration.
Key Components
Core Classes (
core.py):Affordance- Base class for affordance data (GraspPose, InteractionPoints)ObjectSemantics- Semantic information about interaction targetsActionCfg- Configuration class for atomic actionsAtomicAction- Abstract base class for all atomic actionsAction Implementations (
actions.py):ReachAction- Reach to target pose or objectGraspAction- Execute grasp motionReleaseAction- Release graspMoveAction- Move to target poseAction Engine (
engine.py):AtomicActionEngine- Execution engine for atomic actionsDesign Principles
MotionGenerator,PlanResult, and IK/FK solversDesign Document
See
docs/superpowers/specs/2026-04-17-atomic-action-abstraction-design.mdfor detailed design specification.Type of change
Testing
Unit tests provided in
tests/sim/atomic_actions/test_core.pycovering:All 12 tests passing.
Checklist
black .command to format the code base.🤖 Generated with Claude Code