DRIFT

Deep Reinforcement Learning for Intelligent Floating Platforms Trajectories

Autonomous Space Robotics PhD Research Project

Project Overview

This project introduces a deep-reinforcement-learning-based suite for controlling floating platforms in both simulated and real-world environments. Floating platforms serve as versatile test-beds that emulate microgravity conditions on Earth, making them useful for testing autonomous navigation systems for space applications. Our approach addresses the system and environmental uncertainties involved in controlling such platforms by training policies capable of precise maneuvers under dynamic and unpredictable conditions. Leveraging Deep Reinforcement Learning (DRL), the suite achieves robustness, adaptability, and good transferability from simulation to reality. The framework offers fast training times, large-scale testing capabilities, rich visualization options, and ROS bindings for integration with real-world robotic systems. Being open source, the suite serves as a comprehensive platform for practitioners who want to replicate similar research in their own simulated environments and labs.

Framework Overview

[Figure: DRIFT framework overview]

Framework employed for training and evaluation: on the left, the agent's interaction with the simulation environments during both training and evaluation, highlighting the injection of disturbances into the loop; on the right, deployment of the trained policy, which performs open-loop control on the real floating-platform (FP) system.
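The "disturbances in the loop" idea can be sketched as follows. This is a minimal illustration, not the project's code: the environment class, its toy double-integrator dynamics, and the disturbance model are all assumptions made for the example.

```python
import numpy as np

class FloatingPlatformEnv:
    """Toy 2D double-integrator stand-in for a floating platform.

    A random disturbance force is injected at every step, emulating the
    unmodeled effects (air currents, table tilt, thruster asymmetry) that
    the trained policy must tolerate. Hypothetical, for illustration only.
    """

    def __init__(self, dt=0.2, disturbance_scale=0.05):
        self.dt = dt
        self.disturbance_scale = disturbance_scale
        self.reset()

    def reset(self):
        self.state = np.zeros(4)  # [x, y, vx, vy]
        return self.state.copy()

    def step(self, action):
        # Disturbance drawn fresh each step so the policy never sees
        # exactly the same dynamics twice.
        disturbance = np.random.normal(0.0, self.disturbance_scale, size=2)
        accel = np.clip(action, -1.0, 1.0) + disturbance
        self.state[2:] += accel * self.dt
        self.state[:2] += self.state[2:] * self.dt
        reward = -np.linalg.norm(self.state[:2])  # drive toward the origin
        return self.state.copy(), reward

env = FloatingPlatformEnv()
obs = env.reset()
for _ in range(10):
    action = -obs[:2]  # naive proportional policy standing in for the DRL agent
    obs, reward = env.step(action)
```

In a real setup the `action` would come from the learned policy network, and the disturbance model would be tuned to match the lab platform.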

Key Results

Simulation Performance

Simulations run in NVIDIA Omniverse Isaac Sim, which trains a model in parallel across multiple environments; a policy converges in less than 10 minutes.
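The speed-up from parallel environments comes from stepping many platform instances with one batched update, as GPU simulators like Isaac Sim do. The sketch below shows the idea with NumPy; the environment count, dynamics, and naive policy are illustrative assumptions, not the project's configuration.

```python
import numpy as np

N_ENVS = 1024          # number of parallel environments (illustrative)
dt = 0.2               # simulation timestep

# One row per environment: [x, y, vx, vy]
states = np.zeros((N_ENVS, 4))
goals = np.random.uniform(-1.0, 1.0, (N_ENVS, 2))

def batched_step(states, actions):
    """Advance every environment one step with a single vectorized update."""
    states = states.copy()
    states[:, 2:] += np.clip(actions, -1.0, 1.0) * dt   # velocity update
    states[:, :2] += states[:, 2:] * dt                 # position update
    rewards = -np.linalg.norm(states[:, :2] - goals, axis=1)
    return states, rewards

actions = goals - states[:, :2]   # naive batched policy toward each goal
states, rewards = batched_step(states, actions)
```

Because every environment advances in the same array operation, collecting experience scales with hardware rather than with a Python loop over environments, which is what makes sub-10-minute training plausible.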

[Figures: DRIFT simulation performance]

Real Laboratory Validation

Direct transfer of simulation-trained policies to the physical floating platform demonstrates successful sim-to-real capabilities. The system maintains stable control despite real-world uncertainties including air currents, sensor noise, and mechanical variations.
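A common way to obtain this kind of sim-to-real robustness is domain randomization: resampling physical parameters each episode so the policy cannot overfit to one exact simulator. The sketch below illustrates the technique in general; the parameter names and ranges are assumptions for the example, not the values used in this project.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode_params():
    """Draw randomized physical parameters for one training episode.

    Ranges are hypothetical; in practice they are chosen to bracket the
    uncertainty of the real platform (mass, actuation, sensing, latency).
    """
    return {
        "mass": rng.uniform(8.0, 12.0),              # kg, platform mass
        "thruster_gain": rng.uniform(0.8, 1.2),      # actuator scaling factor
        "sensor_noise_std": rng.uniform(0.0, 0.02),  # m, position-sensor noise
        "action_delay_steps": int(rng.integers(0, 3)),  # control latency in steps
    }

params = sample_episode_params()
```

A policy trained across many such samples tends to treat the real platform as just one more draw from the randomized family, which is the intuition behind the direct transfer described above.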

[Figures: DRIFT real-world experiments]

Performance Highlights

Position accuracy: ±2.5 cm
Orientation precision: ±1.2°
Control frequency: 5 Hz
Success rate: 95%

Related Publications

DRIFT: Deep Reinforcement Learning for Intelligent Floating Platforms Trajectories

Matteo El-Hariry, Antoine Richard, Vivek Muralidharan, Matthieu Geist, Miguel Olivares-Mendez

IROS '24 (oral presentation) & MASSpace'24 (International Workshop on Autonomous Agents and Multi-Agent Systems for Space Applications)
