RAFT: Regularized Adversarial Fine-Tuning to Enhance Deep Reinforcement Learning for Self-Parking

Pighetti A.; Bellotti F.; Berta R.; Lazzaroni L.
2025-01-01

Abstract

Deep reinforcement learning (DRL) is a powerful method for local motion planning in automated driving. However, training of DRL agents is difficult and subject to instability. We propose regularized adversarial fine-tuning (RAFT), an adversarial DRL training framework, and test it in an automated parking (AP) scenario in the car learning to act (CARLA) simulator. Results show that RAFT enhances the performance of a state-of-the-art agent in its original operational design domain (ODD), i.e., static parking without an adversary, by improving its robustness, as evidenced by gains across all measured metrics: the success rate rises, the mean alignment error shrinks, and the gear reversal rate drops. Notably, we achieved this result not by designing an ad-hoc reward function, but simply by adding a general regularization term to the baseline adversary reward. The results open up new research perspectives for extending the ODD of DRL-based AP to dynamic scenes.
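The abstract's key ingredient is a general regularization term added to the baseline adversary reward. The exact form of the term and its coefficient are not stated here, so the sketch below is purely illustrative: it assumes an L2 penalty on the adversary's action with a hypothetical weight `beta`, not the paper's actual formulation.

```python
def regularized_adversary_reward(baseline_reward, adversary_action, beta=0.1):
    """Illustrative adversary reward with a generic regularization term.

    Hypothetical form: the paper only states that a general regularization
    term is added to the baseline adversary reward; the L2 penalty and the
    coefficient `beta` are assumptions made for this sketch.
    """
    # Penalize large adversarial perturbations so the adversary stays
    # plausible instead of destabilizing fine-tuning with extreme actions.
    penalty = beta * sum(a * a for a in adversary_action)
    return baseline_reward - penalty
```

With this shaping, an adversary that achieves the same baseline reward with a smaller perturbation scores higher, which is one standard way a regularizer can keep adversarial training stable.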
Files for this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11567/1274021
