
Modeling Interactions Between Autonomous Agents in a Multi-Agent Self-Awareness Architecture

Abrham Shiferaw Alemaw; Giulia Slavic; Pamela Zontone; Lucio Marcenaro; Carlo Regazzoni
2025-01-01

Abstract

Learning from experience is a fundamental capability of intelligent agents. Autonomous systems rely on sensors that provide data about the environment and their internal state to their perception systems for learning and inference. From these data, such systems can also learn Self-Aware and Situation-Aware generative modules to localize themselves and interact with the environment. In this paper, we propose a self-aware cognitive architecture capable of performing tasks where the interactions between the self-state of an agent and the surrounding environment are explicitly and dynamically represented. We specifically develop a Deep Learning (DL) based Self-Aware interaction model, empowered by learning from Multi-Modal Perception (MMP) and World Models using multi-sensory data, in a novel Multi-Agent Self-Awareness Architecture (MASAA). Two sub-modules are developed, the Situation Model (SM) and the First-Person Model (FPM), which address different but interrelated aspects of the World Model (WM). The MMP model, in turn, learns the mapping of different sensory perceptions into Exteroceptive (EI) and Proprioceptive (PI) latent information. The WM then uses the learned MMP model as experience to predict dynamic self-behaviors and interaction patterns within the experienced environment. The WM and MMP models are learned in a data-driven way, with the lower-dimensional odometry data guiding the learning of the higher-dimensional video data, thus generating coupled Generalized State Hierarchical Dynamic Bayesian Networks (GS-HDBNs). We test our model on the KITTI, CARLA, and iCab datasets, achieving high performance and a low average localization error (RMSE) of 2.897% when considering two interacting agents.
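The abstract sketches a two-part design: an MMP model that maps raw video and odometry into Exteroceptive (EI) and Proprioceptive (PI) latents, and a World Model that predicts how that joint latent state evolves over time. As a rough illustration only, the following is a minimal PyTorch sketch of this split. All module names, layer sizes, and input shapes (MMPEncoder, WorldModel, 64x64 RGB frames, 4-D odometry) are hypothetical stand-ins; the paper's actual method is built on coupled hierarchical Dynamic Bayesian Networks, not the single GRU transition used here.

```python
import torch
import torch.nn as nn

class MMPEncoder(nn.Module):
    """Hypothetical multi-modal perception mapping:
    video frames -> Exteroceptive (EI) latents,
    odometry     -> Proprioceptive (PI) latents."""
    def __init__(self, ei_dim=32, pi_dim=8):
        super().__init__()
        # Exteroceptive branch: small CNN over 64x64 RGB frames.
        self.video_enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, ei_dim),
        )
        # Proprioceptive branch: MLP over odometry (x, y, vx, vy).
        self.odom_enc = nn.Sequential(
            nn.Linear(4, 32), nn.ReLU(),
            nn.Linear(32, pi_dim),
        )

    def forward(self, frame, odom):
        return self.video_enc(frame), self.odom_enc(odom)

class WorldModel(nn.Module):
    """Stand-in for the WM dynamics: predicts the next joint
    latent state from the current EI/PI latents and a recurrent
    state (in place of the paper's SM/FPM hierarchy)."""
    def __init__(self, ei_dim=32, pi_dim=8):
        super().__init__()
        self.transition = nn.GRUCell(ei_dim + pi_dim, ei_dim + pi_dim)

    def forward(self, ei, pi, h):
        z = torch.cat([ei, pi], dim=-1)  # joint latent observation
        return self.transition(z, h)     # predicted next latent state

# Usage: encode one time step and roll the latent state forward.
mmp, wm = MMPEncoder(), WorldModel()
frame = torch.randn(1, 3, 64, 64)   # one RGB frame
odom = torch.randn(1, 4)            # one odometry sample
h = torch.zeros(1, 40)              # joint latent state (ei + pi dims)
ei, pi = mmp(frame, odom)
h_next = wm(ei, pi, h)
```

The deliberately small proprioceptive branch mirrors the abstract's point that the lower-dimensional odometry data guide the learning of the higher-dimensional video representation.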

Use this identifier to cite or link to this item: https://hdl.handle.net/11567/1263458

Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: not available