AITHENA

31/10/2025

    Connected and Cooperative Automotive Mobility (CCAM) solutions have emerged thanks to novel Artificial Intelligence (AI) models that can be trained on huge amounts of data to produce driving functions with better-than-human performance under certain conditions.

    General abstract

    The race in AI continues to drive the development of hardware/software (HW/SW) frameworks that manage and process ever-larger real and synthetic datasets to train increasingly accurate AI models.

    However, AI remains largely unexplored with respect to explainability (interpretability of model functioning), privacy preservation (protection against exposure of sensitive data), ethics (bias and wanted/unwanted behaviour), and accountability (responsibility for AI outputs). These features will establish the basis of trustworthy AI, a novel paradigm for fully understanding and trusting AI in operation while using it at its full capabilities for the benefit of society.

    AITHENA will contribute to building Explainable AI (XAI) into CCAM development and testing frameworks, researching three main AI pillars: data (real/synthetic data management), models (data fusion, hybrid AI approaches), and testing (physical/virtual XiL set-ups with scalable MLOps).

    A human-centric methodology will be created to derive trustworthy AI dimensions from the needs identified by user groups in CCAM applications.

    AITHENA will innovate by proposing a set of Key Performance Indicators (KPIs) for XAI, together with an analysis of the trade-offs between these dimensions.

    Demonstrators will show the AITHENA methodology in four critical use cases: perception (what the AI perceives, and why), situational awareness (what the AI understands about the current driving environment, including the driver state), decision (why a certain decision is taken), and traffic management (how transport-level applications interoperate with AI-enabled systems operating at vehicle level).

    The data and tools created will be made available via European data-sharing initiatives (OpenData and OpenTools) to foster research on trustworthy AI for CCAM.

    Global Objectives

    • Creation of a comprehensive methodology for the development and testing of AI-based CCAM systems, addressing aspects identified from CCAM user-group needs and from use cases across the CCAM layers (perception, situational awareness, decision and mobility).
    • AITHENA will research and develop example explainable AI (XAI) models for CCAM solutions, following the human-centric methodology.
    • AITHENA will investigate AI ethics to design, develop, implement and operate trustworthy AI systems for CCAM applications. Human factors will be the central pillar in defining the RTD areas, with the goal of evaluating user acceptance across different user types.
    • AITHENA shall specify, design and develop a life-cycle management framework for ML algorithms and CCAM applications. The platform shall connect DevOps tools and the data-science workflow with CCAM-specific requirements on data provenance and privacy preservation. It shall enable automation of ML workflows to accelerate and trace model building, training and experimentation, scaling with increasing amounts of data while maintaining performance and trustworthiness KPIs.

    • Utilisation of existing data resources (open datasets) in the context of CCAM use cases, including edge-case-focused datasets, as the baseline upon which to build new synthetic data. A Digital Twin of the vehicle, sensors and environment will be created with simulation engines, using standardised languages to produce data.

    • AITHENA aims to adopt the HEADSTART methodology for testing highly automated and connected driving vehicles and to extend it with explicit support and management of the validation process for AI-based functions. The EU Artificial Intelligence Act brings the opportunity to incorporate non-deterministic function behaviour into the validation chain, bringing functional safety and safety of the intended functionality (SOTIF) into the MLOps and AI development pipelines.

    • Define new metrics (beyond accuracy) for the trustworthy AI features (privacy, robustness, explainability, accountability, ethics) and analyse the trade-offs between them, to create a new paradigm in Key Performance Indicators (KPIs) for AI-based unit testing in CCAM validation. These metrics shall translate high-level regulatory and industrial concepts into practical principles that are easy for AI developers to follow and interpret, bridging the gap between legal/ethical definitions of wanted/unwanted behaviour and engineering methods.
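    The multi-dimensional KPI idea in the last objective can be illustrated with a small sketch: model variants are scored on several normalised trustworthiness dimensions, and the trade-off analysis identifies which variants are not dominated by any other (the Pareto frontier). All metric names, model names, scores and weights below are hypothetical placeholders for illustration, not AITHENA deliverables.

    ```python
    # Hypothetical sketch of multi-dimensional KPI trade-off analysis for
    # AI-based unit testing. Dimensions and scores are illustrative only.

    KPI_DIMENSIONS = ("accuracy", "privacy", "robustness", "explainability")

    def weighted_score(kpis: dict, weights: dict) -> float:
        """Collapse per-dimension KPIs (each normalised to [0, 1]) into one score."""
        total = sum(weights[d] for d in KPI_DIMENSIONS)
        return sum(kpis[d] * weights[d] for d in KPI_DIMENSIONS) / total

    def dominates(a: dict, b: dict) -> bool:
        """Pareto dominance: a is at least as good as b on every dimension
        and strictly better on at least one."""
        return all(a[d] >= b[d] for d in KPI_DIMENSIONS) and any(
            a[d] > b[d] for d in KPI_DIMENSIONS
        )

    def pareto_front(candidates: dict) -> list:
        """Model variants not dominated by any other: the trade-off frontier."""
        return [
            name
            for name, kpis in candidates.items()
            if not any(
                dominates(other, kpis)
                for other_name, other in candidates.items()
                if other_name != name
            )
        ]

    # Three hypothetical model variants with illustrative normalised scores:
    # a high-accuracy baseline, a differentially-private variant, and a
    # hybrid variant tuned for explainability.
    models = {
        "baseline":   {"accuracy": 0.95, "privacy": 0.40, "robustness": 0.70, "explainability": 0.30},
        "dp_trained": {"accuracy": 0.90, "privacy": 0.85, "robustness": 0.65, "explainability": 0.35},
        "hybrid_xai": {"accuracy": 0.88, "privacy": 0.60, "robustness": 0.75, "explainability": 0.80},
    }

    front = pareto_front(models)
    # Here no variant dominates another, so all three sit on the frontier:
    # the choice between them is an explicit trade-off, which is exactly
    # what per-dimension KPIs make visible compared with accuracy alone.
    ```

    A weighted aggregate (`weighted_score`) can then rank frontier candidates once stakeholders agree on how much each trustworthiness dimension matters for a given CCAM use case.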

     

    This project has received funding from the European Union’s Horizon Europe Research and Innovation Programme under Grant Agreement number 101076754.

     
