DC11 Project: Enhancing Trustworthy AI Integration in Safety-Critical Systems

 

Doctoral Candidate: Dina Tri Utari

 

Dina Tri Utari, MSc

 

Dina Tri Utari received her Master’s degree in Mathematical Science, majoring in Statistics and Data Science, from Universitas Gadjah Mada, Indonesia. Her research interests include Artificial Intelligence (AI), Machine Learning (ML), and Data Analysis, particularly the creation of data-driven systems that support accurate decision-making and prediction. Her work integrates statistical theory with computational methods to address complex real-world problems in data-centric AI, closely aligned with the goals of the TUAI project.

 

Dina Tri_Utari_Self-presentation.pdf

 

Main Supervisor: Shen Yin (NTNU)

Co-Supervisors: David Camacho (UPM), Volker Stolz (HVL), Francesco Piccialli (UNINA)

R&D cooperation: AIUT

 

Objectives: 

The overarching goal of WP5 is to explore sustainable and trustworthy AI approaches that can be safely and efficiently deployed in cyber-physical systems (CPS), particularly in safety-critical industrial domains such as automotive systems.

This doctoral project aims to bridge the gap between academic research on trustworthy AI and its practical deployment in safety-critical CPS environments. The work will focus on improving the transparency, fault tolerance, and dependability of AI models used for perception, prediction, and decision-making. The project will also address sustainability, ensuring that models are efficient and resource-conscious, while maintaining compliance with industrial safety standards.

Expected accomplishments include:

  1. Development of a methodological framework for assessing trustworthiness and sustainability of AI in CPS.
  2. Design and evaluation of explainable and fault-tolerant AI modules for safety-critical scenarios.
  3. Contributions to WP5 deliverables, including D5.1 and D5.2, as described in the TUAI project plan.
  4. Validation of methods through experimental work at NTNU and joint testing with Continental.

Key Research Objectives:

  1. Identify the main challenges and requirements for trustworthy AI in safety-critical CPS, focusing on explainability, robustness, and safety compliance.
  2. Develop interpretable AI models with enhanced fault tolerance and performance stability under uncertainty.
  3. Propose an evaluation and assurance framework that aligns with industrial standards (e.g., ISO 26262).
  4. Integrate sustainability considerations into AI model design and deployment.
  5. Collaborate with industrial and academic partners to validate results in practical applications.

 

Expected Results:  

Framework and metrics for assessing AI trustworthiness and sustainability.

Prototype algorithms and software tools for trustworthy AI integration.

Experimental case studies conducted with NTNU and Continental testbeds.

Peer-reviewed publications and presentations at international conferences.

Contributions to open-source implementations or datasets, following TUAI’s open science policy.

 

Planned secondments: UPM (4 months); HVL (4 months); UNINA (4 months)

 

Enrolment in Doctoral degree: NTNU