Project

Trustworthy AI from a Traffic Safety Perspective

Period
10 January–31 August 2022
Project manager
Else-Marie Malmek

The automotive industry is developing and introducing automated vehicles into the transport/mobility system, and these systems must be both legally and ethically compliant, particularly when it comes to data handling. The automated driving systems (ADS) in the EGO vehicles consist of a “system of systems” supplied by several tiers, and the EGO vehicles also need to cooperate with ecosystems outside the vehicle, e.g. the infrastructure. These systems collect huge amounts of data. From a legal and ethical perspective, the initial “in the moment” use of the data poses less of a challenge than its secondary uses, and the latter are therefore our primary focus.
AI entails great opportunities but also poses significant integrity risks. AI can be used to increase efficiency, improve safety and prevent traffic accidents. The security risks, however, can and will affect individuals, corporations, governments and society as a whole negatively unless they are handled in an effective and appropriate way. If the AI systems used in self-driving vehicles fail, this can lead to a loss of trust, and people will not accept using such vehicles. If, for example, algorithms and machine learning contain bias, companies can expose individuals to human rights violations, e.g. discrimination, surveillance, breaches of integrity and violations relating to personal data. The automated transport system must be both safe and effective while still fulfilling the legal and ethical requirements. The benefits and the risks of AI and ADS must therefore be balanced. Trustworthy AI creates transparency and predictability, and takes responsibility for how the algorithms are scaled in a broader ethical context.

We will re-use “The SEVS Way”, a strategic analysis methodology and process for handling complexity in a systematic way, to address these issues.

Purpose:
• Enhance Trustworthy AI through legal and ethical compliance in the context of assisted and automated system solutions.
• Form a theory of where the line should be drawn between public and private interests.
• Investigate which data collected by critical AI applications must and should be filtered out from retention and secondary use, from a legal and ethical perspective.
• Further develop and strengthen SAFER’s Open Innovation Platform SEVS, www.sevs.se.

Expected results:
• Gain knowledge about data collection to enable Trustworthy AI, as a basis for national funding and for possible international collaborations and funding.
• Identify societal and technical “difficult questions” and challenges, à la the SEVS methodology, as well as business risks and opportunities related to automation and data handling.
• A stakeholder analysis.
• A broad commitment from a multi-disciplinary team, which can form the basis of a strong consortium for an application for a larger research project.

Contribution to SAFER:
• To make SAFER (and its partners) the leading actor regarding Trustworthy AI.
• Trustworthy AI requires a multi-disciplinary, holistic and system-based approach that, right from the start, takes into account the ethical and legal aspects, the reliability of all actors, and the processes that are part of the system’s socio-technical context throughout its life cycle. This project will therefore strengthen SAFER as an open innovation platform for cooperation on traffic safety.

Short facts

Research area
Systems for accident prevention and AD
Financier(s)
SAFER Pre-Studies Phase 5
Partners
Zenseact
Volvo Cars
Malmeken
REVERE
Blackbird Law
Project type
SAFER Pre-study