Meet the keynotes at SCSSS!

Sep 19, 2023

The Scandinavian Conference on Systems and Software Safety is gearing up for its 11th edition, to be held in Stockholm on November 21–22. The event promises to be a gathering of minds from various sectors, offering a unique opportunity to delve into the intricate world of system and software safety in electronic systems.

In an era where electronic systems are pervasive across industries and integral to critical societal infrastructure, the need for robust safety measures has never been more pronounced. As these systems grow increasingly complex, interconnected, and autonomous, coupled with the continuous expansion of software components, new challenges emerge. Even well-established organizations are grappling with these challenges, necessitating innovative approaches that go beyond conventional best practices.

The importance of shared experiences cannot be overstated. Many organizations face similar hurdles in the realm of system and software safety, making the exchange of insights and solutions absolutely vital. The Scandinavian Conference on Systems and Software Safety has evolved into a pivotal rendezvous for safety experts hailing from industry, public sectors, and academic institutions. The conference serves as a platform for networking, knowledge-sharing, and forging new connections. 

Building on the success of previous editions, this year's conference is poised to bring together a diverse mix of participants, showcasing presentations and insights from various industries and academic domains. The event is co-hosted by KTH, SAFER, and Addalot.

Notably, the 11th edition of the conference boasts a lineup of distinguished keynote speakers:

  • Nancy Leveson, MIT: An esteemed authority in the field of systems and software safety.
  • Ibrahim Habli, University of York: Renowned for his contributions to the domain of safety research.
  • Lena Kecklund, MTO Säkerhet AB: An expert in safety practices with a wealth of experience.

With their expertise and insights, these keynote speakers are set to enrich the conference and inspire attendees as they navigate the evolving landscape of system and software safety.

Meet the experts:


Nancy Leveson is Jerome C. Hunsaker Professor of Aeronautics and Astronautics at MIT. Her Ph.D. is in Computer Science, but she has also studied math, cognitive psychology, and management. Dr. Leveson conducts research on all aspects of system safety, including modeling and analysis, design, operations, management, and human factors, as well as the general area of systems engineering. Her research is used in a wide variety of safety-critical industries, including aerospace, transportation, chemical plants, nuclear power, healthcare, and many others. She has been involved in many accident investigations across a variety of industries. A common element throughout all her work is an emphasis on applying systems theory and systems thinking to complex systems.

Dr. Leveson has received many honors, most recently the 2020 IEEE Medal for Environmental and Safety Technologies. She was elected to the National Academy of Engineering in 2000. She is the author of three books: Safeware: System Safety and Computers (1995), Engineering a Safer World (2012), and a new general textbook titled Introduction to System Safety Engineering (2023).


Abstract for her talk
The traditional assumptions about the causes of accidents and losses have served us well for several hundred years. But the world has changed enough that these assumptions are no longer true. Unprecedented levels of complexity and new technology, particularly computers, have created new causes of accidents. Traditional models of causality, and the tools based on them, are not effective in understanding and preventing these new types of losses. In this talk, I will explain why something new is needed and suggest that systems theory can provide the basis for an expanded model of causality (called STAMP) that better fits today’s world.



We are happy to present Ibrahim Habli, Professor of Safety-Critical Systems and Deputy Head of the Department of Computer Science, with responsibility for research, at the University of York. He currently leads the research activities of the £12M Assuring Autonomy International Programme (AAIP), funded by Lloyd’s Register Foundation, which is developing and applying methods for assuring AI-based autonomous systems in multiple industries.

He is the PI on the UKRI-funded project AR-TAS (Assuring Responsibility for Trustworthy Autonomous Systems) and a Co-I on the UKRI TAS Node in Resilience (REASON). In 2015, he was awarded a Royal Academy of Engineering Industrial Fellowship and collaborated with the National Health Service in England on evidence-based methods for assuring complex digital interventions. His research on safe and ethical assurance of AI systems has involved multidisciplinary collaborations including with law, economics, and philosophy.

Abstract for his talk
An assurance case is a structured argument, typically produced by safety engineers, to communicate confidence that a critical or complex system, such as an aircraft, will be acceptably safe within its intended context. Assurance cases often inform third party approval of a system. One emerging proposition within the trustworthy AI and autonomous systems (AI/AS) community is to use assurance cases to instil justified confidence that specific AI/AS will be ethically acceptable when operational in well-defined contexts. This talk brings together the assurance case methodology with a set of ethical principles to structure a principles-based ethics assurance argument pattern. The principles are justice, beneficence, non-maleficence, and respect for human autonomy, with the principle of transparency playing a supporting role. The argument pattern—shortened to the acronym PRAISE—is described. The objective of the proposed PRAISE argument pattern is to provide a reusable template for individual ethics assurance cases, by which engineers, developers, operators, or regulators could justify, communicate, or challenge a claim about the overall ethical acceptability of the use of a specific AI/AS in a given socio-technical context.


Lena Kecklund, Ph.D., is a senior consultant and CEO at MTO Säkerhet. Lena's areas of expertise include safety culture, the interaction between humans, technology, and organization (HTO), and safety management. She has extensive experience from a number of safety-critical operations such as rail, road, aviation, shipping, patient safety, and industry. Lena works, among other things, with investigations, training, and advice to managers in Sweden and internationally. She was responsible for MTO Säkerhet's participation in the revision of the European railway safety legislation carried out on behalf of the European Union Agency for Railways (ERA). Lena has participated in more than fifteen accident investigations for the Swedish Accident Investigation Authority. She has written the book ”Den (o)mänskliga faktorn” with professor emeritus Bengt Sandblad.


The increasing use of digital technology is changing the circumstances and conditions for human work, in the workplace as well as in society. The systems that humans must master become more and more complex, and increasing automation leads to greater dependency on machines or robots to carry out part of, or all of, the work.

Systems that fail to account for human interaction lead to poor safety and inadequate work environments, which in turn impact negatively on business development. The inclusion of the human factor is paramount and should be seen as a strength rather than a weakness. Her book addresses this, emphasizing how to develop effective, safe, and sustainable digital systems, with a focus on principles, methods, and approaches to improve working life.

System safety must therefore be based on addressing the interaction between Humans, Technology, and Organizations. You will learn how to apply the HTO framework in different domains.