Use our registration form to request the joining instructions for upcoming seminars. 

Please ensure you register by 12:00 GMT on the day of the seminar.

_____________________________________


Upcoming Events


Resilience Talk 40 - Monday 25 March 2024,  15:00-16:00 GMT

Resilient federated learning: Where performance meets constraints

Dr. Yang Lu, Lancaster University


Federated learning is increasingly employed in multi-agent systems such as cyber-physical applications, enhancing network intelligence. However, these applications in turn impose new constraints and challenges on existing frameworks, including user mobility, data heterogeneity, sparse communication topologies, the absence of a central server, open and insecure communication links, and external and internal attackers. In this talk, I will discuss three of my recent works that attempt to achieve high-quality federated learning while tackling these challenges. Specifically, I will cover adaptive federated learning over dynamic and heterogeneous users, robust federated learning against poisoning attacks, and privacy-preserving peer-to-peer federated learning.

_____________________________________



Upcoming TAS Hub Events



TAS Hub Doctoral Training Network Seminars

Find out more about joining the DTN seminars


TAS Node in Governance and Regulation Seminars

https://governance.tas.ac.uk/seminars-and-events/


TAS Node in Security Seminars

https://www.lancaster.ac.uk/security-lancaster/

_____________________________________



Recent Past Events

Resilience Talk 39 - Monday 12 February 2024,  15:00-16:00 GMT

Supervision of Intelligent Cyber-physical Systems

Dr. Mario Gleirscher, University of Bremen


Supervision has always been a key element in the control of property-critical systems. Supervision techniques have evolved over the decades, especially with the advent of complex and interconnected cyber-physical systems involving artificial intelligence techniques in their critical components (e.g. neural network-based sensor systems). This talk will provide a conceptual overview of supervision, followed by a discussion of designing and testing supervisors (i.e. controllers responsible for supervision) and their components.


Resilience Talk 38 - Monday 29 January 2024,  15:00-16:00 GMT

Sociotechnical Synergy: How Social Psychology can Help the Development of Trustworthy Autonomous Systems

Dr. Anastasia Kordoni, Lancaster University


In the pursuit of advancing autonomous systems, there has been a predominant focus on technical capabilities to ensure trustworthiness and resilience, often at the expense of considering social dynamics. In this talk, I will explore a sociotechnical approach that accentuates the pivotal contribution of social psychology in the development process. While existing research has extensively examined human factors influencing development, this talk transcends individual differences to delve into the complexities of social groups and their behaviours. I will discuss the challenges associated with understanding and operationalizing group behaviours and group processes in a way that is intelligible to autonomous systems. I will highlight recent techniques and examples that address these challenges, demonstrating the integration of social understanding into the development of autonomous systems and elucidating its implications. 

Resilience Talk 37 - Monday 15 January 2024,  15:00-16:00 GMT

Engineering Trustworthy AI Systems

Prof. Foutse Khomh, Polytechnique Montréal


Nowadays, we are witnessing an increasing adoption of Machine Learning (ML) for solving complex real-world problems. However, despite some reports showing that ML models can produce results comparable to, and even superior to, those of human experts, they are often vulnerable to carefully crafted perturbations and are prone to bias and hallucinations. Ensuring the trustworthiness of software systems enabled by machine learning is a very challenging task. In this talk, I will discuss the challenges that we must overcome to build trustworthy ML-enabled systems and present some recent techniques and tools that we have proposed to improve the trustworthiness of autonomous robotic systems.

Resilience Talk 36 - Monday 4 December 2023,  15:00-16:00 GMT

Can AI ever be safe?

Dr. Colin Paterson, University of York


This question is increasingly being posed by the public and by those working in safety-critical contexts, where AI has been proposed as a solution to problems observed in human-intensive and resource-constrained services.


In this presentation, Dr. Colin Paterson will explore what it means for a system to be safe and the challenges that modern AI poses for system safety, before presenting a framework for assuring the safety of ML components deployed in autonomous systems.

Resilience Talk 35 - Monday 20 November 2023,  15:00-16:00 GMT

Specification and Verification of Social, Legal, Ethical, Empathetic and Cultural (SLEEC) Requirements for Resilient Autonomous Systems

Dr Sinem Getir Yaman, University of York


Autonomous systems are increasingly being proposed for use in healthcare, assistive care, autonomous driving, and other application domains governed by complex human-centric norms. To ensure compliance with these norms, the rules they induce for an application under development need to be unambiguously defined, checked for consistency, and used to verify the autonomous agent delivering that application.  In this talk, I will address this need by introducing a framework for the formal specification and verification of social, legal, ethical, empathetic and cultural (SLEEC) requirements for autonomous agents. Our framework comprises: (i) a language for specifying SLEEC requirements as rules and rule defeaters; (ii) a formal semantics (defined in the process algebra tock-CSP) for the language; and (iii) methods for detecting conflicts and redundancy within a set of SLEEC rules, and for verifying the compliance of an autonomous agent with such rules. We show the applicability of our SLEEC-rule specification, validation and verification framework for two autonomous agents from different application domains: a firefighter unmanned aerial vehicle, and an assistive-care robot.

Resilience Talk 34 - Monday 6 November 2023,  15:00-16:00 GMT

The Uncertainty Interaction Problem in Adaptive Systems

Dr Javier Camara Moreno, University of Malaga


The problem of mitigating uncertainty in self-adaptation has driven much of the research proposed in the area of software engineering for self-adaptive systems in the last decade. Although many solutions have already been proposed, most of them tend to tackle specific types, sources, and dimensions of uncertainty (e.g., in goals, resources, adaptation functions) in isolation.  However, different uncertainties are rarely independent and often compound, affecting the satisfaction of goals and other system properties in subtle and often unpredictable ways. Hence, there is still limited understanding about the specific ways in which uncertainties from various sources interact and ultimately affect the properties of self-adaptive, software-intensive systems. In this talk, I will introduce the Uncertainty Interaction Problem (UIP) as a way to better qualify the scope of the challenges with respect to representing different types of uncertainty while capturing their interaction in models employed to reason about self-adaptation. 

Resilience Talk 33 - Monday 19 June 2023,  15:00-16:00 BST

Evaluating Safety-Critical Systems: A (Conservative) Bayesian's View

Dr. Xingyu Zhao, University of Warwick


In this talk, I will introduce a part of my research from the last 10 years (since the start of my PhD in 2013) on Bayesian techniques that I developed for evaluating safety-critical systems (SCSs).


During my PhD studies, I was trained to be a faithful Bayesian statistician, and since then, I have been applying Bayesian statistical inference techniques to assess SCSs. This includes systems such as nuclear power plant protection systems, autonomous vehicles, robots, and their AI/ML components. For example, I have built statistical models to handle ultra-high reliability claims based on operational testing data and developed Bayesian estimators for more robust runtime verification of robots.


Although the Bayesian approach is not a silver bullet and often faces criticism, such as the requirement for prior knowledge, reliance on conjugacy, and assumptions of independent and identically distributed (i.i.d.) trials, we have endeavoured to address these fundamental problems with new ideas that relax those constraints at the price of being conservative (which is not necessarily bad for SCSs).

Resilience Talk 32 - Monday 22 May 2023,  15:00-16:00 BST

A Human Factors Approach to Resilience in Automated Systems

Katie Parnell, University of Southampton


This talk will present the work conducted by the team at the University of Southampton as part of the REASON project. It will cover the Human Factors toolkit that has been developed and applied to a transport scenario involving the interaction between autonomous vehicles and cyclists on the road, as well as the application of these methods to an additional REASON case study: a dressing-robot scenario, where interdisciplinary work has sought to integrate Human Factors methods with computer science approaches. Ongoing and future work will also be covered, including user data collection methods.

Resilience Talk 31 - Monday 24 April 2023, 15:00-16:00 BST

Quantum choice models: A flexible approach for understanding moral and normative decision-making

Thomas Hancock, University of Leeds


There has been increasing effort to improve the behavioural realism of mathematical models of choice, resulting in moves away from standard random utility maximisation (RUM) models. Quantum probability, first developed in theoretical physics, has recently been successfully used in cognitive psychology to model data from experiments that previously resisted effective modelling by classical methods. This has led to the development of choice models based on quantum probability, which have greater flexibility than standard models due to the implementation of, for example, complex phases or ‘quantum rotations’. We test whether these new models can also capture choice modification under explicit or implicit ‘changing perspectives’ in choice contexts with salient moral attributes. We apply these models to three distinctly different stated preference case studies, finding that the additional flexibility allows the models to accurately capture and formally explain choices across the differing contexts.

Resilience Talk 30 - Monday 13 March 2023, 15:00-16:00 GMT

Developing Trustworthy Autonomous Systems through Understanding and Mitigating Uncertainties

Xinwei Fang, University of York


Autonomous systems must operate efficiently and resiliently in environments where changes are commonly observed. To meet this requirement, they must continuously monitor their surroundings using onboard sensors, analyse the data they gather, and make decisions on their actions. Uncertainties that originate at the beginning of this process can propagate and may have a significant impact on decision-making. In this talk, I will present my findings on the sources of uncertainty in data collection and how they can propagate within the system. I will also share our recent work to reduce these uncertainties, and finally discuss open challenges and future research directions for developing trustworthy autonomous systems.

Resilience Talk 29 - Monday 13 February 2023, 15:00-16:00 GMT

Architecting Safer Autonomous Aviation Systems

Jane Fenn, BAE Systems


The aviation literature gives relatively little guidance to practitioners about the specifics of architecting systems for safety, particularly the impact of architecture on allocating safety requirements, or the relative ease of system assurance resulting from system or subsystem level architectural choices. As an exemplar, this paper considers common architectural patterns used within traditional aviation systems and explores their safety and safety assurance implications when applied in the context of integrating artificial intelligence (AI) and machine learning (ML) based functionality. Considering safety as an architectural property, we discuss both the allocation of safety requirements and the architectural trade-offs involved early in the design lifecycle. This approach could be extended to other assured properties, similar to safety, such as security. We conclude with a discussion of the safety considerations that emerge in the context of candidate architectural patterns that have been proposed in the recent literature for enabling autonomy capabilities by integrating AI and ML. A recommendation is made for the generation of a property-driven architectural pattern catalogue. The seminar is based on the research paper "Architecting Safer Autonomous Aviation Systems" (arXiv:2301.08138).

Resilience Talk 28 - Monday 30 January 2023, 15:00-16:00 GMT

Piecemeal knowledge acquisition for computational normative reasoning

Ilaria Canavotto, University of Maryland, US


We present a hybrid approach to knowledge acquisition and representation for machine ethics or, more generally, computational normative reasoning. Building on recent research in artificial intelligence and law, our approach is modeled on the familiar practice of decision-making under precedential constraint in the common law. We first provide a formal characterization of this practice, showing how a body of normative information can be constructed in a way that is piecemeal, distributed, and responsive to particular circumstances. We then discuss two possible applications: first, a robot childminder, and second, moral judgment in a bioethical domain.

Resilience Talk 27 - Monday 16 January 2023, 15:00-16:00 GMT

Towards a Conceptual Characterisation of Antifragile Systems

Raffaela Mirandola, Politecnico di Milano, Italy


Antifragility has recently emerged as a design principle for the realisation of systems that remain trustworthy despite the occurrence of changes during their operations. In this work, we intend to support the vision that an effective application of this principle requires a clear  understanding of the implications of its adoption and of its relationships with other approaches sharing a similar objective. To this end, we argue that a proper conceptual characterisation of antifragility can be achieved through its inclusion within the consolidated dependability taxonomy, which was proposed in the recent past with the goal of providing a reference framework to reason about the different facets of the general concern of designing dependable systems.

Resilience Talk 26 - Monday 5 December 2022, 15:00-16:00 GMT

Why Should Robots Trust Humans?

Prof Chris Baber, Birmingham University, UK


Trust is often seen as dispositional. That is, trust is either an emotional state in the 'trustee' or a perception (by the trustee) of the intentions of the 'trustor'. Often this would be measured through self-report questionnaires. This makes it tricky for robots to be considered trustees, both because they might not be able to answer the questionnaires and because it suggests that trust requires a theory of mind. Assuming that a robot might find it difficult to form a theory of mind of its human teammates, there is a question of what it might mean for a robot to have 'trust'. I approach this through the proposal that trust should be considered as transactional rather than dispositional. A transactional model of trust would be computationally tractable and could potentially be applied to human and robot teammates. I propose that trust consists of three aspects - capability, predictability and integrity - with which team members can evaluate the activity of their teammates. Using a simple maze-searching task and models derived from the Prisoner's Dilemma, I consider the circumstances under which it makes sense for a robot to trust (or not trust) its human teammates.

Resilience Talk 25 - Monday 21 November 2022, 15:00-16:00 GMT

From Pluralistic Normative Principles to Autonomous-Agent Rules

Dr Bev Townsend, University of York, UK


With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of autonomous agents requires the refinement of normative principles into explicitly formulated practical rules.


This paper develops a process for deriving specification rules from a set of high-level norms, thereby bridging the gap between normative principles and operational practice. This enables autonomous agents to select and execute the most normatively favourable action in the intended context, premised on a range of underlying relevant normative principles. In the translation and reduction of normative principles to SLEEC rules, we present an iterative process that uncovers normative principles, addresses SLEEC concerns, identifies and resolves SLEEC conflicts, and generates both preliminary and complex normatively-relevant rules, thereby guiding the development of autonomous agents and better positioning them as normatively SLEEC-sensitive or SLEEC-compliant.

Resilience Talk 24 - Monday 7 November 2022, 15:00-16:00 GMT

Verification and analysis of automotive system perception components

Lina Marsso, University of Toronto, Canada


Autonomous driving technology is safety-critical and thus requires thorough validation. In particular, the probabilistic algorithms and machine vision components (MVCs) employed in the perception systems of autonomous vehicles (AVs) are notoriously hard to validate due to the wide range of possible critical behavioural scenarios and safety-critical changes in the environment. Such critical behavioural scenarios cannot easily be addressed with current manual validation methods, so there is a need for an automatic and formal validation technique. To this end, we propose a new approach for perception component verification that, given a high-level and human-interpretable description of a critical situation, generates relevant AV scenarios and uses them for automatic verification.

19th International Conference on Software Engineering and Formal Methods (SEFM 2021) - 6 to 10 December 2021

SEFM 2021 was jointly organised by Carnegie Mellon University (US), Nazarbayev University (Kazakhstan) and the University of York (UK). It aimed to bring together researchers and practitioners from academia, industry and government to advance the state of the art in formal methods, to facilitate their uptake in the software industry, and to encourage their integration within practical software engineering methods and tools.

The SEFM main conference proceedings are published in the Formal Methods subline of Springer's Lecture Notes in Computer Science, and can be accessed at this link.

UKRI Trustworthy Autonomous Systems in Health and Social Care Workshop

Find out more about the workshop