Use our registration form to request the joining instructions for upcoming seminars.
Please ensure you register by 12:00 GMT on the day of the seminar.
Resilience Talk 33 - Monday 19 June 2023, 15:00-16:00 BST
Xingyu Zhao, University of Liverpool
Upcoming TAS Hub Events
First International Symposium on Trustworthy Autonomous Systems - 11-12 July 2023
Heriot-Watt University, Edinburgh
Discounted tickets are available until 7 June 2023; after that date, only full-priced tickets will be available.
TAS Hub Doctoral Training Network Seminars
Find out more about joining the DTN seminars
TAS Node in Governance and Regulation Seminars
TAS Node in Security Seminars
Recent Past Events
Resilience Talk 32 - Monday 22 May 2023, 15:00-16:00 BST
A Human Factors Approach to Resilience in Automated Systems
Katie Parnell, University of Southampton
This talk presents work conducted by the team at the University of Southampton as part of the REASON project. It covers the Human Factors toolkit that has been developed and applied to a transport scenario involving the interaction between autonomous vehicles and cyclists on the road, as well as the application of these methods to an additional REASON case study, a dressing-robot scenario. Here, the interdisciplinary work that has sought to integrate Human Factors methods with computer science approaches will be presented. Ongoing and future work, including user data collection methods, will also be covered.
Resilience Talk 31 - Monday 24 April 2023, 15:00-16:00 BST
Quantum choice models: A flexible approach for understanding moral and normative decision-making
Thomas Hancock, University of Leeds
There has been an increasing effort to improve the behavioural realism of mathematical models of choice, and with it a move away from standard random utility maximisation (RUM) models. Quantum probability, first developed in theoretical physics, has recently been used successfully in cognitive psychology to model data from experiments that had previously resisted effective modelling by classical methods. This has led to the development of choice models based on quantum probability, which have greater flexibility than standard models thanks to, for example, complex phases or ‘quantum rotations’. We test whether these new models can also capture choice modification under explicit or implicit ‘changing perspectives’ in choice contexts with salient moral attributes. We apply these models to three distinct stated preference case studies, finding that the additional flexibility allows them to accurately capture and formally explain choices across the differing contexts.
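To give a flavour of why a complex phase adds flexibility, the short Python sketch below superposes two ‘perspectives’ over a binary choice and shows the choice probability shifting with the relative phase. The two-perspective setup and all numbers are invented for illustration; this is not the specification of the models presented in the talk.

```python
import numpy as np

def quantum_choice_prob(p1, p2, w1, phi):
    """P(choose A) when two perspectives are superposed with relative phase phi.

    p1, p2: classical probabilities of choosing A under each perspective;
    w1: weight on perspective 1; phi: relative phase (a 'quantum rotation').
    """
    # Perspective states written in the {choose A, choose B} basis.
    psi1 = np.array([np.sqrt(p1), np.sqrt(1 - p1)])
    psi2 = np.array([np.sqrt(p2), np.sqrt(1 - p2)])
    # Superpose the perspectives, normalise, and read off the A amplitude.
    psi = np.sqrt(w1) * psi1 + np.sqrt(1 - w1) * np.exp(1j * phi) * psi2
    psi = psi / np.linalg.norm(psi)
    return np.abs(psi[0]) ** 2

for phi in (0.0, np.pi / 2, np.pi):
    print(f"phi={phi:4.2f}: P(A)={quantum_choice_prob(0.8, 0.3, 0.5, phi):.3f}")
# A classical 50/50 mixture of the two perspectives would give P(A) = 0.55
# regardless of phi; here the interference term moves P(A) above or below it.
```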
Resilience Talk 30 - Monday 13 March 2023, 15:00-16:00 GMT
Developing Trustworthy Autonomous Systems through Understanding and Mitigating Uncertainties
Xinwei Fang, University of York
Autonomous systems must operate efficiently and resiliently in environments where changes are commonly observed. To meet this requirement, they must continuously monitor their surroundings using onboard sensors, analyse the data they gather, and make decisions on their actions. Uncertainties that originate at the beginning of this process can propagate and may have a significant impact on decision-making. In this talk, I will present my findings on the sources of uncertainty in data collection and how they can propagate within the system. I will also share our recent work to reduce these uncertainties, and finally discuss open challenges and future research directions for developing trustworthy autonomous systems.
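As a concrete illustration of how sensor-level uncertainty surfaces in decision-making, the toy Monte Carlo sketch below (hypothetical numbers, not the speaker's experiments) propagates Gaussian range-sensor noise through a simple braking rule:

```python
import numpy as np

rng = np.random.default_rng(0)

true_distance = 2.2      # metres to an obstacle (hypothetical scenario)
sensor_sigma = 0.4       # assumed std. dev. of the range sensor's readings
brake_threshold = 2.0    # decision rule: brake if measured distance < 2 m

# Propagate the sensing uncertainty through the decision rule by simulation.
readings = rng.normal(true_distance, sensor_sigma, size=100_000)
p_brake = (readings < brake_threshold).mean()
print(f"P(unnecessary braking) = {p_brake:.3f}")
# ~0.31: even though the true distance is safely above the threshold, noise
# that originates in data collection propagates into a decision-level error rate.
```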
Resilience Talk 29 - Monday 13 February 2023, 15:00-16:00 GMT
Architecting Safer Autonomous Aviation Systems
Jane Fenn, BAE Systems
The aviation literature gives relatively little guidance to practitioners about the specifics of architecting systems for safety, particularly the impact of architecture on allocating safety requirements, or the relative ease of system assurance resulting from system or subsystem level architectural choices. As an exemplar, this paper considers common architectural patterns used within traditional aviation systems and explores their safety and safety assurance implications when applied in the context of integrating artificial intelligence (AI) and machine learning (ML) based functionality. Considering safety as an architectural property, we discuss both the allocation of safety requirements and the architectural trade-offs involved early in the design lifecycle. This approach could be extended to other assured properties, similar to safety, such as security. We conclude with a discussion of the safety considerations that emerge in the context of candidate architectural patterns that have been proposed in the recent literature for enabling autonomy capabilities by integrating AI and ML. A recommendation is made for the generation of a property-driven architectural pattern catalogue. The seminar is based on the research paper [2301.08138] Architecting Safer Autonomous Aviation Systems (arxiv.org).
Resilience Talk 28 - Monday 30 January 2023, 15:00-16:00 GMT
Piecemeal knowledge acquisition for computational normative reasoning
Ilaria Canavotto, University of Maryland, US
We present a hybrid approach to knowledge acquisition and representation for machine ethics or, more generally, computational normative reasoning. Building on recent research in artificial intelligence and law, our approach is modeled on the familiar practice of decision-making under precedential constraint in the common law. We first provide a formal characterization of this practice, showing how a body of normative information can be constructed in a way that is piecemeal, distributed, and responsive to particular circumstances. We then discuss two possible applications: first, a robot childminder, and second, moral judgment in a bioethical domain.
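A minimal sketch of what precedential constraint can look like in a factor-based setting is given below; the factor names are invented, and the encoding loosely follows the reason-model tradition in AI and law rather than the authors' own formal characterization.

```python
def forces(precedent, new_case, side):
    """True if a precedent decided for `side` constrains the new case to `side`.

    A case maps 'p'/'d' to the sets of factors favouring the plaintiff and
    the defendant. The new case is at least as strong for `side` if it has
    every pro-`side` factor of the precedent and no extra factors for the
    other side (an a fortiori case).
    """
    other = 'd' if side == 'p' else 'p'
    return precedent[side] <= new_case[side] and new_case[other] <= precedent[other]

# A precedent decided for the plaintiff ('p') despite one defendant factor...
precedent = {'p': {'consent_given'}, 'd': {'risk_disclosed'}}
# ...forces the outcome in a new case that is strictly stronger for 'p'.
new_case = {'p': {'consent_given', 'benefit_shown'}, 'd': set()}
print(forces(precedent, new_case, 'p'))   # True
```

Because each precedent only constrains cases it dominates, normative information can be added case by case, which is the piecemeal, distributed construction the abstract describes.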
Resilience Talk 27 - Monday 16 January 2023, 15:00-16:00 GMT
Towards a Conceptual Characterisation of Antifragile Systems
Raffaela Mirandola, Politecnico di Milano, Italy
Antifragility has recently emerged as a design principle for the realisation of systems that remain trustworthy despite the occurrence of changes during their operations. In this work, we intend to support the vision that an effective application of this principle requires a clear understanding of the implications of its adoption and of its relationships with other approaches sharing a similar objective. To this end, we argue that a proper conceptual characterisation of antifragility can be achieved through its inclusion within the consolidated dependability taxonomy, which was proposed in the recent past with the goal of providing a reference framework to reason about the different facets of the general concern of designing dependable systems.
Resilience Talk 26 - Monday 5 December 2022, 15:00-16:00 GMT
Why Should Robots Trust Humans?
Prof Chris Baber, University of Birmingham, UK
Trust is often seen as dispositional. That is, trust is either an emotional state in the 'trustor' or a perception (by the trustor) of the intentions of the 'trustee'. Typically, this is measured through self-report questionnaires. This makes it tricky for robots to be treated as trustors, both because they might not be able to answer the questionnaires and because it suggests that trust requires a theory of mind. Assuming that a robot might find it difficult to form a theory of mind of its human teammates, there is a question of what it might mean for a robot to have 'trust'. I approach this through the proposal that trust should be considered transactional rather than dispositional. A transactional model of trust would be computationally tractable and could potentially be applied to both human and robot teammates. I propose that trust consists of three aspects - capability, predictability and integrity - with which team members can evaluate the activity of their teammates. Using a simple maze-searching task and models derived from the Prisoner's Dilemma, I consider the circumstances under which it makes sense for a robot to trust (or not trust) its human teammates.
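One way to picture a transactional account is a running ledger over the three aspects, updated from observed transactions; the sketch below is an invented illustration of that idea (the update rule, threshold, and numbers are assumptions, not Prof Baber's model).

```python
from dataclasses import dataclass, field

@dataclass
class TrustLedger:
    alpha: float = 0.2       # assumed learning rate for score updates
    scores: dict = field(default_factory=lambda: {
        'capability': 0.5, 'predictability': 0.5, 'integrity': 0.5})

    def record(self, aspect: str, outcome: float) -> None:
        """Fold one transaction outcome in [0, 1] into the aspect's score."""
        self.scores[aspect] = (1 - self.alpha) * self.scores[aspect] + self.alpha * outcome

    def should_rely(self, threshold: float = 0.6) -> bool:
        # The weakest aspect gates reliance: high capability cannot
        # compensate for low integrity.
        return min(self.scores.values()) >= threshold

ledger = TrustLedger()
for _ in range(10):                       # ten observed transactions
    ledger.record('capability', 0.9)      # teammate succeeds at the search task
    ledger.record('predictability', 0.8)  # ...behaves consistently
    ledger.record('integrity', 0.7)       # ...and reports honestly
print(ledger.should_rely())               # True once all three scores clear 0.6
```

Because the ledger needs only observable outcomes, no theory of mind is required, which is what makes the transactional view computationally tractable for a robot.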
Resilience Talk 25 - Monday 21 November 2022, 15:00-16:00 GMT
From Pluralistic Normative Principles to Autonomous-Agent Rules
Dr Bev Townsend, University of York, UK
With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of autonomous agents requires the refinement of normative principles into explicitly formulated practical rules.
This paper develops a process for deriving specification rules from a set of high-level norms, thereby bridging the gap between normative principles and operational practice. This enables autonomous agents to select and execute the most normatively favourable action in the intended context premised on a range of underlying relevant normative principles. In the translation and reduction of normative principles to SLEEC rules, we present an iterative process that uncovers normative principles, addresses SLEEC concerns, identifies and resolves SLEEC conflicts, and generates both preliminary and complex normatively-relevant rules, thereby guiding the development of autonomous agents and better positioning them as normatively SLEEC-sensitive or SLEEC-compliant.
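For illustration only, the sketch below shows the general shape of an event-triggered rule with defeaters that such a refinement process might produce; the rule content and class names are invented here, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SleecRule:
    when: str                                     # triggering event
    then: str                                     # obligated response
    unless: list = field(default_factory=list)    # defeating conditions

    def applies(self, event: str, facts: set) -> bool:
        # Fire only when triggered and no defeater holds in the current state.
        return event == self.when and not any(d in facts for d in self.unless)

rule = SleecRule(when='UserRequestsCurtainsOpen', then='OpenCurtains',
                 unless=['UserUndressed', 'PrivacyRequested'])

facts = {'UserUndressed'}                         # current state of the world
if rule.applies('UserRequestsCurtainsOpen', facts):
    print('do:', rule.then)
else:
    print('rule defeated: fall back to a normatively safer action')
```

The defeater list is where resolved SLEEC conflicts end up: each `unless` entry records a circumstance in which another norm (here, privacy) overrides the triggering request.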
Resilience Talk 24 - Monday 7 November 2022, 15:00-16:00 GMT
Verification and analysis of automotive system perception components
Lina Marsso, University of Toronto, Canada
Autonomous driving technology is safety-critical and thus requires thorough validation. In particular, the probabilistic algorithms and machine vision components (MVCs) employed in the perception systems of autonomous vehicles (AVs) are notoriously hard to validate, due to the wide range of possible critical behavioural scenarios and safety-critical changes in the environment. Such critical behavioural scenarios cannot be easily addressed with current manual validation methods, so there is a need for an automatic and formal validation technique. To this end, we propose a new approach to perception component verification that, given a high-level and human-interpretable description of a critical situation, generates relevant AV scenarios and uses them for automatic verification.
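The sketch below illustrates the general shape of such a pipeline: a high-level description is expanded into concrete scenario variants, each checked against a (stubbed) perception component. All names, parameters, and the detection stub are invented for illustration, not the authors' tooling.

```python
import itertools

description = {                      # high-level, human-interpretable description
    'actor': 'pedestrian',
    'occlusion': [0.0, 0.3, 0.6],    # fraction of the actor that is hidden
    'distance_m': [10, 25, 50],
    'light': ['day', 'dusk', 'night'],
}

def generate_scenarios(desc):
    """Expand the description into concrete scenario variants."""
    for occ, dist, light in itertools.product(
            desc['occlusion'], desc['distance_m'], desc['light']):
        yield {'actor': desc['actor'], 'occlusion': occ,
               'distance_m': dist, 'light': light}

def perception_detects(scenario):
    # Stand-in for running the real machine vision component in simulation.
    return scenario['occlusion'] < 0.6 or scenario['light'] == 'day'

failures = [s for s in generate_scenarios(description) if not perception_detects(s)]
print(f"{len(failures)} failing scenarios, e.g. {failures[0]}")
```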
19th International Conference on Software Engineering and Formal Methods - 6-10 December 2021
SEFM 2021 was jointly organised by Carnegie Mellon University (US), Nazarbayev University (Kazakhstan) and University of York (UK) and aimed to bring together researchers and practitioners from academia, industry and government, to advance the state of the art in formal methods, to facilitate their uptake in the software industry, and to encourage their integration within practical software engineering methods and tools.
The SEFM main conference proceedings are published in the Formal Methods subline of Springer's Lecture Notes in Computer Science.
UKRI Trustworthy Autonomous Systems in Health and Social Care Workshop