Use our registration form to request the joining instructions for upcoming seminars.

Please ensure you register by 12:00 GMT on the day of the seminar.

_____________________________________


Upcoming Events


Resilience Talk 27 - Monday 16 January 2023, 15:00-16:00 GMT

Title TBC

Raffaela Mirandola, Politecnico di Milano, Italy


Resilience Talk 28 - Monday 30 January 2023, 15:00-16:00 GMT

Title TBC

Ilaria Canavotto, University of Maryland, US


Resilience Talk 29 - Monday 13 February 2023, 15:00-16:00 GMT

Title TBC

Jane Fenn, BAE Systems

_____________________________________



Upcoming TAS Hub Events


TAS Hub Doctoral Training Network Seminars

Find out more about how to join the DTN seminars


TAS Node in Governance and Regulation Seminars

https://governance.tas.ac.uk/seminars-and-events/


TAS Node in Security Seminars

https://www.lancaster.ac.uk/security-lancaster/

_____________________________________



Recent Past Events

Resilience Talk 26 - Monday 5 December 2022, 15:00-16:00 GMT

Why Should Robots Trust Humans?

Prof Chris Baber, Birmingham University, UK


Trust is often seen as dispositional. That is, trust is either an emotional state in the 'trustor' or a perception (by the trustor) of the intentions of the 'trustee', and it is typically measured through self-report questionnaires. This makes it tricky for robots to be considered trustors, both because they might not be able to answer such questionnaires and because it suggests that trust requires a theory of mind. Assuming that a robot might find it difficult to form a theory of mind of its human teammates, there is a question of what it might mean for a robot to have 'trust'. I approach this through the proposal that trust should be considered transactional rather than dispositional. A transactional model of trust would be computationally tractable and could potentially be applied to both human and robot teammates. I propose that trust consists of three aspects - capability, predictability and integrity - with which team members can evaluate the activity of their teammates. Using a simple maze-searching task and models derived from the Prisoner's Dilemma, I consider the circumstances under which it makes sense for a robot to trust (or not trust) its human teammates.
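
As an illustration only (not Prof Baber's implementation), the short Python sketch below shows one way a transactional trust score could be computed from a log of interactions, using the three aspects named in the abstract; the interaction fields, aspect weights and cooperation threshold are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Interaction:
    succeeded: bool        # did the teammate complete the agreed sub-task? (capability)
    as_predicted: bool     # did they act as the robot's model predicted? (predictability)
    kept_commitment: bool  # did they honour what they said they would do? (integrity)

def trust_score(history: list[Interaction],
                weights=(0.4, 0.3, 0.3)) -> float:
    """Return a transactional trust score in [0, 1]; 0.5 if there is no history yet."""
    if not history:
        return 0.5
    n = len(history)
    capability = sum(i.succeeded for i in history) / n
    predictability = sum(i.as_predicted for i in history) / n
    integrity = sum(i.kept_commitment for i in history) / n
    w_c, w_p, w_i = weights
    return w_c * capability + w_p * predictability + w_i * integrity

def should_cooperate(history: list[Interaction], threshold: float = 0.6) -> bool:
    """Prisoner's-Dilemma-style decision: cooperate only while trust is high enough."""
    return trust_score(history) >= threshold

A robot using a model of this kind would cooperate (for example, share search effort in the maze task) only while the observed transactional score stays above a threshold, which is the kind of circumstance-dependent trust decision the talk examines.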

Resilience Talk 25 - Monday 21 November 2022, 15:00-16:00 GMT

From Pluralistic Normative Principles to Autonomous-Agent Rules

Dr Bev Townsend, University of York, UK


With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of autonomous agents requires the refinement of normative principles into explicitly formulated practical rules.


This paper develops a process for deriving specification rules from a set of high-level norms, thereby bridging the gap between normative principles and operational practice. This enables autonomous agents to select and execute the most normatively favourable action in the intended context premised on a range of underlying relevant normative principles. In the translation and reduction of normative principles to SLEEC rules, we present an iterative process that uncovers normative principles, addresses SLEEC concerns, identifies and resolves SLEEC conflicts, and generates both preliminary and complex normatively-relevant rules, thereby guiding the development of autonomous agents and better positioning them as normatively SLEEC-sensitive or SLEEC-compliant.
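
As a rough, hypothetical illustration of turning high-level principles into explicitly formulated practical rules, the Python sketch below encodes a few invented SLEEC-style rules and a naive check for conflicting obligations; the rule structure, predicates and example norms are assumptions for illustration and do not reproduce the paper's notation or process.

from dataclasses import dataclass

@dataclass(frozen=True)
class SleecRule:
    name: str
    condition: str    # situation in which the rule applies
    obligation: str   # action the agent must (or must not) take

RULES = [
    SleecRule("privacy", "user_asleep", "do_not_record_video"),
    SleecRule("safety", "user_fallen", "record_video_for_evidence"),
    SleecRule("empathy", "user_distressed", "lower_voice_and_offer_help"),
]

def applicable(rules, situation: set[str]) -> list[SleecRule]:
    """Select the rules whose condition holds in the current situation."""
    return [r for r in rules if r.condition in situation]

def conflicts(selected: list[SleecRule]) -> list[tuple[SleecRule, SleecRule]]:
    """Flag pairs whose obligations contradict (here: 'do_not_X' versus 'X...')."""
    pairs = []
    for i, a in enumerate(selected):
        for b in selected[i + 1:]:
            if a.obligation.startswith("do_not_") and a.obligation[7:] in b.obligation:
                pairs.append((a, b))
            elif b.obligation.startswith("do_not_") and b.obligation[7:] in a.obligation:
                pairs.append((a, b))
    return pairs

# Example: a fallen user who is also asleep triggers both the privacy and
# safety rules, surfacing a SLEEC conflict that must be resolved by refinement.
active = applicable(RULES, {"user_asleep", "user_fallen"})
print(conflicts(active))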

Resilience Talk 24 - Monday 7 November 2022, 15:00-16:00 GMT

Verification and analysis of automotive system perception components

Lina Marsso, University of Toronto, Canada


Autonomous driving technology is safety-critical and thus requires thorough validation. In particular, the probabilistic algorithms and machine vision components (MVCs) employed in the perception systems of autonomous vehicles (AVs) are notoriously hard to validate due to the wide range of possible critical behavioural scenarios and safety-critical changes in the environment. Such critical behavioural scenarios cannot easily be addressed with current manual validation methods, so there is a need for an automatic and formal validation technique. To this end, we propose a new approach for perception component verification that, given a high-level and human-interpretable description of a critical situation, generates relevant AV scenarios and uses them for automatic verification.
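
To give a flavour of the workflow described above, the Python sketch below expands a hypothetical high-level description of a critical situation into concrete scenario variants and checks a stand-in perception component on each; the scenario fields, parameter ranges and detector stub are assumptions for illustration, not the authors' tool.

import itertools

# Illustrative high-level description of a critical situation; the fields and
# parameter ranges are invented for this example.
CRITICAL_SITUATION = {
    "description": "pedestrian crossing at dusk, partially occluded by a parked van",
    "occlusion": [0.2, 0.5, 0.8],        # fraction of the pedestrian hidden
    "illumination_lux": [5, 20, 80],     # dusk-level lighting
    "pedestrian_speed_mps": [0.8, 1.4],
}

def generate_scenarios(spec):
    """Enumerate concrete scenarios from the high-level description."""
    keys = [k for k in spec if k != "description"]
    for values in itertools.product(*(spec[k] for k in keys)):
        yield dict(zip(keys, values))

def perception_detects_pedestrian(scenario) -> bool:
    """Stand-in for the machine vision component under test."""
    # Placeholder behaviour: detection degrades with occlusion and darkness.
    score = (1 - scenario["occlusion"]) * min(scenario["illumination_lux"] / 80, 1.0)
    return score > 0.3

failures = [s for s in generate_scenarios(CRITICAL_SITUATION)
            if not perception_detects_pedestrian(s)]
print(f"{len(failures)} failing scenarios, e.g. {failures[:1]}")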

19th International Conference on Software Engineering and Formal Methods (SEFM 2021) - 6 to 10 December 2021

SEFM 2021 was jointly organised by Carnegie Mellon University (US), Nazarbayev University (Kazakhstan) and University of York (UK) and aimed to bring together researchers and practitioners from academia, industry and government, to advance the state of the art in formal methods, to facilitate their uptake in the software industry, and to encourage their integration within practical software engineering methods and tools.

The SEFM main conference proceedings are published in the Formal Methods subline of Springer's Lecture Notes in Computer Science, and can be accessed at this link.

UKRI Trustworthy Autonomous Systems in Health and Social Care Workshop

Find out more about the workshop