UKRI Trustworthy Autonomous Systems in Health and Social Care Workshop - Detailed Programme
Tuesday 7th June 2022
12:00-13:00 Participant arrival, registration and buffet lunch, Computer Science Building
13:00-14:00 Keynote (Chair: Ibrahim Habli), Ron Cooke Hub
Engineering NHS Resilience in the Pandemic First Wave, Tom Lawton – Bradford Teaching Hospitals NHS Foundation Trust
Abstract - The demands placed on the NHS by the COVID-19 pandemic at the start of 2020 were unprecedented in the life of the NHS, but the response showed the best of what can be achieved when people are united by a single purpose. Using the real-life example of Bradford Royal Infirmary, this talk will discuss human resilience and the engineering response, and consider how lessons from the pandemic may be relevant to healthcare Artificial Intelligence.
Tom Lawton is a Consultant in Critical Care and Anaesthesia and Head of Clinical Artificial Intelligence at Bradford Teaching Hospitals NHS Foundation Trust, and Director of Clinical Analytics at the Improvement Academy in the Bradford Institute for Health Research. He is a former computer programmer and current NHS-R fellow, and an affiliated member of the Yorkshire and Humber Patient Safety Translational Research Centre. As a Visiting Fellow of the Assuring Autonomy International Programme he is actively involved in the responsible introduction of AI into healthcare. He was awarded an MBE in 2020 for his work on COVID-19.
14:00-15:00 Session 1: AI Solutions for Emergency Care and Resilience of Network Services for Care (Chair: Victoria Hodge), Ron Cooke Hub
14:00-14:30 AI-Assisted A&E Triage, Tunde Ashaolu – York and Scarborough NHS Trust, Ibrahim Habli and Billy Lyons – University of York
Abstract - Much of the early care pathway when patients present at emergency departments worldwide consists of various stages of data collection, prioritisation, and additional information requests. One of the primary concerns in triage is that a patient may be triaged inappropriately and develop additional complications before receiving approved treatment from the treating clinician. Automating the collection and initial processing stages with an expert system ensures that the agent remains compliant with the strict medical and legal requirements that define the provision of healthcare interventions. By automating the collection and processing of subjective and objective data, we hope to relieve clinical staff of the heavy data burden while providing explainable and interpretable diagnosis and treatment suggestions. In this talk, we will present the preliminary results of 'Diagnostic AI System for Robot-Assisted A&E Triage' (DAISY), a recently started UKRI Trustworthy Autonomous Systems project.
14:30-15:00 Network Service Resilience and Its Role in Autonomous Healthcare/Social Care Systems Reliability, Poonam Yadav – University of York
Abstract - Resilience and reliability are two fundamental requirements of Autonomous Systems (ASs). For different applications, these requirements vary from non-critical (best-effort delivery) to safety-critical with time-bounded guarantees. The network connectivity of the edge devices in AS deployments remains the central critical component that needs to meet the time-bounded Quality of Service (QoS) and fault-tolerance guarantees of the applications running on the network infrastructure. To meet these requirements, we need to investigate a fundamental question: how can applications' mixed-criticality QoS requirements be met using state-of-the-art communication technologies? In this presentation, I will describe recent work in this direction.
15:00-15:30 Coffee break, Computer Science Building
15:30-17:30 Session 2: Trustworthy Autonomous Systems for Mental Healthcare and Disability Support (Chair: Mohammad Mousavi), Ron Cooke Hub
15:30-16:00 Use of Robots for the Rehabilitation and Education of Autistic Children and Children with Learning Difficulties, Maria Jose Galvez Trigo – University of Lincoln
16:00-16:30 Reimagining Trustworthy Autonomous Systems with Disabled Young People, Lauren White – University of Sheffield, and Harry Gordon – Greenacre School
Abstract - Trustworthy Autonomous Systems (TAS) in assistive contexts offer the promise of revolutionising the lives of disabled young people in their personal lives, education and employment. However, despite such promissory technology and anticipated futures, disabled young people are frequently excluded from such spaces and, significantly for this project, from shaping the conversation. Our research project seeks to confront this not only by thinking about the promises of assistive and autonomous technologies with disability in mind, but by centralising disabled young people as those who can lead the way in productive and disruptive conversations on TAS as a whole. Our project brings together a team of social scientists, community partners, computer scientists and engineers. Central to our collaborative and interdisciplinary project is co-production, with disabled young people leading, shaping and driving the research agenda on TAS. In this presentation, we will outline the plans for our project workshops, where we will tackle key questions of equality, diversity and inclusion as they relate to TAS, together. We will detail our aspirations for the project and beyond, and what we believe will contribute to understanding TAS in health and social contexts.
16:30-17:00 Kaspar Explains: assessing added value of explanation in interaction with assistive technology, Marina Sarda Gou – University of Hertfordshire
Abstract - Children with Autism Spectrum Disorder (ASD) often struggle with their Visual Perspective Taking (VPT) skills, which relate to the ability to see the world from another person's perspective, taking into account what they see and how they see it. One method that could help develop these skills is introducing causal explanations into social interactions. By using social robots as tools, caregivers (e.g. therapists, teachers, parents) can build on the interest and attraction children with autism display towards the robots and use the robots as mediators, tailoring the interaction to the specific needs of the children at any given time. With this in mind, we designed several games for autistic children to play with Kaspar, introducing explicit causal explanations related to VPT skills. These explanations are first given by the robot and can then be repeated and reinforced by the caregiver, who can also encourage the child to reconsider their action when needed. By doing this, we expect to see an improvement in the children’s VPT skills: both in the understanding that other people might have a different line of sight to themselves, and in the understanding that two people viewing the same item from different points in space may see different things.
17:00-17:30 Trustworthy Assurance in Digital Mental Healthcare, Christopher Burr and Rosamund Powell – Alan Turing Institute
Abstract - Digital mental healthcare technologies, such as AI chatbots or decision support systems for psychiatry, have become more prominent and widespread since the start of the COVID-19 pandemic. As such, many organisations across the public, private, and third sectors have been grappling with the challenges of deploying these technologies in a trustworthy and responsible manner. In this presentation, we will look at how the methodology of argument-based assurance can be adapted to the context of digital mental healthcare, and discuss what it means to have an assurance case that is oriented towards establishing trust and communicating processes of responsible research and innovation. Although assurance cases have been utilised in safety-critical domains for several decades, their suitability for communicating additional properties or goals that have a more ethical focus remains to be explored. In the context of digital mental healthcare, however, these wider ethical goals are vital for establishing trust and communicating responsibility. To support our presentation we will look at a) findings from several participatory design and engagement workshops we conducted with stakeholders and users of digital mental healthcare, and b) a prototype platform for building trustworthy assurance cases.
17:30-19:30 Reception, Computer Science Building
Wednesday 8th June 2022
09:00-10:00 Keynote (Chair: Radu Calinescu), Ron Cooke Hub
Human-centric data-driven resilience for assistive robots, Sanja Dogramadzi – University of Sheffield
Abstract - A major barrier to the deployment of robotic and autonomous systems (RAS) in the healthcare sector is the assurance of safety in such systems, and the ability to ensure confidence in their design and use. A critical step in making machines safe to engage in physical contact with humans is to endow them with human-like sensing so that they can adapt their response to external stimuli. Autonomous robots cannot be safely adopted in the healthcare domain without significant advances in physical and cognitive adaptation to the ever-changing dynamic environments in which they have to operate. This is of utmost importance in robotic applications that require close physical human-robot interaction, where resilient operation is a precursor for safety. Resilience in this context requires continuous monitoring of the user and environmental states, and using the observations from this monitoring to predict and detect failures and to adapt the robot’s behaviour proactively and efficiently. With users in the loop, communication with the autonomous agent through multi-modal sensing can ensure safe task execution and consequently build trustworthy human-robot interaction.
Sanja Dogramadzi is a Professor of Medical Robotics at the University of Sheffield. She has over 20 years of research experience in surgical and physically assistive robots, safe human-robot interaction and soft robotic structures. She has led numerous EPSRC, Horizon 2020, NIHR and Innovate UK projects as PI. She has expertise in soft robotics, sensing, image-guided control, haptics, teleoperation and safety in close physical human-robot interaction.
10:00-10:30 Session 3: Assistive robotics (Chair: Maria Galvez Trigo), Ron Cooke Hub
10:00-10:30 Connected and Collaborative – designing assistive robots that change the dynamics in health and social care, Praminda Caleb-Solly – University of Nottingham
Abstract - Assistive robots offer the potential to transform people's ability to manage their own health, particularly those with the greatest need and a lack of adequate support. Furthermore, connecting robots with different types of sensors, environmental and physiological, which provide real-time information not only to support self-management but also to enable tracking and diagnosis of healthcare conditions, will affect how care is delivered and how decisions about interventions are made. As such, these disruptive technologies will lead to new models of care, in which the dynamics and relationships between end-users, carers and clinicians will fundamentally change. In her talk Prof Caleb-Solly will explore the challenges and opportunities that these changing dynamics will bring, and their implications for the design, development and deployment of assistive robots.
10:30-11:00 Coffee break, Computer Science Building
11:00-13:00 Demonstrations, posters, guided Institute of Safe Autonomy tours (activities not available to online participants), ISA Building
AI-Assisted A&E Triage (demo), Tunde Ashaolu – York and Scarborough NHS Trust, and Billy Lyons – University of York
Ethical assurance methodology and interactive platform (demo), Christopher Burr and Rosamund Powell – Alan Turing Institute
Scheduling of Missions with Constrained Tasks for Heterogeneous Robot Systems (demo), Gricel Vazquez – University of York
Human emotion understanding with XAI for trustworthy HRI (poster), Chuang Yu – University of Manchester
Towards simulation-based safety validation of assistive robots using assertion checking (poster), Chris Harper – Bristol Robotics Laboratory
Trust and Proxemics in Autonomous Medical Delivery Robots (poster), Charles Fox – University of Lincoln
Certified Reinforcement Learning (poster), Chao Huang – University of Liverpool
12:30-13:30 Buffet lunch, ISA Building
13:30-15:00 Session 4: Explainability and regulation of AI for health and social care (Chair: Beverley Townsend), Ron Cooke Hub
13:30-14:00 Do explanations enhance trust in healthcare applications? Benedicte Legastelois – King's College London
Abstract - In this presentation, we will discuss the impact of using explainable AI on users' trust, compared to AI without explanations. We will start with an overview of different studies published on this subject in the healthcare domain. Then, we will reflect on the issues raised in these studies and argue that existing computer science research in XAI does not address the real trust-related concerns of healthcare professionals. We will, therefore, conclude by proposing a number of new research questions to address these concerns.
14:00-14:30 Certified Reinforcement Learning, Chao Huang – University of Liverpool
Abstract - There has been increasing interest in applying machine learning techniques, especially reinforcement learning, as decision makers in health care systems. Due to the complexity of both the machine learning algorithms and the highly dynamic environments with significant uncertainties and disturbances, it is critical yet challenging to formally ensure the correctness of such learning-enabled health care systems; this challenge hinders the adoption of reinforcement learning in most safety-critical scenarios, e.g., surgery. In this talk, I will introduce our recent work on certified reinforcement learning, i.e., providing safety and stability guarantees for reinforcement learning. First, we proposed safety verification techniques for a trained neural network based on Taylor model abstraction, i.e., design-then-verify. Then we integrated the verification techniques with the learning process, such that the properties can be satisfied automatically by the learning itself, i.e., design-while-verify. We show by experiments that our approaches can significantly outperform the state of the art with regard to safety and stability.
14:30-15:00 Regulating AI in health and care: 3D regulation for 4D technologies, Phoebe Li – University of Sussex
Abstract - Artificial intelligence (AI) has the power to transform health delivery and scale up mass population diagnosis, for example of diabetic retinopathy. However, the deployment of AI systems is still fraught with risks and uncertainties. In the UK, uncertainties in the existing legal instruments governing medical devices have been compounded by the departure from the EU. The presentation will focus on three pillars of regulating AI technologies: market approval and surveillance, liability, and patentability, examining issues around regulating AI in software as a medical device (SaMD), product liability, and ethical patenting.
15:00-15:30 Coffee break, Computer Science Building
15:30-17:00 Session 5: Co-design and deployment of TAS for health and social care (Chair: Poonam Yadav), Ron Cooke Hub
15:30-16:00 COdesigning Trustworthy Autonomous Diabetes Systems (COTADS), Chris Duckworth – University of Southampton
Abstract - Chronic medical conditions, such as type-1 diabetes (T1D), place a significant emotional and physical burden on individuals and their families. People with T1D face a daily balancing act to keep their blood glucose levels within a safe range (i.e., in-range), and severe complications are common if they do not. In COTADS we brought together young adults with T1D and their caregivers, clinicians, and data scientists to develop AI solutions for T1D through codesign.
16:00-16:30 A co-design framework for empowering future care workforces, Cian O'Donovan – University College London
Abstract - As governments invest in post-pandemic digital transformation, ensuring workers are empowered and not excluded by technology is more urgent than ever. We know that care professionals are empowered when they have a diverse set of human capabilities available to them: for instance, being able to undertake their care tasks safely and with empathy, and having the confidence to safely configure technologies for use on wards and in people's homes. In this presentation we will describe an emergent co-design framework for scoping what kinds of human-digital capabilities help a diverse group of nurses, physiotherapists, occupational therapists and unwaged carers perform tasks with greater skill, fluency and proficiency. Using a Capabilities Approach, this framework is intended to: (i) generate insight for configuring, verifying and validating systems to best match the needs of staff in dynamic care contexts; and (ii) inform the development of future capability-building programmes such as continuing professional development courses.
16:30-17:00 Intersectional Approaches to Design and Deployment of Trustworthy Autonomous Systems, Mohammad Naiseh – University of Southampton
Abstract - For Trustworthy Autonomous Systems (TAS) to contribute to the creation of an inclusive, fair and just world, researchers and practitioners need to address intersectional inequalities. Intersectionality is a theory/praxis that uncovers how institutional inequalities shape experiences of discrimination or disadvantage based on how multiple aspects of a person’s identity (gender, ethnicity, disability, and so on) come together at one time and place. Our project researches how to translate and operationalise intersectionality into the design and deployment of TAS. We focus specifically on the healthcare and maritime sectors, where TAS are actively used and considered, but where intersectional inequalities are not always meaningfully addressed. This presentation will outline the research problem, our research methods, and the expected outputs of the project.
17:00 Participant departure