Here are some examples of topics on which I welcome inquiries from prospective research students.
My interests cover assessing how reliable, safe and secure systems are, and how to assess and improve the methods for making them more so, especially through redundancy and diversity.
The systems in question range in size from small pieces of software to whole socio-technical systems, such as air traffic control systems or hospitals.
I am interested in helping the developers, the assessors and the regulators who have to accept or reject critical systems.
You can contact me at: L.Strigini@csr.city.ac.uk


Studies in automation bias

When computers advise people in their decisions, paradoxical "over-reliance" situations may arise in which the computer's help makes people more, rather than less, likely to make mistakes. Thesis projects are available to study, by experiment and by mathematical modelling, how these situations arise, so that the designers of systems ranging from medical diagnosis aids to collision-alert systems for vehicles can forecast and avoid creating them. This is an interdisciplinary topic straddling computing and psychology. To learn more about the open research questions, you may be interested in this paper, and in our previous work on a medical application.
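
For a taste of the kind of mathematical modelling involved, here is a minimal sketch in Python. All the probabilities (the tool's error rate, the human's unaided error rate, the chance that the human follows wrong advice) are invented for illustration; they are not taken from any study.

    # Minimal sketch of "over-reliance": a human aided by an advisory tool
    # can make MORE mistakes than when working unaided, if automation bias
    # leads them to follow the tool's advice uncritically.
    # All numbers are invented for illustration.

    p_tool_wrong = 0.12             # P(the tool's advice is wrong)
    p_human_err_unaided = 0.10      # human error rate without the tool
    p_follow_wrong_advice = 0.90    # P(human accepts advice | advice is wrong)
    p_err_given_good_advice = 0.01  # residual human error when advice is right

    p_err_aided = (p_tool_wrong * p_follow_wrong_advice
                   + (1 - p_tool_wrong) * p_err_given_good_advice)

    print(f"unaided error rate: {p_human_err_unaided:.3f}")  # 0.100
    print(f"aided error rate:   {p_err_aided:.3f}")          # 0.117

With these illustrative numbers the aided team is worse than the unaided human, even though the tool errs only 12% of the time; whether such a paradox arises in a real system is just what experiments and models of this kind aim to predict.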


Measuring resilience

When computer-based systems have critical functions, it is natural to test how well they would resist internal faults, accidental misuse or deliberate attacks, by artificially causing these kinds of stress. Many techniques in current use in dependability and security apply this approach: for instance, fault injection, robustness testing, fuzzing, red teaming, penetration testing, mutation testing and dependability benchmarking. But how much reassurance should one draw from these tests being passed? How difficult is it to create the "right" stresses? These questions require a mix of mathematical and empirical work. If interested in these issues, you may wish to look at discussions of open research questions, for instance in this paper about software robustness, this paper about the resilience of complex systems, or this keynote talk. Theses in this area can aim to assess by experiment the real predictive accuracy of these forms of measurement in specific applications (e.g. software security or robustness to error), or to establish by formal mathematical treatment what conclusions about a specific system can be drawn from this kind of measurement.
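
To illustrate the first question, how much reassurance a clean test run gives, here is a minimal sketch in Python of the standard statistical argument. It rests on strong assumptions, flagged in the comments, that real stress campaigns rarely satisfy.

    # Minimal sketch: an upper confidence bound on the per-stress failure
    # probability p, after n stresses have been applied with no failure.
    # It assumes the stresses are mutually independent and statistically
    # representative of those the system will meet in operation; that
    # representativeness is usually the hardest assumption to justify.

    def failure_prob_bound(n_passed: int, confidence: float) -> float:
        # From requiring (1 - p) ** n_passed <= 1 - confidence
        return 1 - (1 - confidence) ** (1 / n_passed)

    for n in (100, 1_000, 10_000):
        print(n, failure_prob_bound(n, 0.99))
    # 100    -> p < ~0.045
    # 1000   -> p < ~0.0046
    # 10000  -> p < ~0.00046

Even ten thousand failure-free stresses support only a modest bound at 99% confidence, and only under those assumptions; this is why the topic needs both its mathematical and its empirical side.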


Combination of formal proof with statistical testing of software

Formal verification and statistical testing are two approaches to achieving confidence that software will perform as required. However, for most complex systems neither method alone is sufficient. Confidence from proofs is limited by the complexity of real-life requirements, of implementations, and of the proofs themselves; confidence from statistical testing depends on assumptions about the system being tested and about the test process, which must themselves be verified. At the Centre for Software Reliability we have contributed to the state of the art in both these areas and have experience of their limits in concrete applications with critical software. We are keen to supervise theses that will make progress in combining the two categories of methods so that they support each other.
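
One way of seeing how the two kinds of evidence can divide the work (a sketch along the lines of "probability of perfection" arguments in the literature, not a statement of our specific results): let "perfect" denote the event that the software can never fail in the respects covered by the proved property. Then, in LaTeX notation,

    \Pr(\mathrm{fail})
      = \Pr(\mathrm{fail} \mid \mathrm{perfect}) \Pr(\mathrm{perfect})
        + \Pr(\mathrm{fail} \mid \neg\mathrm{perfect}) \Pr(\neg\mathrm{perfect})
      = \Pr(\mathrm{fail} \mid \neg\mathrm{perfect}) \Pr(\neg\mathrm{perfect})

since the first term vanishes. Proof evidence argues that Pr(not perfect) is small (allowing for doubts about the formalisation of the requirements, the proof tools and the proof itself), while statistical testing supports a bound on Pr(fail | not perfect): each kind of evidence addresses a factor that the other cannot reach on its own.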


Effectiveness of alerting systems for security

There is a booming business in developing advanced computer-based systems that help security and police forces to recognise criminals or dangerous situations and prompt intervention. These systems automatically analyse inputs from security cameras or other sensors and alert staff to consider reacting. Systems designed on similar principles alert computer system administrators to cyber intrusions. Designing these systems presents fascinating technical challenges, but the trickiest ones may be the socio-technical challenges of how the users react to these systems. What is the risk of the computer causing false arrests, or massive defence operations, through false alarms? Or of it flagging a real danger but being ignored? These risks depend not only on how "clever" the computer is, but also on how well it is matched to the abilities and decision-making heuristics of the people using it. This topic, related to our work on automation bias, requires interdisciplinary work at the intersection of system and software engineering with psychology, with possible interesting explorations of legal and organisational aspects, and may suit people with various backgrounds in the hard sciences, computing or psychology.
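
One simple quantitative face of the false-alarm problem is the base-rate effect, sketched below in Python. The detector's accuracy and the rarity of real events are invented numbers, not measurements of any real system.

    # Minimal sketch: Bayes' theorem applied to an alerting system.
    # Even an accurate detector yields mostly false alarms when the events
    # it watches for are rare. All numbers are invented for illustration.

    def prob_real_given_alarm(sensitivity, false_alarm_rate, base_rate):
        # P(real event | alarm raised), by Bayes' theorem
        p_alarm = sensitivity * base_rate + false_alarm_rate * (1 - base_rate)
        return sensitivity * base_rate / p_alarm

    # A detector that catches 99% of real events and wrongly alarms on only
    # 1% of benign cases, watching a stream in which 1 case in 10,000 is real:
    print(prob_real_given_alarm(0.99, 0.01, 1e-4))  # ~0.0098

With these illustrative numbers fewer than 1% of alarms are genuine. How staff's trust and attention evolve when faced with such odds is part of what makes this a socio-technical problem rather than a purely algorithmic one.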