Trustworthiness and resilience of AI

Deep learning has made significant progress in recent years; however, there are concerns about whether it can be applied to safety-critical systems. This project studies the adaptation of software engineering techniques to machine/deep learning systems in order to improve the safety and trustworthiness of machine learning models. The study also extends to robotics and autonomous systems in which deep learning models serve as perception or control components.

This research is conducted under the auspices of the UK DSTL-funded project “TCM: Test coverage metrics for Artificial Intelligence - v2.0”, in collaboration with the University of Liverpool, Lancaster University, and Queen’s University Belfast. It focuses on developing new methods for the verification and validation of deep learning, with the aim of understanding and improving the safety and trustworthiness of learning-enabled autonomous systems. A major outcome is a software tool, named DeepConcolic, which tests deep learning models through coverage-guided testing. The tool works with both convolutional neural networks and recurrent neural networks.
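To illustrate the idea behind coverage-guided testing, the sketch below implements a simple neuron-coverage criterion on a toy ReLU layer and grows a test suite by keeping only inputs that raise coverage. Everything here, the toy layer, the random-search test generation, and all names, is an illustrative assumption for exposition; it is not DeepConcolic's actual interface or its concolic test-generation algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Toy hidden layer standing in for the network under test
# (purely illustrative; not DeepConcolic's interface).
W, b = rng.normal(size=(8, 16)), np.zeros(16)

def hidden_activations(x):
    """ReLU activations of the hidden layer for a batch of inputs."""
    return np.maximum(x @ W + b, 0.0)

def neuron_coverage(test_suite, threshold=0.0):
    """Fraction of hidden neurons driven above `threshold` by at least one test."""
    acts = hidden_activations(test_suite)      # shape: (n_tests, n_neurons)
    covered = (acts > threshold).any(axis=0)   # per-neuron coverage flags
    return covered.mean()

# Coverage-guided loop: keep a candidate input only if it raises coverage.
suite = rng.normal(size=(1, 8))
coverage = neuron_coverage(suite)
for _ in range(200):
    candidate = rng.normal(size=(1, 8))
    new_coverage = neuron_coverage(np.vstack([suite, candidate]))
    if new_coverage > coverage:                # candidate covers a new neuron
        suite = np.vstack([suite, candidate])
        coverage = new_coverage
print(f"neuron coverage: {coverage:.0%} with {len(suite)} tests")

The loop captures the essential feedback between a coverage metric and test generation that coverage-guided testing relies on; tools such as DeepConcolic apply the same principle with more sophisticated coverage criteria and test-generation strategies.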

This 21-month project started in July 2019 and is aligned with the Artificial Intelligence, Machine Learning and Data Analytics research theme of the IDT.

The following publications provide further insights into this research:

W. Huang, Y. Sun, X. Zhao, J. Sharp, W. Ruan, J. Meng, and X. Huang, “Coverage Guided Testing for Recurrent Neural Networks”, IEEE Transactions on Reliability, 2021.

W. Huang, Y. Zhou, Y. Sun, A. Banks, J. Meng, J. Sharp, S. Maskell, and X. Huang, “Formal Verification of Robustness and Resilience of Learning-Enabled State Estimation Systems for Robotics”, 2020.

For further information, please visit https://cgi.csc.liv.ac.uk/~acps/projects-post/TCM/ or contact Dr Jie Meng.