Artificial Intelligence (AI), and specifically Machine Learning (ML), has enabled many new classes of intelligent devices, from self-driving and lane-correcting automobiles to passenger and military airplanes, Magnetic Resonance Imaging (MRI) devices, and other healthcare advances. Early research addressed the basic properties, structure, and applications of AI. However, as AI has become more pervasive, greater attention has been paid to its security and robustness. Many of the enabling ML constructs are vulnerable to manipulation by adversarial examples that cause them to produce incorrect results.
Defenses against adversarial attacks on AI have received a great deal of attention recently, particularly in the image recognition domain. Attacks against AI include crafting adversarial samples at test time and poisoning the training set; the latter includes "backdoor" attacks, which attempt to plant insidious trojans into AI models.
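To make test-time adversarial example crafting concrete, the sketch below applies a fast-gradient-sign-style perturbation to a fixed linear (logistic) classifier. The weights, input, and perturbation budget are illustrative assumptions chosen for this sketch, not values from any system discussed here.

```python
import numpy as np

def fgsm(w, x, y, eps):
    """Fast gradient sign perturbation for a logistic (linear) classifier.

    The gradient of the logistic loss log(1 + exp(-y * w.x)) with respect
    to the input x points along -y * w, so the attack moves each feature
    by eps in the sign of that direction.
    """
    grad_sign = np.sign(-y * w)   # sign of d(loss)/dx
    return x + eps * grad_sign

# Illustrative classifier weights and clean input (assumed values).
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.1])          # true label y = +1
x_adv = fgsm(w, x, y=1, eps=0.2)

print(np.dot(w, x))      # positive score: clean input classified correctly
print(np.dot(w, x_adv))  # negative score: small perturbation flips the label
```

The same idea scales to deep networks, where the input gradient is obtained by backpropagation rather than in closed form.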
This research has two main thrusts, or opportunities:
1. This thrust seeks both to apply and to extend defenses that protect AIs for data domains of interest, including video sequences, time series, LIDAR, and network traffic data, and for commonly used recurrent neural network architectures such as LSTMs. Also of interest are the related problems of data security and integrity, the application of adversarial AI and other machine learning techniques to problems such as verifying AIs used as numerical solvers for systems of PDEs (model verification), e.g., for solving power system equations, and the use of AI for software verification. This includes (1) development of theory and defenses against adversarial attacks on AI and ML, and (2) cyber/physical protection for cloud computing, both centralized and extended to the edge.
2. Aerial avionic cyber-physical systems (CPS) are dynamic, data-driven systems that rely on various sensors (e.g., electro-optical, synthetic aperture radar, and infrared) and on statistical machine learning models with real-time feedback loops to operate. To strengthen the cybersecurity and information security of aerial mission and control systems, new tools and paradigms must be developed for the next generation of cyber-assurance technology. The safety, security, and enabled functionality of high-assurance aerial CPS must be guaranteed by means of information assurance, information security, data quality, and resiliency to adversarial attacks. This thrust seeks to research and develop (1) mitigations for attacks that perturb the input to the target ML system at different acquisition stages, (2) mitigations that respect finite computational resources and real-time system response requirements, and (3) analytics for target detection, shape recognition, and tracking.
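One widely studied defense relevant to both thrusts is adversarial training: augmenting each training step with adversarially perturbed copies of the data so the model learns to resist input perturbations. The sketch below does this for a logistic classifier with fast-gradient-sign perturbations; the toy data set, learning rate, and perturbation budget are illustrative assumptions, not values from the research described above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, x, y, eps):
    # For logistic loss, the input-gradient direction is sign(-y * w).
    return x + eps * np.sign(-y * w)

def adversarial_train(X, Y, eps=0.1, lr=0.1, epochs=200):
    """Gradient descent on logistic loss over clean AND perturbed inputs."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            for xi in (x, fgsm(w, x, y, eps)):  # clean + adversarial copy
                margin = y * np.dot(w, xi)
                w += lr * y * (1.0 - sigmoid(margin)) * xi
    return w

# Tiny linearly separable toy set (assumed for illustration).
X = np.array([[1.0, 0.2], [2.0, 1.0], [-1.0, -0.2], [-2.0, -1.0]])
Y = np.array([1, 1, -1, -1])
w = adversarial_train(X, Y)
print(np.all(np.sign(X @ w) == Y))  # model fits the clean data
```

In a real-time avionics setting, the defensive cost is paid at training time; inference remains a single forward pass, which is why this family of defenses is attractive under the finite-compute and response-time constraints noted in thrust 2.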
Sun, L., Tan, M., & Zhou, Z. (2018). A survey of practical adversarial example attacks. Cybersecurity, 1(1), 1-9.
Zhang, J., & Li, C. (2019). Adversarial examples: Opportunities and challenges. IEEE Transactions on Neural Networks and Learning Systems, 31(7), 2578-2593.
Gong, Y., Li, B., Poellabauer, C., & Shi, Y. (2019). Real-time adversarial attacks. arXiv preprint arXiv:1905.13399.
Xiang, Z., Miller, D. J., & Kesidis, G. (2020). Detection of Backdoors in Trained Classifiers Without Access to the Training Set. IEEE Transactions on Neural Networks and Learning Systems.
Machine Learning; Real-Time Systems; Adversarial Examples; Artificial Intelligence; Deep Learning; Attacks on Machine Learning Systems