Opportunity at National Institute of Standards and Technology (NIST)
Explainable Artificial Intelligence
Information Technology Laboratory, Software and Systems Division
Please note: This Agency only participates in the February and August reviews.
Today, many cyber-physical instruments combine physical sensing (e.g., microscopy imaging) with cyber (digital) Artificial Intelligence (AI)-based predictions. These instruments raise safety concerns because they rely on black-box AI models and lack guardrails if the physical and/or digital parts of the instrument fail or are attacked by an adversary. We aim to address these safety concerns by researching a metrology for establishing digital references, safety zones (boundaries), validation methods for AI risk management, and baselines for traceability of the physical and digital parts of AI-enabled instruments. Our research is applied to instruments used in regenerative medicine and cancer research.
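The safety-zone idea above can be illustrated with a minimal, hedged sketch of a confidence-based guardrail: an AI prediction is accepted only when the model's estimated confidence lies inside a predefined safety boundary, and otherwise the instrument abstains and falls back to a safe state or human review. The 0.9 threshold and the function names below are illustrative assumptions for this sketch, not a NIST-specified method.

```python
import math

def softmax(logits):
    """Convert raw model outputs (logits) into a probability distribution."""
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def guarded_prediction(logits, threshold=0.9):
    """Accept the top class only if its probability clears the safety
    threshold (the 'safety zone'); otherwise abstain by returning None
    so the instrument can fall back to a safe state or human review.
    The 0.9 threshold is an illustrative assumption."""
    probs = softmax(logits)
    top = max(range(len(probs)), key=lambda i: probs[i])
    if probs[top] < threshold:
        return None, probs[top]          # outside the safety zone: abstain
    return top, probs[top]               # inside the safety zone: accept

# A confident prediction passes the guardrail,
# while an ambiguous one triggers an abstention.
print(guarded_prediction([8.0, 0.5, 0.2]))
print(guarded_prediction([1.1, 1.0, 0.9]))
```

In a real AI-enabled instrument, the raw softmax score would likely be replaced by a calibrated or out-of-distribution-aware confidence estimate, since uncalibrated deep networks can be confidently wrong.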
OpenAI Microscope, a collection of visualizations of every significant layer and neuron of 13 important vision models. URL
Peter Bajcsy et al., "AI Model Utilization Measurements For Finding Class Encoding Patterns," arXiv, Dec. 2022. URL
Nicholas J. Schaub et al., "Deep learning predicts function of live retinal pigment epithelium from quantitative microscopy," Journal of Clinical Investigation, November 14, 2019. DOI
Artificial Intelligence; Computer Vision; Confidence Estimation
Open to U.S. citizens
Open to Postdoctoral applicants