Opportunity at the National Institute of Standards and Technology (NIST)
Training, Optimization and Benchmarking of Hardware Neural Networks
Physical Measurement Laboratory, Applied Physics Division
Please note: This Agency only participates in the February and August reviews.
Sonia Mary Buckley
Adam Nykoruk McCaughan
Machine learning algorithms, such as deep learning, have revolutionized our ability to solve pattern recognition and other traditionally "human" problems. However, such algorithms still do not capture all of the attributes of intelligence, and they are power- and resource-hungry when implemented on traditional digital computers. These issues have compelled engineers to develop new hardware for AI based on a diverse set of emerging devices, including photonic, memristive, magnetic, and superconducting platforms. Many of these systems use analog or mixed-signal processing instead of digital processing, greatly increasing operating speed and energy efficiency. Despite these advances, significant research challenges must be overcome before such hardware platforms can be widely adopted. One of the biggest is the incompatibility of the new hardware with traditional machine learning algorithms such as backpropagation. The goal of this project is to develop and demonstrate a general training technique that can be implemented natively across this diversity of hardware neural networks. Research opportunities include implementing physical models of different hardware platforms, extending the technique to spiking hardware such as Intel's Loihi or commercial FPGAs, and implementing newly developed continual-learning and few-shot-learning benchmark tasks.
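The multiplexed gradient descent work cited below belongs to the family of perturbative, model-free training methods, which estimate gradients from cost evaluations alone rather than by backpropagating through the hardware. As a rough illustration of that class of techniques (not the project's actual method), here is a minimal sketch of simultaneous-perturbation weight updates on a toy least-squares problem; the toy model, names, and hyperparameters are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hardware" system: a black box whose cost we can evaluate but not
# differentiate analytically (here, a linear model with mean-squared error).
X = rng.normal(size=(64, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

def cost(w):
    return np.mean((X @ w - y) ** 2)

def perturbative_gradient(w, eps=1e-3):
    """Estimate the gradient by perturbing all weights simultaneously
    with a random +/-1 vector, using only two cost evaluations and no
    backpropagation (SPSA-style estimate)."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    g_hat = (cost(w + eps * delta) - cost(w - eps * delta)) / (2 * eps)
    # For +/-1 perturbations, dividing by delta equals multiplying by it.
    return g_hat * delta

# Plain gradient descent driven by the perturbative estimate.
w = np.zeros(4)
lr = 0.05
for step in range(2000):
    w -= lr * perturbative_gradient(w)

print(np.round(w, 2))  # converges toward w_true
```

The appeal for hardware is that the update rule needs only global cost feedback and local random perturbations, both of which can be generated natively on analog or spiking substrates.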
A. N. McCaughan, B. G. Oripov, N. Ganesh, S. W. Nam, A. Dienstfrey, and S. M. Buckley, “Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation,” APL Mach. Learn. 1, 026118 (2023).
J. Yik et al., “NeuroBench: Advancing neuromorphic computing through collaborative, fair and representative benchmarking,” arXiv:2304.04640 (2023).
AI; hardware for AI; spiking neural networks; semiconductors; machine learning; lifelong learning
Open to U.S. citizens
Open to Postdoctoral applicants