NRC Research and Fellowship Programs
Fellowships Office
Policy and Global Affairs

RAP opportunity at Naval Air Warfare Center Weapons Division (NAWCWD)

Trustworthy, Efficient, and Interpretable AI Systems for Multi-Agent Decision Support and Deployment

Location

Naval Air Warfare Center Weapons Division, Research Dept., Physics Division
China Lake, CA 93555

Opportunity ID: 34.01.02.C1042

Advisers

Name: Josh Wilkerson
Email: joshua.l.wilkerson4.civ@us.navy.mil
Phone: 858-978-2502

Description

Opportunity Overview
This research opportunity focuses on the development of scalable, modular, and interpretable AI systems for use in mission-critical, resource-constrained, and operationally dynamic environments. Specifically, it supports ongoing efforts to develop:

  • Multi-agent AI frameworks that leverage the complementary strengths of multiple large language models (LLMs) for collaborative decision-making and tool use.
  • Mechanisms for model compression (e.g., quantization, distillation) that enable advanced AI models to run locally on low-SWaP (Size, Weight, and Power) platforms.
  • Mechanistic interpretability techniques for tracing and auditing internal model circuits to ensure safety and reliability of compressed models.

The research aims to create AI systems that are not only effective and efficient, but also explainable, auditable, and trustworthy.

Research Focus Areas
This opportunity supports one or both of the following research threads:

1. Multi-Agent AI Systems
Applicants will contribute to the development of a multi-agent orchestration framework built with tools such as Microsoft AutoGen and LangChain. This system enables:

  • Role-specialized agents (e.g., planners, verifiers, developers) to collaborate through structured dialogue.
  • Natural language-based workflows, dynamic tool use, and domain-specific reasoning via retrieval-augmented generation (RAG).
  • Modular composition, allowing agents to be swapped, specialized, or critiqued by other agents on the team.

Key challenges include task decomposition, peer oversight, tool integration, and managing memory/history across agents.
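
As a concrete illustration, the sketch below wires up a small planner/developer/verifier team using the AutoGen 0.2-style group-chat API (pip install pyautogen). The agent roles, system messages, model name, and task prompt are illustrative assumptions, not a description of the framework under development here.

    import autogen

    # Shared LLM settings; replace the model and key with a real endpoint.
    llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

    # Role-specialized agents: a planner decomposes the task, a developer
    # proposes solutions, and a verifier critiques them before acceptance.
    planner = autogen.AssistantAgent(
        name="planner",
        system_message="Decompose the user's task into small, verifiable steps.",
        llm_config=llm_config,
    )
    developer = autogen.AssistantAgent(
        name="developer",
        system_message="Carry out each step; state which step you are addressing.",
        llm_config=llm_config,
    )
    verifier = autogen.AssistantAgent(
        name="verifier",
        system_message="Check the developer's output against the plan; flag errors.",
        llm_config=llm_config,
    )
    user = autogen.UserProxyAgent(
        name="user", human_input_mode="NEVER", code_execution_config=False
    )

    # A group chat gives every agent visibility into the shared dialogue
    # history; the manager handles turn-taking (decomposition and oversight).
    chat = autogen.GroupChat(agents=[user, planner, developer, verifier],
                             messages=[], max_round=8)
    manager = autogen.GroupChatManager(groupchat=chat, llm_config=llm_config)
    user.initiate_chat(manager, message="Summarize anomalies in the sensor log and propose next steps.")

Because each agent is an ordinary configured object, any one of them can be swapped for a differently specialized model or paired with a RAG retriever without touching the rest of the team.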

2. Compression and Interpretability
Research in this area will explore:

  • Applying and benchmarking compression techniques such as GPTQ, AWQ, SmoothQuant, and LoRA on transformer-based LLMs.
  • Developing methods to compare full-precision and compressed model internals using sparse autoencoders, logit lenses, and activation tracing.
  • Creating lightweight runtime monitors (“circuit guards”) to detect anomalous activation patterns in real time.
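
As a sketch of the second bullet, the snippet below runs a logit-lens comparison between a full-precision model and a 4-bit quantized copy loaded through Hugging Face Transformers with bitsandbytes (standing in here for GPTQ/AWQ, which use their own loaders). The GPT-2 model choice, its module names (transformer.ln_f, lm_head), and the KL divergence metric are illustrative assumptions; 4-bit loading also requires a CUDA device.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    name = "gpt2"  # placeholder; any causal LM with a comparable layout works
    tok = AutoTokenizer.from_pretrained(name)
    full = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)
    quant = AutoModelForCausalLM.from_pretrained(
        name,
        output_hidden_states=True,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
        device_map="auto",
    )

    def layer_logits(model, text):
        # Logit lens: push each layer's hidden state through the final layer
        # norm and unembedding to get a per-layer next-token distribution.
        inputs = tok(text, return_tensors="pt").to(model.device)
        with torch.no_grad():
            hidden = model(**inputs).hidden_states
        return [model.lm_head(model.transformer.ln_f(h))[0, -1].float().cpu()
                for h in hidden]

    # Per-layer KL(full || quant): large spikes localize where quantization
    # has distorted the model's intermediate "beliefs" about the next token.
    prompt = "The system reported an anomaly in"
    for i, (a, b) in enumerate(zip(layer_logits(full, prompt), layer_logits(quant, prompt))):
        kl = torch.nn.functional.kl_div(b.log_softmax(-1), a.softmax(-1), reduction="sum")
        print(f"layer {i:2d}  KL(full || quant) = {kl.item():.4f}")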

This work bridges the gap between cutting-edge interpretability research and applied system safety.
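
The “circuit guards” in the last bullet above could be prototyped with ordinary PyTorch forward hooks. The sketch below learns a per-module envelope of activation norms from a handful of calibration prompts and raises alerts when later inputs fall outside it; the mean-norm statistic and the 6-sigma threshold are illustrative assumptions, not a specified design.

    import torch

    class ActivationGuard:
        """Forward hook that records activation norms during a calibration
        phase, then flags outputs beyond mean +/- sigma * std at inference."""

        def __init__(self, module, name, sigma=6.0):
            self.name, self.sigma = name, sigma
            self.calibrating = True
            self.samples, self.alerts = [], []
            module.register_forward_hook(self._hook)

        def _hook(self, module, inputs, output):
            # Transformer blocks often return tuples; the hidden state is first.
            hidden = output[0] if isinstance(output, tuple) else output
            norm = hidden.detach().float().norm(dim=-1).mean().item()
            if self.calibrating:
                self.samples.append(norm)
                return
            mean = sum(self.samples) / len(self.samples)
            std = (sum((s - mean) ** 2 for s in self.samples) / len(self.samples)) ** 0.5
            if abs(norm - mean) > self.sigma * (std + 1e-6):
                self.alerts.append((self.name, norm))  # anomalous activation pattern

    # Usage sketch (a loaded `model` is assumed): attach one guard per
    # transformer block, run calibration prompts, then switch to monitoring.
    # guards = [ActivationGuard(blk, f"block_{i}") for i, blk in enumerate(model.transformer.h)]
    # ... run calibration prompts ...
    # for g in guards: g.calibrating = False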

Impact
This research directly supports the Navy’s goal of deploying reliable, interpretable AI systems on autonomous platforms, in operational planning tools, and in embedded systems. By emphasizing safety, adaptability, and explainability, this work lays the foundation for trusted AI systems that operate in the field with minimal oversight, even under changing mission conditions or degraded inputs.

Experience and Skills
This opportunity is ideal for candidates with experience in one or more of the following areas:

  • Transformer-based LLM architectures and agent-based AI
  • Machine learning model compression (quantization, pruning, distillation)
  • AI tool use and orchestration (AutoGen, LangChain, Hugging Face tools)
  • Mechanistic interpretability (e.g., circuit tracing, sparse autoencoders)
  • Retrieval-augmented generation (RAG)
  • Python-based ML development (PyTorch, LangChain, Transformers)


Key Words
AI Safety; Model Compression; Mechanistic Interpretability; LLMs; Multi-Agent Systems; RAG; Trustworthy AI; Tool-Using Agents; Explainable AI; Edge Deployment

Eligibility

Citizenship: Open to U.S. citizens
Level: Open to Postdoctoral applicants

Stipend

Base Stipend: $80,000.00
Travel Allotment: $3,000.00

Experience Supplement

Postdoctoral awardees will receive an appropriately higher stipend based on the number of years of experience past their PhD.

Additional Benefits

Relocation

Awardees who reside more than 50 miles from their host laboratory and remain on tenure for at least six months are eligible for paid relocation to within the vicinity of their host laboratory.

Health insurance

A group health insurance program is available to awardees and their qualifying dependents in the United States.
