Timothy S. Kroecker
The increasing reliance on human-computer interaction, coupled with dynamic environments where outcomes and choices are ambiguous, creates ethical decision-making situations with serious consequences, where errors could result in loss of life. We are developing approaches that make an autonomous system's decisions more apparent to its users, along with capabilities for the system to tailor its degree of automation to the situation and to input from the decision maker. This allows dynamically adjustable human/machine teaming that addresses the C2 challenges of Autonomous Systems, Manned/Unmanned Teaming, and Human Machine Interface and Trust. The work focuses on developing a system for modeling and supporting human decision making during critical situations, providing a mechanism for narrowing the choice options in ethical decisions faced by military personnel in combat and non-combat environments.
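One way to picture "tailoring the amount of automation based on the situation and input from the decision maker" is a policy that maps system confidence and operator input to an automation level. The sketch below is purely illustrative: the three-level scale, the confidence thresholds, and the override behavior are assumptions for exposition, not an AFRL design.

```python
def automation_level(system_confidence, operator_override=None):
    """Pick an automation level for a human/machine team.

    Levels follow a coarse, illustrative scale (an assumption,
    not a program-defined standard):
      "advise"  - system only suggests options
      "consent" - system acts after operator approval
      "auto"    - system acts; operator retains veto

    `operator_override`, when given, always wins: the decision
    maker can force a level regardless of system confidence.
    """
    if operator_override is not None:
        return operator_override
    if system_confidence >= 0.9:   # threshold values are placeholders
        return "auto"
    if system_confidence >= 0.6:
        return "consent"
    return "advise"
```

In practice the thresholds would themselves be tuned from behavioral data, and the operator's override history could feed back into how the system adjusts them.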
We propose developing software (an "ethical advisor") to identify situations where ethical dilemmas arise and quick, reliable decision making is essential, and to provide interventions in those situations. Our approach combines behavioral data and model simulation to build an interactive model of decision making that emphasizes the human element of the decision process. In the long term, understanding the fundamental aspects of human ethical decision making will provide key insights for designing fully autonomous computational systems whose decision processes consider ethics. As autonomous systems emerge and military applications are identified, we will work to provide verifiable assurance that our autonomous systems make decisions reflecting USAF moral and ethical values. The first step toward realizing this vision is focusing on human decision processes and capturing those values in a quantifiable model. The team has developed an ethical framework and a preliminary model of ethical decision making that will be developed further with the Air Force Academy (AFA) and Air University (AU).

In Year 1, we will articulate the individual psychological characteristics and situational factors that shape ethical dilemmas, and develop realistic dilemma scenarios. These scenarios will pair AI-driven computational agents with military personnel, who must make ethical decisions in combat and non-combat environments. In Year 2, we will develop the Ethical Advisor prototype, test the individual psychological characteristics and situational factors, refine the scenarios, and establish and implement collaborations across commands and services. In Year 3, we will test and integrate the model and Ethical Advisor into a mission system and conduct joint war-game testing.
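The coherence-driven modeling style referenced below (Yilmaz, Franco-Watkins, & Kroecker, 2017) can be sketched as Thagard-style coherence maximization: candidate actions and values are elements linked by weighted constraints, and the model seeks the accept/reject partition that satisfies the most constraint weight. The code below is a minimal, generic illustration of that idea; the element names, weights, and the exhaustive search are assumptions for exposition, not the published model.

```python
from itertools import product

def coherence_partition(elements, constraints):
    """Exhaustively search accept/reject partitions of `elements`,
    returning the partition that maximizes satisfied constraint weight.

    `constraints` maps (a, b) pairs to a weight: a positive weight is
    satisfied when a and b land on the same side (both accepted or
    both rejected); a negative weight is satisfied when they land on
    opposite sides. This follows Thagard's "coherence as constraint
    satisfaction" formulation in a brute-force form.
    """
    best_score, best = float("-inf"), None
    for bits in product([True, False], repeat=len(elements)):
        side = dict(zip(elements, bits))
        score = 0.0
        for (a, b), w in constraints.items():
            if w > 0 and side[a] == side[b]:
                score += w
            elif w < 0 and side[a] != side[b]:
                score += -w
        if score > best_score:
            best_score, best = score, side
    return best, best_score

# Hypothetical dilemma: a strike option coheres with "protect the unit"
# but conflicts with "minimize civilian harm"; weights are invented.
elements = ["strike", "protect_unit", "minimize_harm"]
constraints = {
    ("strike", "protect_unit"): 1.0,    # positive: accept or reject together
    ("strike", "minimize_harm"): -2.0,  # negative: keep on opposite sides
}
accepted, score = coherence_partition(elements, constraints)
```

Exhaustive search is exponential in the number of elements; a deployed advisor would use a connectionist relaxation or greedy approximation, but the objective being optimized is the same.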
We are seeking individuals from a variety of disciplines (Psychology, Philosophy, Computer Science) with experience in data gathering and summarization techniques, programming, and testing. The gathered data will be used to develop algorithms and software that begin to mimic human decision making in complex, ethics-laden situations.
Yilmaz, L., Franco-Watkins, A., & Kroecker, T. S. (2017). Computational models of ethical decision-making: A coherence-driven reflective equilibrium model. Cognitive Systems Research (February 2017).
Blais, A., & Thompson, M. M. (2013). What would I do? Civilians' ethical decision making in response to military dilemmas. Ethics and Behavior, 23(3), 237–249.
Fried, B. H. (2012). What does matter? The case for killing the Trolley Problem (or letting it die). The Philosophical Quarterly, 62(248), 506–529.