
The Pentagon has updated its guidance for autonomous weapons. (Getty Images)

WASHINGTON — The Pentagon today updated its decade-old guidance on autonomous weapon systems to include advances made in artificial intelligence, a new senior-level oversight group and clarification about the roles different offices within the department will take.

“I think one of… the things we sought to accomplish in the course of the update is clarifying the language to ensure a common understanding both inside and outside the Pentagon of what the directive says,” Michael Horowitz, director of the Pentagon’s Emerging Capabilities Policy Office, told reporters today ahead of the revised directive’s release, calling it “not a major policy change.” “The directive does not prohibit the development of any particular weapon system. It lays out requirements for autonomous and semi-autonomous weapon systems.”

DoD Directive 3000.09, initially signed on Nov. 21, 2012, by then-Deputy Secretary of Defense Ash Carter, “establishes DoD policy and assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms.”

Last May, Breaking Defense was first to report some details of the coming revision. At the time, Horowitz said in an interview that the “fundamental approach in the directive remains sound, that the directive laid out a very responsible approach to the incorporation of autonomy and weapons systems.”

Still, one of the biggest things the revised directive [PDF] accounts for is the “dramatic, expanded vision” for the role of AI in future military operations, he added. The revisions reflect DoD’s work on its “responsible AI” and AI ethical principles initiatives.

“And for autonomous weapons systems that incorporate artificial intelligence… the directive now specifies that they, like any system that uses artificial intelligence, whether a weapon system or not, would need to follow those guidelines as well,” Horowitz said, referring to DoD’s responsible and ethical AI initiatives. “So, you know, part of the motivation here was to ensure the inclusion of those AI policies, as part of this directive, even though the directive itself is about autonomous weapon systems, which are certainly not synonymous with artificial intelligence.”

The directive also requires additional senior-level reviews for the development and fielding of autonomous weapon systems and “continues to require that autonomous and semi-autonomous weapon systems be designed to allow commanders and operators to exercise appropriate human judgment over the use of force,” Horowitz told reporters.

The senior-level review will come in the form of the new Autonomous Weapon Systems Working Group, which consists of several different offices within the department — such as acquisition and sustainment, and research and engineering — that will support the Office of the Undersecretary of Defense for Policy.

“This is part of what we view as good governance,” Horowitz said, referring to the working group. “And… the directive… doesn’t change the approval requirements. You still have senior-level reviewers that are ultimately making the call on these systems… . What the autonomous weapons working group does is facilitate aggregating the information that senior leaders would need to be able to effectively make decisions, to essentially put the paper package together, to be able to have an effective review process, to ensure that either prior to development or prior to fielding that a proposed autonomous weapon system will fit with the requirements laid out in the directive.”

The updated document also assigns responsibilities to DoD offices, including offices that didn’t exist when the directive was approved in 2012. For example, the Chief Digital and AI Office, established last year, will be responsible for several efforts, including monitoring and evaluating AI capabilities and cybersecurity for both autonomous and semi-autonomous weapon systems and advising the Secretary of Defense on those matters.

The CDAO will also work with the Office of the Undersecretary of Defense for Research and Engineering to “formulate concrete, testable requirements” for implementing DoD’s responsible and ethical AI initiatives, among other things.

In May, Horowitz predicted the core of the autonomous weapons policy wouldn’t change much. But, he said, “You know, it has been a decade. And it’s entirely plausible that there are some updates and clarifications that would be helpful.”