Under Secretary Jenkins’ Remarks at the Launch Event for the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy
Remarks
November 13, 2023
As prepared
Thank you, Ambassador Wood.
Let me start by echoing our thanks to all of you for joining us today.
Only nine months ago, at the Responsible AI in the Military Domain Summit in The Hague, I reflected on a recent speech by President Biden, in which he said, “we are not bystanders to history.”
Vice President Harris, who sends her regrets that she could not be here today, echoed a similar sentiment in London in advance of the UK AI Summit, when she said, “history will show that this was the moment when we had the opportunity to lay the groundwork for the future of AI.”
President Biden and Vice President Harris have led the U.S. Government’s efforts to collaborate with the private sector, civil society, and international partners to promote the advancement and adoption of AI in a manner that protects the public from potential harm and that ensures that everyone is able to enjoy its benefits.
In honor of today’s event, the Vice President addressed a letter to the endorsers, copies of which are available in the room. Let me highlight a few points from that letter:
I quote, “Ensuring the responsible development and use of AI by militaries requires global action. This Political Declaration is designed to meet this moment by jointly committing us to a set of critical norms…”
“The Declaration marks the beginning of an important dialogue among responsible states regarding the implementation of these foundational principles and practices. The United States is committed to doing our part, and we look forward to working together with our partners on this endeavor for years to come.” End quote.
Colleagues, as the Vice President’s letter affirms, endorsers should justifiably feel proud of their leadership role in seizing this moment. We are establishing a set of measures for responsible behavior, as well as a mechanism for States to discuss and address the challenges militaries face while adopting AI systems.
We will not only discuss these issues but help each other identify solutions to them — solutions that will benefit the entire world. We will lead in setting norms and good practices regarding responsible military use of AI.
We cannot predict how AI technologies will evolve or what they might be capable of in a year or five years. However, we know that there are steps States can take now to put in place the necessary policies and to build the technical capacities to enable responsible development and use, no matter the technological advancements.
We need, therefore, to come together as an international community around a set of strong norms for responsible development and deployment. Norms that will enable nations to harness the potential benefits of AI systems in the military domain while encouraging steps that avoid irresponsible, destabilizing, and reckless behavior.
Today I am excited to announce that the States endorsing this Declaration have taken this important step toward this objective. We are thrilled that 45 endorsers have joined together to seize the opportunity to lay the groundwork for the future of responsible military use of AI and autonomy.
And I hope even more of you see the value in this initiative. For those in the room who have not formally joined, we welcome you here today and encourage you to become part of the discussion going forward.
It is important for us to have broad participation, from as many endorsers as possible, so that we can achieve a shared understanding of these issues.
This Declaration breaks new ground in several respects.
To begin, it launches the first multilateral instrument and State dialogue addressing the full range of uses of AI and autonomy in the military domain – from logistics to personnel management; from intelligence collection to decision making processes.
It also provides a basis for a much more concrete dialogue on what “responsible” means in practice. What does an effective testing and assurance process look like? How do you exercise “appropriate care” in a range of practical applications?
This effort is complementary to the important, ongoing discussions in the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. LAWS are one facet of a much larger set of issues and implications of AI in the military domain.
We will continue to strongly support these important ongoing discussions in Geneva, as we believe the LAWS GGE is the appropriate multilateral forum for discussions on LAWS. But we also saw a critical need both to widen the aperture to consider the full range of military applications and to focus more explicitly on AI rather than just autonomy.
This effort also builds on the laudable work of numerous States to elevate these issues on the international agenda, including that of the Netherlands and the Republic of Korea, co-hosts of the REAIM Summit.
This Declaration was launched in the spirit of the REAIM Call to Action, and we look forward to continuing to participate in REAIM, which provides a premier forum for advancing multi-stakeholder dialogue on military uses of AI.
The Declaration articulates a set of specific practices to guide endorsers’ development and deployment of military AI and autonomy.
For example, the Declaration says States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions. This is a foundational practice for developing AI capabilities and testing their reliability.
The Declaration calls on States to take proactive steps to minimize unintended bias in military AI capabilities. This issue will be crucial for further discussion because bias can take very different forms with different consequences.
For example, an unintended bias in a personnel management system might favor candidates based on personal characteristics like gender, race, or ethnicity, while an unintended bias in a logistics system might prioritize certain types of military vehicles over others.
The Declaration calls for military AI capabilities to be developed with methodologies, data sources, design procedures, and documentation that are transparent to, and auditable by, their relevant defense personnel.
That way, if something goes wrong, States are fully equipped internally to trace back through the chain of events to determine the root cause and enable corrective actions to prevent repeating that failure in the future.
Further, the Declaration underscores the importance of safety, calling for appropriate safeguards to mitigate risks of failures in military AI capabilities. Safeguards, such as the ability to detect and avoid unintended consequences, can be technically demanding to implement, and thus this is a crucial area in which endorsers can collaborate.
This brings me to the second novel aspect of this initiative – we envision this collaboration among endorsers to be far more robust than simply committing to high-level principles. The Declaration is a foundation for collaboration and exchanges, such as sharing best practices, expert-level exchanges, and capacity-building activities.
With this in mind, I would note that where the Declaration uses broad terms, it is by design, not by accident. The Declaration provides a starting point for endorsers with different national approaches to have a discussion, for instance, on expanding the guidance or working through different views on what oversight by senior officials should look like.
We did not endeavor to unilaterally resolve every point of divergence or elaborate in technical detail every practice because these are areas in need of much more discussion – and in some cases, best practices will continue to be developed and refined. The Declaration is designed to be flexible to allow for endorsers to make progress in our shared understanding of these issues.
The potential outcomes of this process will be more than just a set of practices. The success of this initiative should be measured in the progress we make in enabling and motivating States to implement these practices.
For instance, we may conduct expert-level exchanges on how to conduct legal reviews for AI-enabled or autonomous capabilities, or host seminars on technical challenges such as tools and techniques for mitigating unintended biases.
This brings me to the final feature of this initiative, which is its inclusivity. Having a large and diverse group of States involved is a major strength of this initiative.
Many of us will encounter similar challenges when developing and deploying these systems, so we will all benefit from exchanges and collaboration, such as the sharing of approaches or lessons learned.
We, therefore, endeavor to bring together a diverse group of endorsers from different regions and with different perspectives on these issues.
We share a common goal of reducing the potential risks and uncertainties brought by the introduction of AI tools into the military domain—whether a military is developing its own AI capabilities or acquiring them from commercial services—and of strengthening compliance with international humanitarian law.
While we may not agree on every single aspect of this complex issue, it’s important that our differences don’t impede progress in areas where we can agree.
On that note, I want to briefly address one particularly conspicuous difference between the initial version of the Declaration released in February and the current version.
The February version included a statement based on a commitment the United States made together with France and the United Kingdom to “maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”
In our consultations with others, it became clear that, while many welcomed this statement of assurance, the nexus between AI and nuclear weapons is an area that requires much more discussion. We removed this statement so that it would not become a stumbling block to endorsement.
We believe that the Declaration text you see here today — the result of our extensive consultations with States — is a strong statement of shared norms of responsible behavior, and an excellent foundation for further dialogue and engagement with the international community.
Let me again commend the endorsers for your collective leadership in joining this initiative. The Declaration is already a much stronger document thanks to the invaluable feedback many of you have given us. We look forward to continuing these discussions and bringing others into this dialogue.