Seven leading AI companies were brought together by Joe Biden at the White House to discuss voluntary commitments for safe, secure, transparent AI technology.
Heaptalk, Jakarta — Seven leading AI companies, namely Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, have agreed on voluntary commitments at the White House to advance the safe, secure, and transparent development of AI technology (07/21).
The commitments cover several measures, including internal and external security testing of AI systems before release, as well as watermarking AI-generated content to reduce the risk of fraud.
“These commitments are a promising step, but we have a lot more work to do together,” said US President Joe Biden, as cited by Reuters.
Accordingly, OpenAI affirmed that it will continue to invest in research areas that can help inform regulation, including techniques for assessing potentially dangerous capabilities in AI models, as stated on the company’s official website.
“Policymakers around the world are considering new laws for highly capable AI systems. Today’s commitments contribute specific and concrete practices to that ongoing discussion. This announcement is part of our ongoing collaboration with governments, civil society organizations, and others around the world to advance AI governance,” said Anna Makanju, VP of Global Affairs at OpenAI.
Supporting a pilot of the National AI Research Resource
Similarly, Microsoft’s Vice Chair and President Brad Smith stated that the company is expanding its safe and responsible AI practices, working alongside other industry leaders by endorsing all the voluntary commitments presented by President Biden and independently committing to several others that support these critical goals.
“Microsoft’s additional commitments focus on how we will further strengthen the ecosystem and operationalize the principles of safety, security, and trust. From supporting a pilot of the National AI Research Resource to advocating for the establishment of a national registry of high-risk AI systems, we believe that these measures will help advance transparency and accountability,” said Smith.
Key points of the voluntary commitments from leading AI companies to manage AI risks:
Ensuring products are safe before introducing them to the public
- The companies commit to internal and external security testing of their AI systems before their release.
- The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks.
Building systems that put security first
- The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
- The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
Earning the public’s trust
- The companies commit to developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system.
- The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
- The companies commit to prioritizing research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy.
- The companies commit to developing and deploying advanced AI systems to help address society’s greatest challenges.