
Google, Meta, Microsoft, Amazon make pledge on AI safety and security

11.08.2023

In a significant move towards ensuring the safety and security of AI technologies, major tech giants including Google, Meta (formerly Facebook), Microsoft, Amazon, OpenAI, Anthropic, and Inflection came together for a crucial meeting with US President Joe Biden. During the meeting, they collectively pledged to prioritise "safety, security, and trust" in the development and deployment of AI systems.

Under the category of safety, these companies committed to conducting rigorous testing of their AI systems, subjecting them to external evaluations, and making the results of these assessments public. They also vowed to assess the potential biological, cybersecurity, and societal risks associated with their AI technologies.

Regarding security, the companies promised to safeguard their AI products against cyber and insider threats, and they will collaborate in sharing best practices and standards to prevent misuse, reduce risks to society, and protect national security.

One significant agreement reached during the meeting centred on building trust. The companies pledged to make it easier for individuals to determine whether images are original, altered, or generated by AI. Furthermore, they committed to preventing AI from promoting discrimination or bias, protecting children from harm, and utilising AI to tackle global challenges like climate change and cancer.

The introduction of OpenAI's ChatGPT in late 2022 marked the beginning of a wave of generative AI tools released by major tech companies to the general public. OpenAI followed up with the launch of GPT-4 in mid-March, the latest version of the powerful language model that powers the ChatGPT AI chatbot. However, with the increasing adoption of such tools, concerns have arisen about potential harms, including the spread of misinformation and the exacerbation of bias and inequality.

In response to the pressing issues, the companies expressed their views and actions on the matter. Meta, for instance, welcomed the White House agreement and recently released its AI language model, Llama 2, as a free and open-source tool. Microsoft, a partner in Meta's Llama 2 project, reiterated its commitment to AI safety and expressed that the White House agreement would help maintain a balance between AI's promise and potential risks.

OpenAI, on the other hand, emphasised its ongoing collaboration with various stakeholders, including governments and civil society organisations, to advance AI governance. Amazon also supported the voluntary commitments, emphasising its dedication to innovation while ensuring necessary safeguards are in place to protect consumers and customers.

Anthropic pledged to announce its plans concerning cybersecurity, red teaming, and responsible scaling in the coming weeks. Meanwhile, Inflection AI's CEO highlighted safety as a core part of the company's mission.

Google's President of Global Affairs, Kent Walker, hailed the commitments as a milestone that will align efforts with the G7, the OECD, and national governments to maximise AI's benefits and minimise its risks. Google had previously launched its chatbot called Bard and committed to watermarking AI-generated content through its AI model, Gemini.

Despite the notable collective effort, some companies, such as Elon Musk's xAI and Apple, were not part of the discussions, with reports suggesting Apple is developing its own chatbot and large language model framework.

The Biden-Harris administration is taking further measures to address AI safety concerns by developing an executive order and seeking bipartisan legislation to ensure the safety of Americans. Additionally, the US Office of Management and Budget plans to release guidelines for federal agencies using or procuring AI systems.

Overall, the collective effort of these tech giants, along with the involvement of government and civil society, marks a significant step towards creating a safer and more secure landscape for AI technologies. The focus on transparency, collaboration, and public welfare underscores the industry's determination to maximise the benefits of AI while mitigating potential risks.