Last week, US President Joe Biden announced that several artificial intelligence (AI) companies, including OpenAI, Alphabet, and Meta Platforms, have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer.

A Promising Step

Biden praised the companies’ commitments as a promising step, but acknowledged that there is still much work to be done. At a White House event, he addressed growing concerns about the potential for AI to be used for disruptive purposes and emphasized the need for vigilance against the threats that emerging technologies pose to US democracy.

Companies’ Commitments

The companies, which also include Anthropic, Inflection, Amazon.com, and OpenAI partner Microsoft, pledged to test systems thoroughly before releasing them, to share information about how to reduce risks, and to invest in cybersecurity. This move is seen as a win for the Biden administration’s efforts to regulate the rapidly growing technology.

Microsoft welcomed the president’s leadership in bringing the tech industry together to work on concrete steps to make AI safer, more secure, and more beneficial for the public.

Growing Concerns About AI

As generative AI, which uses data to create new content like ChatGPT’s human-sounding prose, has become increasingly popular this year, lawmakers around the world have begun considering how to mitigate the dangers of this emerging technology to national security and the economy.

The US lags behind the EU in tackling AI regulation. In June, EU lawmakers agreed on a set of draft rules that would require systems like ChatGPT to disclose AI-generated content, help distinguish deepfake images from real ones, and ensure safeguards against illegal content.

US Senate Majority Leader Chuck Schumer has called for comprehensive legislation to advance and ensure safeguards on AI. Congress is currently considering a bill that would require political ads to disclose whether AI was used to create imagery or other content.

Watermarking AI-Generated Content

Seven companies have committed to developing a system to watermark all forms of content generated by AI, including text, images, audio, and video. This will allow users to know when AI technology has been used.

The watermark will be embedded in the content itself and should make it easier for users to spot deepfake images or audio that depict violence that never occurred or portray a politician in a false, unflattering light. However, it remains unclear how the watermark will stay evident once the content is shared.
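None of the companies have published the details of their watermarking schemes, so the following is only a toy illustration of the general idea: a machine-readable mark can be embedded invisibly in generated text. This sketch (hypothetical function names, not any company's actual method) hides a short tag using zero-width Unicode characters, one simple way a mark can survive copy-paste while staying invisible to readers.

```python
# Toy watermarking sketch: hides a tag in text via zero-width Unicode characters.
# This is an illustration only, not any vendor's published scheme.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_watermark(text: str, tag: str) -> str:
    """Encode the tag as bits and append them, invisibly, after the first word."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    mark = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    head, _, rest = text.partition(" ")
    return head + mark + (" " + rest if rest else "")

def extract_watermark(text: str) -> str:
    """Collect zero-width characters, decode them back into the tag."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

marked = embed_watermark("Generated by a language model.", "AI")
print(extract_watermark(marked))  # prints "AI"
```

A scheme like this also hints at the fragility the article notes: the mark vanishes if the text is retyped or the invisible characters are stripped, which is why robust provenance signals are an open problem.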

Protecting Users’ Privacy

In addition to watermarking content, the companies also pledged to focus on protecting users’ privacy as AI develops and ensuring that the technology is free of bias and not used to discriminate against vulnerable groups. Other commitments include developing AI solutions for scientific problems such as medical research and mitigating climate change.

President Biden hosted executives from the seven companies at the White House on Friday and stated that he is also working on developing an executive order and bipartisan legislation on AI technology. He emphasized the rapid pace of technological change and expressed his amazement at how much has changed in recent years.

By Shamiso Miracle

Shamiso Miracle completed her degree in journalism and media studies at the University of Zimbabwe before honing her skills at Savanna News. She then went on to work at iHarare News, becoming a voice for everyday SA citizens who wanted to share their stories. When she's not writing news that entertains and inspires, Shamiso is an avid reader and a wellness bunny.
