
Google, Meta, other tech companies to establish AI “safeguards”

AI legislation is still a ways off, but companies such as Microsoft and Amazon say they have committed themselves to transparency and independent third-party testing for their respective AI technologies.

Justin Carter, Contributing Editor

July 21, 2023

Legion in BioWare's Mass Effect 2.

According to a new report from NPR, several tech companies are set to reveal their commitments governing how they develop, share, and release AI technology.

Those companies, which include the likes of Google, Meta, and OpenAI, will reportedly announce their voluntary agreements at the White House. The commitments are meant to be transparent to both the government and U.S. citizens, and range from privacy protections to a watermark signifying when content is AI-generated.

As game developers gradually incorporate AI into different parts of development, these commitments could have an effect on studios (such as Ubisoft) that have already invested in the technology.

Concerns about AI and its effect on the general public have been raised for some time. Last week, the FTC launched an investigation into OpenAI's ChatGPT bot to determine whether it has breached any consumer protection laws. At time of writing, it's not clear whether the US regulator's actions played a part in this decision, or if it's involved at all.

NPR reports that these companies have also agreed to have their AI technology tested by an independent third party prior to release. White House chief of staff Jeff Zients called these measures "just a start. The key here is implementation and execution in order for these companies to perform and earn the public's trust."

"U.S. companies lead the world in innovation," he continued, "But they have an equal responsibility to ensure that their products are safe, secure and trustworthy."

What AI regulation may mean for the game industry

Within the creative field (video games included), AI has reportedly already made a heavy impact on areas such as art and voice acting. Earlier in the week, voice actor Victoria Atkin told Game Developer about how modders have been using AI to replicate her voice.

Part of the reason she's striking alongside the WGA and SAG-AFTRA is that AI "stole her voice" for the role of Assassin's Creed Syndicate's Evie Frye. Other voice actors have raised that same concern over the past year, including SungWon Cho (Borderlands 3) and Yuri Lowenthal (Marvel's Spider-Man 2).

Some voice actors have allegedly been made to sign their voices away for machine learning purposes, while others only learn their voice has been cloned by AI after a mod makes the rounds. AI is one of the big points of contention in the SAG-AFTRA strike, and voice acting illustrates the technology's potential pitfalls.

If the White House does secure some form of AI regulation, it would likely be a great benefit to the entertainment industry. But at time of writing, there are no specific mechanisms to hold tech companies accountable, nor any penalties drawn up for companies that fail to meet those agreements.

The White House is reportedly working with Congress to develop actual legislation in the coming weeks. But for now, Zients said the Biden administration would "use every lever that we have in the federal government to enforce these commitments and standards."

About the Author(s)

Justin Carter

Contributing Editor, GameDeveloper.com

A Kansas City, MO native, Justin Carter has written for numerous sites including IGN, Polygon, and SyFy Wire. In addition to Game Developer, his writing can be found at io9 over on Gizmodo. Don't ask him about how much gum he's had, because the answer will be more than he's willing to admit.

