As artificial intelligence rapidly expands as a way for technology to accomplish humanlike tasks, it brings new risks, including fraud, privacy violations and job loss. It also raises questions about copyright, concerns about the spread of misinformation and, as some experts have warned, the possibility that humans will lose control over AI as it begins to evolve on its own.
In response, the White House on Monday issued an executive order on “Safe, Secure, and Trustworthy Artificial Intelligence,” setting standards to protect Americans from a range of potential threats related to AI. The order includes requirements for more oversight of and openness from tech companies, the development of systems to protect private data, and concrete efforts to prevent the spread of misinformation.
The last involves establishing standards for detecting AI-generated content. AI can be used to create fake images and other content, a particular problem for democracy during political campaigns, says Dan Evon, senior manager of education design with the News Literacy Project. “AI image generators can not only create convincing visuals of nonexistent events, but the widespread availability of these tools also creates distrust about genuine photos and videos,” he says.
AI is also a notoriously useful tool for criminals perpetrating a range of scams. The technology lets them easily clone voices to impersonate, say, a loved one in danger who needs money, or an agent from the Internal Revenue Service demanding that you pay back taxes immediately. (Read our story “Chatbots and Voice-Cloning Fuel Rise in AI-Powered Scams,” and listen to this recent episode of AARP’s podcast The Perfect Scam for more on these crimes.)
Because government impostor scams are so prevalent, the White House also wants to “make it easy for Americans to know that the communications they receive from their government are authentic” by using “content authentication and watermarking to clearly label AI-generated content.”