Technological advancement has shaped societies throughout history, and recently, artificial intelligence has seen a surge in use.
AI features have been implemented into apps and social media platforms of all sorts, from Snapchat and X (formerly known as Twitter) to study tools for students like Quizlet and Grammarly.
With the increase in AI usage, several problems have arisen, including deepfakes: AI-generated image, audio, and video hoaxes that are incredibly convincing to the untrained eye.
Some deepfakes that have circulated on the internet spread misinformation, such as a video of soccer player David Beckham speaking nine languages when in reality he speaks only one, and another in which Richard Nixon delivers a speech claiming that NASA's Apollo 11 mission failed and that all of the astronauts involved died.
Most recently, deepfakes of American pop singer-songwriter Taylor Swift have spurred Congress into action to protect against harmful AI.
The pictures of Swift included NSFW, non-consensual photos of the pop star, all of which circulated on the internet in late January. These photos were met with outcries from Swift’s massive fanbase and even garnered the attention of the White House.
While AI has been helpful to many companies, civil rights organizations fear the drastic harm it can cause. After the AI incident with Swift, lawmakers spoke up about their fear of AI abuse, including Rep. Joe Morelle, who has renewed efforts to establish jail time and fines for the creation of digitally altered explicit images.
On January 10, a few weeks before Swift’s AI incident, a group of U.S. House lawmakers introduced the No AI Fraud Act, stating that they hoped to create federal protections for citizens against AI abuse while continuing to uphold First Amendment rights online.
According to Rep. Maria Elvira Salazar, who is leading the bill, “What happened to Taylor Swift is a clear example of AI abuse. My bill, the No AI Fraud Act, will punish bad actors using generative AI to hurt others.”
As AI technologies continue to improve, access to them has only become easier, making it simple for people to harm others and spread false information, as happened with Beckham, Nixon, and Swift.
If passed, the act would protect people while preserving their creative freedom.
The representatives hope that the AI incident with Swift will help the No AI Fraud Act gain traction and finally become law. Since 2017, only seventeen states have enacted laws on the use of AI, but according to ABC News, that is not enough, as laws “at the state level… are inconsistent.”