Deepfakes in Artificial Intelligence
The term "deepfake" originated on Reddit in late 2017. A user that went by the name of "deepfakes" started sharing videos where celebrities' faces were swapped onto other people's bodies. Deepfakes, a type of artificial media created using AI techniques, have been a recent craving sparked by the introduction of artificial intelligence. These deepfakes can generate highly realistic but fake images, videos, or audio by manipulating content to mimic real people. They can be so high-quality and lifelike that they're difficult to distinguish from genuine material. These open AI websites allow anyone to create deepfakes, meaning if they fall into the wrong hands, it can cause immense drama.
Returning to Reddit, the "deepfake" user shared something more obscene rather than typical face-swapped celebrities. These were primarily pornographic content aimed at putting celebrities in a bad light, purposely causing drama and controversy. They used technology called generative adversarial networks, a machine-learning technique that allows for creating highly realistic synthetic images. The user would take the faces of celebrities and digitally insert them into sexually explicit content material without their consent. Why was this done? There is no honest answer for this besides people wanting to start drama and hate the world.
There is no real justification for creating a fake pornographic video or photo of someone and posting it on the internet for the world to see. Not only does this create controversy for the person it targets, but it also has numerous detrimental effects on viewers and platforms. It violates the person's consent and privacy, causes emotional and psychological harm in the form of depression and anxiety, and damages their reputation both online and in person. Recently, numerous states have created laws surrounding the use of deepfakes, so depending on the state in which a deepfake is made, there can be serious legal and ethical consequences. One example is Louisiana's Act 457, which "criminalizes deepfakes involving minors engaging in sexual conduct." Laws have also been created to address deepfakes in elections, an issue raised in the recent race between Donald Trump and Kamala Harris. For example, Texas SB 751 "makes it a criminal offense to fabricate a deceptive video with intent to injure a candidate or influence the outcome of an election." Seeing a deepfake online, especially false information related to elections, erodes viewers' trust in the media.
Even recently, deepfakes of celebrities have been made. In January 2024, deepfake images of Taylor Swift circulated on X. There was no rhyme or reason behind it besides haters seeking to destroy her career and reputation. Thankfully, due to her massive following and a public that is more informed about AI and deepfakes, the posts were taken down without Taylor Swift herself having to step in. However, older, less-informed people are still susceptible to these fakes. For example, my mother recently got a video on her Facebook feed of Atlanta Falcons quarterback Kirk Cousins explaining how much he "fucking hates the Chiefs and the referees." Since my mom was uninformed, she thought it was real and sent it to me and my father in absolute disgust. I also get many of these videos, including a very real-looking post-game interview with Joe Burrow after a shootout loss to the Ravens. These videos are made in good fun, created only for humor by having NFL players say outrageous things they would never say, but that doesn't mean everyone gets the joke or can tell what is genuinely AI.