Twitter has promised to introduce new rules in an effort to fight deepfakes. Despite the attention the technology has attracted for its innovation, content of this type has been met with growing concern.
Why so serious?
Deepfake media has been making the rounds in the news for its potentially serious disinformation capabilities. The technology involved can manipulate video, images and audio so convincingly that viewers can be misled into believing they are real.
There have been cases of people creating deepfake content for revenge porn, humiliating those depicted. Many videos have also misrepresented politicians, stirring up opposition through false means. Furthermore, deepfakes can be used for scams, fooling people into trusting those shown in the content.
In response, Twitter’s Safety team has made a statement addressing the rise of deepfakes, which it calls synthetic media.
“We’re always updating our rules based on how online behaviors change. We’re working on a new policy to address synthetic and manipulated media on Twitter – but first we want to hear from you,” said Twitter Safety.
Twitter states that this media has been significantly altered or fabricated in a way that changes its original meaning or purpose. The social media powerhouse says such content makes it seem as though certain events took place when they never happened.
The company went on to share that the new policy will address this media because it could impact someone’s physical safety. However, it wants to hear from its users before confirming the new rules.
“Why are we doing this?
1. We need to consider how synthetic media is shared on Twitter in potentially damaging contexts.
2. We want to listen and consider your perspectives in our policy development process.
3. We want to be transparent about our approach and values.”
Engadget reports that other online giants are also addressing the rise of synthetic media. Amazon and Microsoft have joined Facebook’s Deepfake Detection Challenge, a move that saw Facebook invest $10 million to create open-source tools that institutions can use to identify manipulated content.
These companies have partnered with some of the world’s most renowned universities. Researchers from MIT and the University of Oxford will work together to develop effective countermeasures.
Facebook also announced that it is putting measures in place to block such posts ahead of the 2020 elections. This overhaul comes after criticism of the platform’s role in spreading misinformation during the previous campaign.
As the public gets used to the rise of deepfakes, it is positive news that tech companies are acknowledging their role. However, it will be interesting to see whether their policies will also affect the spread of harmless content of this type.
What do you think about the battle against deepfakes? Let us know your thoughts in the comment section.