Facebook has moved to respond to criticism over political advertising by banning what it describes as ‘manipulated media’ ahead of the 2020 election. But scratching beneath the surface reveals that the move is, in reality, a fairly moderate measure.
Effectively, Facebook is banning ‘deepfakes’: videos manipulated or synthesised using artificial intelligence, which have become relatively straightforward to produce. Although Facebook claims that such videos remain relatively rare, the company also acknowledges that they present a significant challenge for the social media industry.
With this in mind, Facebook has outlined its approach to addressing deepfakes and other forms of manipulated media. Facebook will investigate AI-generated content, and attempt to remove deceptive behaviours, such as fake accounts, from the platform.
Additionally, the company has signalled its intention to partner with academic, government, and industry bodies in order to expose those behind such deepfake efforts. At the time of writing there are few details of how this will work, but clearly Facebook is looking to enlist experts in the medium to weed out this deceptive practice.
But critics of Facebook are unlikely to be particularly impressed by this latest initiative. Firstly, it is notable that Facebook has stopped short of dealing with misinformation directly. And, secondly, many people would like to see Facebook address the morass of often poorly founded political advertising that currently plagues its platform.
This is certainly something that Facebook may have to address in the future, as the Cambridge Analytica scandal in particular has undoubtedly damaged the public image of the company. However, this is a tricky conundrum for Facebook to wrestle with, as political advertising income is massively lucrative for the company.
The Facebook policy also allows misleading videos made with less advanced manipulation methods to continue to be distributed on the platform. And videos of this nature are generally regarded as having had a far more significant impact on political discourse, and ultimately on voting intentions.
Facebook has also confirmed that parody and satire are excluded from the new regulations, as is any video edited solely to omit words or change their order. These are forms of manipulation that many would like to see outlawed, or at least addressed in regulatory terms.
Manipulation still allowed
To give one example, the White House infamously shared a video in November 2018 of CNN reporter Jim Acosta apparently chopping his hand down on a White House intern’s arm. But it was later conceded that the footage had been deliberately accelerated in order to make Acosta appear more aggressive than he actually was, while his apologetic statement of “pardon me, ma’am” was also intentionally removed from the footage.
Considering that it would still be perfectly possible for such a video to be legitimately posted and shared on Facebook, it is clear that the social media giant still has some way to go to properly address manipulation.