On the other hand, there is a risk that with the rise of deepfakes, those accused of misconduct could discredit legitimate photos and videos by alleging that they are manipulated.
This presents real challenges. If a whistleblower surfaces evidence of corporate wrongdoing, for instance, the company in question could simply claim the content is fake. Public uncertainty over what is genuine could erode trust and fuel scepticism, even cynicism, about information online.
PREVENTATIVE MEASURES
Advances in AI will make deepfakes harder to identify, making them even more potent tools for malicious use. Greater public understanding of AI capabilities and of the dangers of deepfake sextortion will go a long way towards prevention.
With so much of our lives online, there is an abundance of content for malicious actors to exploit. We can be more cautious about what we post and restrict our social media privacy settings to trusted friends and people we know. Reporting any sextortion attempts or activity to the police and the relevant social media platforms is also a good first step.
In discerning whether something we see online is real, we can try to ascertain the motivation behind its creation and dissemination. One of the best strategies is to question content that elicits a strong emotional reaction.
As deepfake technology evolves and malicious actors adapt, it is crucial that we stay updated on the latest developments and remain vigilant to such online threats.
Dymples Leong is an Associate Research Fellow with the Centre of Excellence for National Security (CENS) at the S Rajaratnam School of International Studies (RSIS), Nanyang Technological University, Singapore.