
Mr Vinod from Accenture said that another source of demand is the black market.

Researchers from his firm found a 223 per cent increase in deepfake-related tools being traded on dark-web forums between the first quarter of 2023 and first quarter of 2024.

“The financial incentives from creating deepfake content fuel a vicious circle of supply and demand, driving the evolution of these threats,” he added.

Dealing with deepfakes is not as simple as a blanket ban, because the technology has its advantages.

Mr Emil Tan, director and co-founder of Infosec in the City, an international cybersecurity network that runs annual cybersecurity conference SINCON, said that deepfake technology could be used for education and training purposes, such as having an AI-generated tutor that can adapt to a learner’s needs.

“In healthcare and accessibility, deepfake technology helps create synthetic faces and voices for people who have lost the ability to speak,” he added, citing voice banking as an example.

BUSINESSES, FINANCIAL INSTITUTIONS AT RISK

As deepfake technology integrates with other AI advancements such as generative models for text and voice, it could enable “multi-modal” attacks that combine fake visuals, speech and context to create highly convincing fabrications, Asst Prof Lee said.

The Monetary Authority of Singapore (MAS) said that deepfakes pose risks in three areas:

  • Compromising biometric authentication such as facial recognition
  • Facilitating social engineering techniques for phishing and scams
  • Propagating misinformation or disinformation

Several businesses globally have already fallen for scams using deepfakes.

A financial worker in a multi-national corporation in Hong Kong transferred more than US$25 million to scammers after they used deepfake technology to pose as the company’s chief financial officer and other colleagues in a video-conferencing call.

Financial institutions in Hong Kong approved US$25,000 worth of fraudulent loans in August 2023, after a syndicate used eight stolen identity cards to make loan applications and register bank accounts, deploying deepfakes to bypass facial recognition verification.

Mr Steve Wilson, chief product officer at cybersecurity firm Exabeam, said that deepfakes are made more dangerous by the common assumption that “seeing is believing”, which allows them to bypass people’s natural scepticism.

“People trust video; it’s visceral, it’s emotional,” he added. His firm has predicted that video deepfakes would become more pervasive in 2025.

Without a doubt, the financial and banking sectors are the main targets for deepfake scams.

Mr Shanmuga Sunthar Muniandy, director of architecture and chief evangelist for Asia Pacific at data management provider Denodo, said: “The banking sector has been undergoing extensive digital transformation in the past few years, but having so many banking processes online now means that hackers find it easier than ever to impersonate individuals to cheat consumers or businesses out of substantial amounts of money.”

He pointed out that accounting firm Deloitte’s Center for Financial Services had predicted that generative AI could enable fraud losses in the United States to reach US$40 billion by 2027, up from US$12.3 billion in 2023.

Overcoming such fraud would be tough, Mr Wilson said.

“Imagine hopping onto a Zoom call with what looks and sounds exactly like your chief financial officer, asking you to approve an urgent transfer. How many people would hesitate to comply?” he asked, warning that scams will get bigger as deepfakes become easier to generate.

“Attackers won’t stop at one transaction. They’ll orchestrate complex schemes where deepfakes impersonate auditors, executives and even regulators to legitimise their requests.” 

Mr Wilson added that his firm had detected a deepfake candidate during a hiring interview recently.

“The voice sounded like it had walked out of a 1970s Godzilla movie – mechanical, misaligned and, frankly, a little eerie,” he said of the deepfake candidate. “The giveaway was subtle, but in the future, these tricks will be seamless.”

Smaller firms are also under “serious threat”, the Association of Small and Medium Enterprises in Singapore told CNA TODAY.

“The informal nature in which small- and medium-sized enterprises (SMEs) often perform their financial processes makes them particularly susceptible to deepfakes,” it said. 

“For example, bosses may make a WhatsApp video call to their staff members to approve and send money to an unknown third party. Such informal processes make it easy for cybercriminals to impersonate higher management to request unauthorised financial transactions or confidential information, making traditional security measures insufficient.”

The association said that it has thus stepped up efforts to educate SMEs on tackling cybersecurity threats, such as through its Cyber Shield Series articles online and by organising AI Festival Asia next month.

In the meantime, MAS said that it is working with financial institutions to strengthen the resilience of multi-factor authentication measures.

It also released a paper in July this year to raise awareness about how deepfakes and other forms of generative AI can pose a threat, and it publishes information through its national financial education programme MoneySense.
