
How does the rise of deepfake AI contribute to cybersecurity risk?


In April 2018, comedian and director Jordan Peele used Barack Obama as the subject of a PSA he released. “Barack Obama” voiced his opinion about Black Panther and directed an insult at President Donald Trump. This is an example of how deepfake AI propagates disinformation, which can sow confusion among the public.

The term “deepfake” combines “deep learning” and “fake.” It is an AI technology used to create fake videos and audio that look and sound strikingly realistic. It originated in 2017 in a Reddit community, where users employed the technology to swap celebrities’ faces onto other characters. Its ease of use and accessibility have made deepfakes increasingly hard to detect and a growing threat to cybersecurity.

In recent years, the tools have become readily available on GitHub, where anyone can experiment and create their own deepfakes using code hosted on the repository service. The results may still look out of place, with awkward facial expressions and reaction lag, but with more iterations a deepfake can be refined until it fools both viewers and AI-based detection.
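To illustrate what such AI-based detection might look like in practice, here is a minimal sketch of a frame-level screening script in Python. It assumes the PyTorch, torchvision, and OpenCV libraries, and a binary real-vs-fake classifier that would need to be fine-tuned elsewhere on a labelled deepfake dataset; the ResNet-18 weights below are untrained placeholders and the fake_probability helper is purely illustrative, not a tool referenced in this article.

import cv2
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Standard ImageNet-style preprocessing applied to each sampled frame.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-18 backbone with a 2-class head (real vs. fake).
# In a real detector this head would be fine-tuned on labelled deepfake data;
# here the weights are untrained placeholders for illustration only.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

def fake_probability(video_path: str, every_n_frames: int = 30) -> float:
    """Average the 'fake' class probability over sampled frames of a video."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = F.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())  # index 1 = "fake" class
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

Even a detector like this can be beaten: as noted above, each new iteration of a deepfake can be tuned against the classifier’s output until the averaged score no longer flags it.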

Its impact on the economy

One reason for the rise of deepfakes may be the reduced hassle involved in compromising a system. To put it simply, cybercriminals no longer need powerful hacking skills to attack an organization: a single piece of fake information can damage its standing. In 2013, more than US$130 billion in stock value was briefly wiped out when a fake tweet claimed that explosions at the White House had injured then-US president Barack Obama. This shows that by spreading inaccurate information or data, the market can easily be manipulated, destabilizing an organization’s financial health and even undermining its ability to secure investors.

Using deepfakes for political agendas

Politically, deepfakes pose a danger to voters, as fake videos and audio may shift election results. As the phrase goes, “seeing is believing”: voters tend to trust whatever is publicized on the network. When information can be distorted and misused, attackers can exploit this weakness to portray a specific impression of a candidate. One prominent example is fraudsters using AI to mimic a chief executive’s voice in an unusual cybercrime. The CEO of a U.K.-based energy firm thought he was speaking on the phone with his boss and subsequently transferred €220,000 (approx. S$355,237) to the bank account of a Hungarian supplier. Only after closer scrutiny did the CEO recognize that the call had been made from an Austrian phone number. Because the voice carried his boss’s subtle German accent and manner of speech, the CEO detected nothing suspicious. This shows how deepfakes can seamlessly imitate an authority figure and manipulate people into dangerous and unethical actions.

Advancements in technology have brought about many changes, many of them solutions and assets for a better-connected world. Deepfake technology can also be put to good use, such as recreating someone important for a memorial service or as a way of paying respect. It is similar to holographic technology, which projects a 3D image that looks real from any angle. Unfortunately, many cybercriminals choose to use it in ways that threaten the community. As ideas and activities become more interconnected, will we adopt a “zero trust” policy to safeguard our interests? And in a “zero trust” world, how can we remain interconnected?
