OPINION: AI is fueling deepfake porn crisis in Korea. What's behind it – and how can it be fixed?

By Sungshin (Luna) Bae Posted : September 25, 2024, 08:11 Updated : September 25, 2024, 08:13
Activists wearing eye masks hold posters reading "Repeated deepfake sex crimes, the state is an accomplice too?" during a protest against deepfake porn in Seoul on Aug. 30, 2024. AFP-Yonhap
SEOUL, September 25 (AJP) - It's difficult to talk about artificial intelligence without talking about deepfake porn – a harmful AI byproduct that has been used to target everyone from Taylor Swift to Australian school girls.

But a recent report from startup Security Heroes found that out of 95,820 deepfake porn videos analyzed from different sources, 53 percent featured Korean singers and actresses – suggesting this group is disproportionately targeted.

So, what's behind Korea's deepfake problem? And what can be done about it?

Teenagers and minors among victims

Deepfakes are digitally manipulated photos, video or audio files that convincingly depict someone saying or doing things they never did. Among Korean teenagers, creating deepfakes has become so common that some even view it as a prank. And they don't just target celebrities.

On Telegram, group chats have been made for the specific purpose of engaging in image-based sexual abuse of women, including middle-school and high-school students, teachers and family members. Women who have their pictures on social media platforms such as KakaoTalk, Instagram and Facebook are also frequently targeted.

The perpetrators use AI bots to generate the fake imagery, which is then sold and/or indiscriminately disseminated, along with victims’ social media accounts, phone numbers and KakaoTalk usernames. One Telegram group attracted some 220,000 members, according to a Guardian report.

A lack of awareness

Despite gender-based violence causing significant harm to victims in Korea, there remains a lack of awareness on the issue.

Korea has experienced rapid technological growth in recent decades. It ranks first in the world in smartphone ownership and is frequently cited as having the highest internet connectivity. Many jobs, including those in restaurants, manufacturing and public transport, are being rapidly replaced by robots and AI.

But as Human Rights Watch points out, the country's progress in gender equality and other human rights measures has not kept pace with digital advancement. And research has shown that technological progress can actually exacerbate issues of gender-based violence.

Since 2019, digital sex crimes against children and adolescents in Korea have been a huge issue – particularly due to the "Nth Room" case. This case involved hundreds of young victims (many of whom were minors) and around 260,000 participants engaged in sharing exploitative and coercive intimate content.

The case triggered widespread outrage and calls for stronger protection. It even led to the establishment of stronger conditions in the Act on Special Cases Concerning the Punishment of Sexual Crimes 2020. But despite this, the Supreme Prosecutors' Office said only 28 percent of the total 17,495 digital sex offenders caught in 2021 were indicted — highlighting the ongoing challenges in effectively addressing digital sex crimes.

In 2020, the Ministry of Justice's Digital Sexual Crimes Task Force proposed about 60 legal provisions, which have still not been accepted. The team was disbanded shortly after the inauguration of President Yoon Suk Yeol's government in 2022.

During the 2022 presidential race, Yoon said "there is no structural gender discrimination" in Korea and pledged to abolish the Ministry of Gender Equality and Family, the main ministry responsible for preventing gender-based violence. The ministerial post has remained vacant since February of this year.

Can technology also be the solution?

But AI isn't always harmful – and Korea provides proof of this too. In 2022, a digital sex crime support center run by the Seoul metropolitan government developed a tool that can automatically track, monitor and delete deepfake images and videos around the clock.

The technology – which won the 2024 UN Public Administration Prize – has helped reduce the time taken to find deepfakes from an average of two hours to three minutes. But while such attempts can help reduce further harm from deepfakes, they are unlikely to be an exhaustive solution, as effects on victims can be persistent.

For meaningful change, the government needs to hold service providers such as social media platforms and messaging apps accountable for ensuring user safety.

Unified efforts

On Aug. 30, the Korean government announced plans to push for legislation to criminalize the possession, purchase and viewing of deepfakes in Korea.

However, investigations and trials may continue to fall short until deepfakes in Korea are recognized as a harmful form of gender-based violence. A multifaceted approach will be needed to address the deepfake problem, including stronger laws, reform and education.

Korean authorities must also help to enhance public awareness of gender-based violence, and focus not only on supporting victims, but on developing proactive policies and educational programs to prevent violence in the first place.

-------------------------------------------------------------------------------------------------------------------------

Sungshin (Luna) Bae is a PhD student at Monash University in Australia.

This article was republished under a Creative Commons license with The Conversation. The views and opinions in this article are solely those of the author.

https://theconversation.com/ai-is-fuelling-a-deepfake-porn-crisis-in-south-korea-whats-behind-it-and-how-can-it-be-fixed-238217

Copyright ⓒ Aju Press All rights reserved.
