‘Lee Luda’: What is it?
Have you ever heard of Lee Luda? It is a deep-learning AI (Artificial Intelligence) chatbot built through program analysis and self-learning from data. Interactive AIs like this learn from people’s utterances and mannerisms, becoming more and more human-like. Lee Luda in particular attracted more than 300,000 users because it spoke in a friendly tone and had a more distinctive personality than other interactive chatbots. However, due to the controversy surrounding Lee Luda, the production company has suspended the service. What kind of controversy was there?
Controversy over ‘Lee Luda’: sexual harassment
Lee Luda gained a lot of popularity after its launch, but controversy erupted when it was discovered to have learned hateful and discriminatory remarks. When prompted with certain words about minorities in our society, such as sexual orientation (homosexuality) and disability, Lee Luda responded with prejudice. In addition, some users made sexually harassing remarks to Lee Luda, and the service was suspended amid concerns such as, “Is Lee Luda learning sexual harassment?” Because Lee Luda is a chatbot driven by deep learning, it is shaped by the data its manufacturer provides. In other words, the developers failed to properly vet the data that formed the basis of the AI’s learning, and the problem occurred because Lee Luda was released without safeguards and countermeasures to control what it learned.
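The kind of safeguard the paragraph above describes can be illustrated with a minimal sketch: screening training utterances against a blocklist of sensitive or hateful terms before they are used for learning. The blocklist and function names here are hypothetical illustrations, not Lee Luda’s actual (undisclosed) pipeline.

```python
# Hypothetical sketch: filter chatbot training data before learning.
# The blocklist entries are placeholders; a real system would need far
# more sophisticated, context-aware moderation than word matching.

BLOCKLIST = {"slur_a", "slur_b"}  # placeholder sensitive terms

def is_safe(utterance: str) -> bool:
    """Return False if the utterance contains a blocklisted term."""
    words = utterance.lower().split()
    return not any(term in words for term in BLOCKLIST)

def filter_training_data(conversations):
    """Keep only utterances that pass the safety screen."""
    return [u for u in conversations if is_safe(u)]

data = ["hello there", "you are a slur_a", "nice weather"]
print(filter_training_data(data))  # -> ['hello there', 'nice weather']
```

Even this trivial filter shows why vetting must happen before release: once biased examples enter the training set, the model reproduces them.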
Controversy over ‘Lee Luda’: privacy complaints
The Lee Luda chatbot raised concerns about personal information as well as discriminatory remarks. During Lee Luda’s development, some 10 billion SNS (Social Network Service) conversation messages were included in the training data. These included conversations collected from another application made by Lee Luda’s manufacturer, an app that helps users understand the other person’s psychology through their KakaoTalk conversations. Through deep learning on this data, Lee Luda absorbed personal information belonging to the app’s users. This infringement of personal information, carried out without the users’ prior consent, led to user resistance, while the manufacturer argued that anonymization prevented any individual from being identified.
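To make the anonymization claim concrete, here is a minimal, hypothetical sketch of redacting obvious identifiers (phone numbers and email addresses) from chat logs before training. Real de-identification is far harder: names, addresses, and conversational context can still re-identify a person, which is exactly why the “anonymized” data remained controversial.

```python
import re

# Hypothetical sketch: strip obvious personal identifiers from chat
# messages before using them as training data. The patterns below are
# illustrative and would not catch names or contextual identifiers.

PHONE = re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b")  # Korean-style phone numbers
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(message: str) -> str:
    """Replace phone numbers and email addresses with placeholders."""
    message = PHONE.sub("[PHONE]", message)
    return EMAIL.sub("[EMAIL]", message)

print(redact("Call me at 010-1234-5678 or mail kim@example.com"))
# -> Call me at [PHONE] or mail [EMAIL]
```

Pattern-based redaction of this kind is only a first step; it cannot guarantee that an individual is unidentifiable from what remains.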
What we need for our safety
The Lee Luda controversy has amplified people’s fear of AI. If an AI learns from all of the personal information it gathers, that fear will only deepen. Through this incident, people’s concerns about the abuse and leakage of personal information by AI are bound to grow.
AI aims to benefit our lives, and as the technology develops, it is being used in more and more ways. However, if problems like this continue, there will be countless victims. Of course, it is extremely difficult to develop any technology without trial and error, and I think it is important to fix the errors encountered along the way. For both convenience and safety, we should remain watchful of AI development and make steady efforts to reduce these problems.
An Jeong-ha
(The Department of Primary Education, 20, Gyeong-In National University of Education)