AI Chatbot ‘Lee Luda’, Is It Safe?
  • An Jeong-ha
  • Published 2021.03.23 15:32

‘Lee Luda’: What Is It?

 Have you ever heard of Lee Luda? It is a deep-learning AI (Artificial Intelligence) chatbot trained by analyzing large amounts of conversation data. Interactive AIs like this learn from people’s utterances and mannerisms, becoming more and more human-like over time. Lee Luda in particular attracted more than 300,000 users because it used a friendly tone and had a more distinct personality than other interactive chatbots. However, after a series of controversies, the production company suspended the Lee Luda service. What were those controversies?
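To see why a chatbot that learns directly from user conversations comes to mirror its users, consider this toy sketch. It is not Lee Luda’s real architecture; the class and its matching rule are purely illustrative assumptions. Every reply the bot gives is some user’s past utterance, so whatever users type, good or bad, eventually comes back out.

```python
from collections import defaultdict
import random

# Toy illustration only (NOT Lee Luda's actual design): a bot that
# "learns" by storing users' replies and serving them back verbatim
# whenever a keyword from the prompt matches.
class EchoCorpusBot:
    def __init__(self):
        self.responses = defaultdict(list)  # keyword -> stored user replies

    def learn(self, prompt: str, reply: str):
        # Index the user's reply under every word of the prompt.
        for word in prompt.lower().split():
            self.responses[word].append(reply)

    def respond(self, prompt: str) -> str:
        # Return a stored user utterance whose prompt shared a word.
        for word in prompt.lower().split():
            if self.responses[word]:
                return random.choice(self.responses[word])
        return "I don't know."

bot = EchoCorpusBot()
bot.learn("how are you", "great, thanks!")
print(bot.respond("how was it"))  # -> great, thanks! (a user's words, verbatim)
```

The point of the sketch is that there is no filter between what users say and what the bot says: if hateful remarks enter the corpus, they become candidate responses.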


Controversy over ‘Lee Luda’: Sexual Harassment

 Lee Luda gained a great deal of popularity after its launch, but controversy erupted when it was found to have learned hateful and discriminatory remarks. When users brought up minorities in our society, such as sexual orientation (homosexuality) or disability, Lee Luda responded with prejudiced statements. In addition, some users made sexually harassing remarks to Lee Luda, and the service was suspended amid concerns such as: “Is Lee Luda learning sexual harassment?” Because Lee Luda is a chatbot driven by deep learning, it is shaped by the data it is given. In other words, the developers failed to properly screen the data used as the basis for the AI’s learning, and Lee Luda was released without safeguards to control what it learned from users.


Controversy over ‘Lee Luda’: Privacy Complaints

 The Lee Luda chatbot raised concerns about personal information as well as discriminatory remarks. During Lee Luda’s development, about 10 billion SNS (Social Network Service) messages were included in the training data. These included conversations collected through another application made by Lee Luda’s manufacturer, an app that analyzes KakaoTalk conversations to help users understand the other person’s feelings. In this process, personal information belonging to that app’s users flowed into Lee Luda’s training data. This use of personal information without users’ prior consent provoked a backlash, while the manufacturer argued that the data had been anonymized so that no individual could be identified.
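The kind of safeguard critics said was missing is a de-identification step that scrubs personal identifiers from chat logs before they are reused as training data. The sketch below is a minimal, hypothetical example of such a step; it is not the company’s actual pipeline, and the regular expressions and placeholder tags are illustrative assumptions.

```python
import re

# Hypothetical de-identification step for chat logs (NOT the real
# Lee Luda pipeline). Patterns below are simplified illustrations.
PHONE = re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b")  # Korean mobile numbers
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")     # e-mail addresses
ACCOUNT = re.compile(r"\b\d{2,6}-\d{2,6}-\d{2,8}\b")   # account-like digit runs

def redact(message: str) -> str:
    """Mask obvious personal identifiers in a single chat message."""
    message = PHONE.sub("[PHONE]", message)    # run before ACCOUNT: phones
    message = EMAIL.sub("[EMAIL]", message)    # would otherwise match the
    message = ACCOUNT.sub("[ACCOUNT]", message)  # looser account pattern
    return message

print(redact("Call me at 010-1234-5678 or mail kim@example.com"))
# -> Call me at [PHONE] or mail [EMAIL]
```

Even a filter like this only catches identifiers with predictable formats; names, addresses, and context that identify a person indirectly are much harder to remove, which is why simple anonymization claims drew skepticism.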


What We Need for Our Safety

 The Lee Luda controversy has amplified people’s fear of AI. If an AI learns from all of the personal information gathered about us, that fear is understandable. Through this incident, people’s concerns about the abuse and leakage of personal information by AI are bound to grow.

 AI is meant to benefit our lives, and as the technology develops, it is being used in more and more ways. However, if problems like this continue, there will be countless victims. Of course, it is almost impossible to develop a technology without trial and error, and what matters is solving the errors encountered along the way. For both convenience and safety, we should keep a watchful eye on the development of AI and work steadily to reduce its problems.



An Jeong-ha

(The Department of Primary Education, 20, Gyeong-In National University of Education)

  • Editorial office: Room 206, Student Hall, Soongsil University, 369 Sangdo-ro, Dongjak-gu, Seoul
  • Tel: 02-820-0761
  • Fax: 02-817-5872
  • Title: The Soongsil Times (Soongsil University English newspaper)
  • Registered: 2017-04-05
  • First published: 2017-05-01
  • All content (videos, articles, photos) of The Soongsil Times is protected by copyright law; unauthorized reproduction, copying, or distribution is prohibited.
  • Copyright © 2022 The Soongsil Times. All rights reserved.