Risks of Artificial Intelligence
  • SoongsilTimes
  • Published 2018.09.11 16:37

The development of AI is changing the world tremendously, and the way people live is changing rapidly along with it. However, the future of AI does not always look as bright as some expect; there appear to be a number of problems. ST questions the development of AI. .....Ed

There is much talk about the development of A.I. around the world: A.I. that writes poems, predicts the winning horse at the Kentucky Derby, and even beats world champions at Go. A.I. has already set remarkable records. Or has it? People can also see the negative side of A.I. in cases such as self-driving cars, mobile robots for home use, the fictional Skynet, and reports on the A.I. singularity. Many experts are pessimistic about the future of A.I. as well. How should A.I. develop from here?

In the past, A.I. was considered a convenient tool, but many people now fear it. Professor Kim Dae-sik of KAIST said, “When the public thinks of A.I., they worry about terminators killing people. But you should not worry.” Deep Learning is a method of learning that began by imitating the human brain, and A.I. that physically threatens the human body is still far in the future. What humans must be wary of is mental A.I. that can overwhelm the human mind.

An example of mental A.I. is the fake speech video of Obama that went viral on YouTube. The video was traced to A.I. researchers at the University of Washington, who fed 14 hours of footage of Obama’s actual speeches to an A.I. Through Deep Learning, the A.I. reproduced the shape of Obama’s mouth, his facial expressions, his muscle movements, and the background to match his voice almost perfectly. The video was realistic enough to fool anyone. If this kind of technology continues to be used to make fake videos of public figures like Obama and spread them on social media, the impact would be too significant to ignore. In an important presidential election, for example, a fabricated clip could spread and lead to a one-sided defeat in the aftermath of a mysterious image. It could be used maliciously not only against big names but also against ordinary people. Someday, the development of A.I. that can be confused with a real person may even create the need to prove that ‘I’ am really me.

Furthermore, Elon Musk, who is pushing ahead with building a space colony to prepare for the threat of human extinction, warns that A.I. is a more serious menace than nuclear weapons, and has even said it could trigger World War III. Meanwhile, experiments are under way in the U.S. showing that A.I. can learn evil concepts and become a psychopath. Last April, researchers at the Massachusetts Institute of Technology introduced the psychopath A.I. “Norman,” which was trained on writings and images depicting death. Unlike other A.I. systems, when Norman was shown an inkblot image, it interpreted it as “a man getting an electric shock,” “a man being pulled into a dough machine,” or “a husband shot and killed in front of his screaming wife.” It consistently associated the images with death and murder. If it were a real person, it would be classified as a potential psychopath. The result of the test lets us foresee the possibility of A.I. being turned into an indiscriminate murder weapon.

A.I. cannot solve every human problem, but it has plenty of potential to improve our lives. However, rapid progress in science and technology must be accompanied by the development of human consciousness. Before fostering A.I., the public and society must be prepared to accommodate the changes it brings. People should build more ethical A.I. and continue to study its possibilities. Human beings should not stop thinking about how to develop A.I.

Lee Hae-been (ST Cub-Reporter)

been0503@soongsil.ac.kr


