An open letter has just been sent to the Korea Advanced Institute of Science and Technology (KAIST) by 57 experts from 30 countries.
The letter stated that the experts would no longer visit or collaborate with the university until KAIST terminated its cooperation with the weapons company Hanwha Systems.
According to reports received by the experts, the collaboration aims to develop military weapons equipped with artificial intelligence.
The move is seen as running counter to United Nations efforts to anticipate the threat of autonomous weapons, and critics fear it could bring closer a scenario like that of The Terminator, in which the artificial intelligence Skynet turns against humans in an attempt to protect itself.
"We deeply regret how such a prestigious institution like KAIST is trying to accelerate competition in developing such weapons," the experts said in the letter.
They went on to say that autonomous weapons have the potential to become weapons of terror: dictators and terrorists could use them against innocent populations, removing all ethical boundaries.
"If it's been opened, this Pandora's box will be hard to close again," they warned.
Lucas Apa, a security consultant with the cybersecurity firm IOActive, echoed the experts' concerns. As reported by The Independent on Thursday (5/4/2018), he cited the case of a robot malfunction that caused the death of a factory worker in 2016.
"Like any other technology, we find that robotic technology is not safe from many aspects. It is very worrying if we have moved towards the offensive military capability when the security of this system is still lacking. If robotic ecosystems are still vulnerable to hacking, robots can even hurt us, "he said.
In response, KAIST president Sung Chul Shin said the concerns stem from a misunderstanding.
"Once again I affirm that KAIST will not conduct research activities contrary to human dignity, including autonomous weapons that lack human control," he said.