Ex-Google Executive Warns Of Secret Killer Robots
Laura Nolan, a former senior Google engineer, fears that killer robots could be unleashed on the public. An AI peace movement took shape last year when Google employees learned that their employer was providing technology to the US Air Force to classify drone imagery. The workers feared this could be a fateful step toward automating deadly drone strikes. In response, the company abandoned Project Maven and created an AI code of ethics. Heavyweights from science and industry have since backed a campaign to ban the use of autonomous weapons.
Nolan, who now campaigns with Stop Killer Robots, has spoken before the UN on several occasions, pleading for action.
According to The Guardian, Nolan said that killer robots not remotely controlled by humans should be outlawed by the same kind of international treaty that prohibits chemical weapons. There is no indication that Google is involved in developing autonomous weapon systems. Last month, a United Nations panel of experts discussing autonomous weapons cited Google's pledge not to use AI for weapon systems as an example of good practice.
If one of the technologically leading countries sets a precedent by fielding autonomous weapon systems, the consequences will be hard to contain. Implementing autonomy, which is primarily a software problem, on top of the dynamic global ecosystem of unmanned vehicles of various shapes and sizes is a technical challenge, but a feasible one for state and non-state actors alike, especially because much of the hardware and software is dual-use. An unchecked autonomous arms race, and the spread of autonomous killing capabilities to extremist groups, would clearly undermine international peace, stability, and security.
The Campaign to Stop Killer Robots is a coalition of more than 61 groups in 26 countries, coordinated by Human Rights Watch. Its members include Amnesty International, the British group Article 36, and the International Committee for Robot Arms Control, a small network of experts and professionals with recognized academic and practical expertise in AI, robotics, and arms control. However, the military relevance of autonomous weapons is far greater than that of blinding lasers, so that earlier, successful ban is a relevant precedent only to a limited extent. Owing to massive commercial interest, autonomous robotics is currently being researched in countless university labs and in companies large and small.
Finally, automatic systems that target people, or that automatically fire back at the source of incoming munitions, already raise issues relevant to the autonomy debate. That debate thus touches existing automatic defense systems but is not primarily concerned with them. Depending on how the CCW ultimately defines autonomous weapon systems, it may make sense to exempt such systems from regulation, or from a possible preventive ban, if their sole purpose is to protect human life by targeting incoming munitions only.
In South Korea and Israel, several types of stationary sentry guns are deployed that can fire on people and vehicles. Many missile defense systems, such as Iron Dome, also have autonomous targeting functions; they have generally been used to protect personnel and equipment from incoming projectiles.