Saturday, January 27, 2024

There are many reasons why developing lethal autonomous weapons is a bad idea


"The biggest, as I wrote in Nature in 20154, is that “one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless”. The reasoning is illustrated in a 2017 YouTube video advocating arms control, which I released with the Future of Life Institute (see go.nature.com/4ju4zj2). It shows ‘Slaughterbots’ — swarms of cheap microdrones using AI and facial recognition to assassinate political opponents. Because no human supervision is required, one person can launch an almost unlimited number of weapons to wipe out entire populations. Weapons experts concur that anti-personnel swarms should be classified as weapons of mass destruction (see go.nature.com/3yqjx9h). The AI community is almost unanimous in opposing autonomous weapons for this reason.

Moreover, AI systems might be hacked, or accidents could escalate conflict or lower the threshold for wars. And human life would be devalued if robots take life-or-death decisions, raising moral and justice concerns. In March 2019, UN secretary-general António Guterres summed up this case to autonomous-weapons negotiators in Geneva: “Machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law” (see go.nature.com/3yn6pqt). Yet there are still no rules, beyond international humanitarian laws, against manufacturing and selling lethal autonomous weapons of mass destruction." [1]


[1] Russell, S. AI weapons. Nature 614, 620–623 (2023).