The moral and legal dilemmas of using autonomous systems in armed conflict are of the utmost importance and a worthy subject for the World Economic Forum, which took place last weekend in Davos, Switzerland.
Is it morally justifiable to leave life-and-death decisions to a machine? Can soldiers and commanders meet the complex demands of moral and legal conduct in war when using tele-operated systems in battle, and, furthermore, will these machines ever become capable of complying with those rules themselves? Can we claim, as some international tribunals assert, that killer robots are weapons of mass destruction and as such should be banned from military use?
Should we perhaps take a different position and maintain that the development of armed robots is a positive one? If so, one could argue that civilian loss of life would be diminished because an autonomous system can be far more capable than a human soldier on the battlefield.
Read what Stuart Russell, a professor at the University of California, Berkeley, has to say on the subject in a recent article for the World Economic Forum: "Robots in war: the next weapons of mass destruction?"
You may also want to read about IIIM's ethics policy, implemented last year, in which the institute takes a strong stance against the development and use of systems with the capacity to take a life.