There is growing concern over whether AI should be used in warfare.
Perhaps it would be too efficient at killing. What if it were to get out of hand? AI, after all, cannot make merciful decisions. It would be programmed to achieve outcomes, and in the chaos of war, that may mean erring on the side of destruction to fulfil its objectives.
On 14 May, the European Hospital in Gaza was bombed by Israel, reportedly killing at least 80 people in the surrounding area.
This feels wrong on so many levels.
A hospital was bombed. The people in and around it were likely visiting patients, receiving treatment themselves, or seeking refuge. CCTV footage of one particular strike appears to show it timed to maximise human casualties: women and children are seen passing through the area before the bomb lands precisely there. No discernible effort seems to have been made to minimise the loss of life, particularly innocent life.
Hospitals are explicitly protected under the Fourth Geneva Convention. Even in the rare cases where a hospital may be used for military purposes, strict conditions apply: warnings must be issued, proportionality must be observed, and every effort must be made to limit harm to civilians.
No substantial explanation has been offered by Israel for this particular strike. The one piece of information shared, a map indicating a nearby school as the intended target, was later shown to be inaccurate (BBC report). Some reports suggest the intended target was a single man, a Hamas commander (Times of Israel). If true, then dozens of lives were taken in pursuit of one, suggesting a profoundly disproportionate use of force.
It is difficult to watch such attacks and not wonder about the decision-making behind them. The apparent timing, the choice of target, and the scale of death all suggest a degree of clinical detachment, or worse, a wilful disregard for human life. Could an AI behave more inhumanely than this?
Or would it, in fact, be more humane?
If AI were trained on the Geneva Conventions and other international humanitarian principles, might it not make better decisions: more consistent, more restrained, more focused on protecting life? Could AI in warfare, far from being feared, actually enforce a higher moral standard than the humans who currently direct these operations?
If its programming prioritised the minimisation of civilian harm, wouldn’t that be an improvement on the brutal, indiscriminate destruction we currently witness from powerful nations who act with impunity?
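To make the idea concrete, here is a purely illustrative sketch, with entirely hypothetical names, fields, and thresholds, of what a hard, machine-enforced constraint could look like: a strike is simply refused when a protected site is involved without the warning the Conventions require, or when the expected civilian harm is not clearly outweighed by the assessed military value. This is not a description of any real system, only a way of picturing what "programming that prioritises the minimisation of civilian harm" might mean in practice.

```python
# Purely illustrative sketch: a hard, machine-enforced targeting constraint.
# Every name, field, and threshold here is hypothetical, not a real system.

from dataclasses import dataclass


@dataclass
class ProposedStrike:
    military_value: float             # assessed value of the objective (0 to 1)
    expected_civilian_casualties: int  # best-estimate civilian harm
    protected_site_nearby: bool        # hospital, school, shelter, etc.
    advance_warning_issued: bool       # warning required for protected sites


def authorise(strike: ProposedStrike, max_casualties_per_value: float) -> bool:
    """Return True only if the strike passes the hard constraints.

    Unlike a human chain of command under pressure, the rule cannot be
    waived in the moment: protected sites without a prior warning are
    refused outright, and expected civilian harm must be justified by a
    correspondingly high military value.
    """
    # Rule 1: protected sites require an advance warning, no exceptions.
    if strike.protected_site_nearby and not strike.advance_warning_issued:
        return False
    # Rule 2: proportionality. Expected civilian casualties per unit of
    # assessed military value must not exceed the configured ceiling.
    if strike.military_value <= 0:
        return False
    ratio = strike.expected_civilian_casualties / strike.military_value
    return ratio <= max_casualties_per_value


# Example: one commander as the target, dozens of expected civilian deaths,
# a hospital nearby, no warning issued. The rule refuses the strike.
print(authorise(ProposedStrike(0.8, 80, True, False), max_casualties_per_value=1.0))  # False
```

The point of the sketch is not the particular numbers, which are arbitrary, but that such a rule is applied the same way every time, regardless of pressure or anger in the moment.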
Because what is becoming increasingly clear is that the institutions established to uphold international law, including the UN and the ICC, appear impotent in the face of overwhelming military and economic power. States like Israel, possessing disproportionate capabilities, act with a level of freedom that seems untouched by global accountability.
Ultimately, no matter how many casualties are inflicted, ideas cannot be defeated militarily. Resistance will continue until the underlying causes, such as oppression, are addressed and resolved. In the meantime, a state with superior military power may inflict devastating damage before it either realises, or is forced to realise by the international community, that its objectives cannot be achieved through force. If AI were used to compel such nations to temper their military operations in line with international norms, that realisation might come sooner and with far less human suffering on all sides.