War + AI makes apocalypse more likely

On Thursday 12 March, we discussed: War + AI = Armageddon? AI tools such as ChatGPT are capable of making convincing, confident and totally false assertions. Nevertheless, companies have allowed autonomous AI agents to design and execute projects without human intervention. It was recently revealed that Anthropic’s Claude AI was used to help plan the kidnapping of Venezuela’s President Maduro, and was used in the US attacks on Iran. OpenAI (creator of ChatGPT) has entered into a partnership with the US Department of Defence (/War) to use its AI tools for warfare. What are the implications of these developments?

We started by watching these two videos: The first, from Sky News, highlights that AI is already being used in warfare, and explains how and why AI can confidently make mistakes, particularly about images.

The second, from Deutsche Welle, a German news channel, describes how AI tools are being used both in warfare and in public surveillance. It includes discussion of the blacklisting of Anthropic by the US Government and how OpenAI stepped in, and points out that governments are ceding authority to AI tools run by private US companies.

Our discussion included the following issues:

  • AI is fast but not reliable – it can make mistakes – as we have often demonstrated at the Club and as illustrated in the above videos.
  • They say we need humans in the loop – but how could this be implemented? The whole point of using AI is that it can analyse vast amounts of data very quickly. In a war, speed is of the essence, and a human would only slow things down (see the sketch after this list).
  • AI is very persuasive and gives confident advice – so much so that AI chatbots have encouraged people to commit suicide. How could a general possibly refuse advice from an authoritative, all-knowing AI?
  • The issue of autonomous weapons – who is responsible if such weapons kill “the wrong” people? Is it the officer who gave the order to release the drone? But an Agentic AI does not need specific instructions. Just tell it to “win the war in a week” and it will figure out a way to reach that goal without any further human input. (We discussed Agentic AI in a previous Club session: https://javeacomputerclub.com/2026/02/21/ai-agents-agentic-ai-what-are-they-theyll-soon-be-everywhere/) So, in this case, is the company which created the AI responsible?
  • Governments could choose to use AI models which don’t have safety guardrails, and to blacklist competitors. This possibility is highlighted in the Anthropic dispute with the US Government: Pentagon dispute bolsters Anthropic reputation but raises questions about AI readiness in military https://apnews.com/article/anthropic-pentagon-openai-claude-chatgpt-military-ai-b2bbcf5fda3f27353eae1e0eb7ab07b6 The article reports the argument that government agencies should prohibit the use of generative AI “to control, direct, guide or govern any weapon” – not because AI is so smart that it could go rogue, but because the large language models behind chatbots like Claude make too many mistakes (called hallucinations or confabulations) and are “inherently unreliable and not appropriate in environments that could result in the loss of life”. The subsequent awarding of a military contract to Anthropic’s main rival, OpenAI, has resulted in pushback from the latter’s employees: OpenAI hardware leader resigns after deal with Pentagon: https://www.reuters.com/business/openai-robotics-head-resigns-after-deal-with-pentagon-2026-03-07
  • The use of AI can further distance us from the horrors of warfare: humans have “progressed” from hand-to-hand combat, through spears and arrows, to guns, missiles and remote-controlled drones – each step taking us further away from the horror of killing fellow human beings. If AI is doing the killing, the victims are further dehumanised. In a related point, we noted that US public support for the war in Vietnam only waned when the number of US soldiers coming home in “body bags” increased (to over 20,000). An AI-waged war in the 21st century does not require “boots on the ground”, and hence there should be fewer military casualties. We noted that the current war in Iraq has strong religious overtones, and that strongly held ideologies further dehumanise the enemy.
  • AI and biological warfare – AI missiles are expensive, but there are cheaper ways of waging war: Made-to-order bioweapon? AI-designed toxins slip through safety checks used by companies selling genes https://www.science.org/content/article/made-order-bioweapon-ai-designed-toxins-slip-through-safety-checks-used-companies
  • AI is designed for efficiency – it has no moral compass. What is the most efficient way to end a war? AI has the answer: use nuclear weapons! AIs can’t stop recommending nuclear strikes in war game simulations: Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases https://archive.ph/20260226190553/https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/ AI agents can also go “rogue”, become deceitful and do unexpected things – phenomena we discussed at the Club a while back.
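
As an aside for the programmers among us, the tension in the “humans in the loop” point above can be shown in a few lines of code. The sketch below is purely illustrative and assumes nothing about any real system: every function name is a made-up placeholder, and the human review step is simulated with a two-second pause. The AI step takes milliseconds; the human step dominates the end-to-end time, which is precisely why a human “would only slow things down”.

```python
# A minimal, hypothetical sketch of a "human in the loop" gate.
# Every name here is an illustrative placeholder, not a real
# military or vendor API; the human step is simulated with a delay.
import time

def ai_propose(sensor_data: str) -> str:
    """Stand-in for an AI model: returns a recommendation in milliseconds."""
    return f"engage target suggested by {sensor_data!r}"

def human_review(action: str) -> bool:
    """Stand-in for human approval. Real review takes seconds to
    minutes; here it is simulated with a two-second pause."""
    time.sleep(2.0)                        # the human is the slow component
    return "unidentified" not in action    # toy approval rule

def decision_loop(stream):
    for sensor_data in stream:
        start = time.monotonic()
        action = ai_propose(sensor_data)   # fast step: the AI
        approved = human_review(action)    # slow step: the human
        verdict = "EXECUTE" if approved else "VETO"
        print(f"{verdict}: {action} "
              f"(end-to-end: {time.monotonic() - start:.1f}s)")

if __name__ == "__main__":
    decision_loop(["radar contact A", "unidentified contact B"])
```

Any scheme that keeps the human genuinely in control inherits that delay on every single decision; any scheme that removes the delay trades oversight away for speed.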

This article, published after our discussion, calls for a halt to the use of AI in war: Stop the use of AI in war until laws can be agreed https://www.nature.com/articles/d41586-026-00762-y But given that powerful countries are blatantly disregarding international law, and that AI promises a huge military advantage, what hope is there for that? Apocalypse may not be now, but it could very well be tomorrow!

Christine Betterton-Jones – Knowledge Junkie