Modern Warfare: The Dark Side of AI
AI has not one, but two faces. One face is the one world leaders, such as those gathered at the 2025 G7 Summit, prefer to see and showcase: a bright, optimistic face that symbolizes progress. This face views AI as an engine of economic growth, an enabler of public services, and a revolutionary tool for solving humanity's most complex problems. It is the face that announces multi-billion-dollar funds for energy solutions.
But AI also has a second, darker, and largely overlooked face, one taking shape far from the glittering tables of diplomats, amidst the dust and smoke of battlefields. This face is autonomous, calculating, and lethal. It is the AI that identifies targets, directs drones, and makes life-and-death decisions without human intervention. While G7 leaders in Canada were discussing the civilian benefits of AI, this other AI was quietly and permanently transforming the doctrines of warfare from the Middle East to Eastern Europe. This article examines that dangerous duality: on one side, our public aspirations; on the other, our covert military realities. In the dazzle of the former, we are overlooking the existential threat posed by the latter.
The G7's Economic Prism
The G7's 2025 agenda indicates that the world's largest economies primarily view AI as an economic opportunity. The European Union's comprehensive AI Act, the world's first major attempt to regulate the civilian use of the technology, symbolizes this approach. Similarly, the UK's estimate of £45 billion in annual savings from AI in public administration and Canada's announced plans to integrate AI into public services are driven by the same economic rationale. This perspective is understandable, as AI possesses immense potential to boost productivity, revolutionize healthcare, and solve complex scientific problems.
However, this economic optimism contains a strategic oversight. It ignores the fact that military applications of a technology develop in parallel with, and often much faster than, civilian ones. The explicit exemption of defense and national security matters from the EU's AI Act is the prime example of this problem. The exemption sets a dangerous precedent in which the most lethal applications of the technology remain entirely outside regulatory scrutiny. The G7, being primarily an economic bloc, may not be the most suitable forum for military regulation, but as a group of the world's most powerful democracies it has a moral responsibility to address the issue. By failing to do so, the group is tacitly endorsing the development of weapon systems that could fundamentally alter the nature of future warfare.
The New Reality of the Battlefield
The AI-powered arms race is no longer a distant prospect but a present reality. From the Middle East to Eastern Europe and South Asia, conflict zones have become testing grounds for AI-powered warfare systems.
Middle East
Israel's reported use of AI systems such as 'Habsora' (The Gospel), 'Lavender,' and 'Where's Daddy?' in the Gaza Strip is a chilling example of how wartime decision-making is being delegated to machines. These systems analyze vast repositories of intelligence data to identify potential targets for airstrikes. Reports indicate that they made serious errors in target identification and contributed to an unacceptable number of civilian casualties. Here the central ethical question arises: when a target suggested by an algorithm is attacked, whose responsibility is it? The programmer's, the commander's, or the machine's? This drift toward 'human-out-of-the-loop' warfare challenges the very foundations of international humanitarian law.
Russia-Ukraine War
This conflict is the first major war in modern history to be partially driven by AI. It has become a vivid showcase of drone warfare, with both sides continuously developing and deploying new autonomous systems. Ukrainian naval drones have successfully struck Russian warships in the Black Sea, altering the dynamics of traditional naval power. Meanwhile, both armies use AI for target identification, electronic warfare, and intelligence analysis. This war demonstrates that AI is no longer merely an auxiliary technology but a central element of military strategy.
India-Pakistan Conflict
The India-Pakistan crisis of 2025 made it clear that AI-powered drone warfare is no longer limited to superpowers. For the first time, both nations used drones extensively for cross-border strikes alongside traditional military operations, with India deploying Israeli-made Harop loitering munitions and the indigenous Nagastra-1 to counter Pakistan's Turkish-made drones. The conflict also underlines a significant shift in India's defense strategy: a move away from reliance on foreign imports toward indigenous platforms such as Hindustan Aeronautics Limited's CATS Warrior and swarm drone systems. This trend signals a regional AI arms race with far-reaching geopolitical consequences.
Multilateral Failure
Despite the rapid pace of military AI deployment, the response from multilateral forums has been slow, fragmented, and largely ineffective. Even significant initiatives like the EU's AI Act keep the defense sector outside their purview. Consequently, these revolutionary changes are occurring in a policy vacuum, where no oversight or international standards exist.
Some efforts have certainly been made, however. The Responsible AI in the Military Domain (REAIM) summits held in The Hague (2023) and Seoul (2024), the work of the UN Institute for Disarmament Research (UNIDIR), and forums such as the AI Action Summit in Paris have fostered dialogue on the issue. UN Secretary-General António Guterres has called for legally binding rules on autonomous weapons by 2026, and the Pact for the Future, adopted in September 2024, likewise calls for regular assessment of the risks associated with military AI.
The problem is that all these initiatives are voluntary and lack enforcement power. Powerful groups like the G7 cannot shirk their responsibility by being absent from these discussions or merely offering formal support. These countries possess the economic and political power to lead the integration of these fragmented efforts into a robust, global regulatory framework.
The Immediate Need for a Dual Path
The G7 Summit of 2025 symbolizes the world's dual and contradictory approach to AI. On one hand, we view AI as the next stage of human progress, capable of bringing economic prosperity and social welfare. On the other hand, we are using the same technology to create more lethal and autonomous warfare systems that could pose an existential threat to humanity.
Focusing solely on economic benefits is a short-sighted and dangerous strategy. Leaving the military use of AI uncontrolled will not only increase global instability but also erode the trust essential for the technology's widespread societal adoption.
It is time for the G7 and other international bodies to adopt a dual approach. They must continue to promote the positive applications of AI, but simultaneously take urgent and concrete steps toward a robust, binding, and verifiable international treaty to control its military use. Such a treaty could legally enshrine the principle of 'meaningful human control,' impose a complete ban on certain types of autonomous weapons (such as systems that select targets based on facial recognition), and establish export control regimes to prevent the proliferation of the technology.
The future of artificial intelligence will be determined not just by the algorithms we create, but also by the ethical and legal boundaries we build around it. If we keep the battlefield out of this discourse, we risk losing the fight for a responsible technological future.