  • AI is a technological transformation that now threatens the moral architecture sustaining the laws of war.
  • The days of predictable rules that limit violence and regulate behaviour among all actors during conflict will come to an end.
  • Weaker states will find themselves exposed to forms of warfare unconstrained by legal restraint.
  • Technology cannot remain confined to the hands of the few for long, and not all who acquire it will be bound by ethics and morality.

After the carnage and cruelty the world witnessed in the Second World War, the international community attempted to place moral limits on the conduct of war. The devastation the world endured revealed the terrifying consequences of unrestrained violence. The response was the creation of the Geneva Conventions, a legal framework intended to preserve a measure of humanity even in the midst of armed conflict.

It was an attempt to regulate war in accordance with the principles of international humanitarian law: the distinction between civilians and combatants, proportionality in the use of force, and the humane treatment of prisoners. These rules represented not the abolition of war but its moral containment. For decades, the Geneva system served as the architecture of global conflicts. Gross violations occurred frequently, but states rarely rejected the legitimacy of the system outright. Powerful nations understood that their legitimacy depended on maintaining at least the appearance of compliance.

However, under U.S. President Donald Trump, this architecture born out of the deaths of millions is visibly eroding. Two contemporary conflicts, the Russia-Ukraine War and the widening confrontation between the United States and Iran, have exposed the fragility of the legal order governing warfare. Simultaneously, emerging technologies such as artificial intelligence and autonomous weapons are transforming the battlefield in ways that the architects of the Geneva system could never have imagined.

Paul Scharre, author of Army of None: Autonomous Weapons and the Future of War, writes in his book that “Autonomous weapons could fundamentally alter the relationship between humans and the use of force, allowing decisions about life and death to be delegated to machines.” He further observes that “The speed of machine decision-making in warfare may compress human judgment out of the loop, leaving little room for deliberation, accountability, or restraint.”

War, Power, and the Limits of Law

There have always been tensions between law and power, and Prussian strategist Carl von Clausewitz famously wrote, “War is merely the continuation of politics with other means.” War, in his view, is not a moral failure but an instrument of political strategy. In theory, legal frameworks are supposed to influence how wars are fought, but they rarely determine whether wars occur or how far states will go when they perceive existential threats. The realist scholar Hans Morgenthau observed that “international law is only as strong as the political will of states to enforce it.” We are seeing that in West Asia right now.

The historian E. H. Carr went further, arguing that “the law of nations is a function of the balance of power.” In other words, international law survives only when it aligns with the strategic interests of powerful states, and in this case, the U.S. has simply abandoned it. What remains to be seen is whether laws will be able to keep up with AI or politics will fail us again.

The Russia–Ukraine War 

The Russia–Ukraine War offers the clearest example of violations of international humanitarian law, with attacks on infrastructure, civilian areas, and prisoners. It illustrates how modern high-intensity warfare blurs the line between civilian and military infrastructure. Energy grids, communication networks, railways, and industrial facilities serve both military and civilian functions. Strikes against such targets often produce large-scale civilian suffering while still being justified under military necessity. Here lie the legal and moral ambiguities, and the ugly strategic reality of self-interest and war as a means to an end. Wars are no longer fought on geographically identifiable battlefields but increasingly take place across entire national infrastructures.

American Power and the Crisis of Moral Authority

The Russia–Ukraine war exposed the limits of international humanitarian law, but the expanding confrontation between the United States and Iran raises deeper questions. For decades, Washington has positioned itself as the principal defender of the rules-based international system, in principle if not in practice. Under Donald Trump, however, it is clear that Washington has no respect for that system; he believes might is right and that the primacy of American power can never be questioned.

In discussing counterterrorism strategy, Trump famously remarked that the United States should “take out their families,” a statement widely interpreted as endorsing tactics inconsistent with the principle of civilian immunity embedded in the Geneva Conventions. In his defence, America has done this in the past; it has simply never boasted about it.

Similarly, Secretary of War Pete Hegseth, a prominent figure, has argued that excessive legal constraints can weaken military effectiveness and that American soldiers are often “handcuffed by lawyers and rules of engagement.” Such rhetoric raises questions about whether strict adherence to humanitarian law is compatible with modern security threats.

Does that mean law is an impediment, a barrier in the post-Cold-War multi-polar world? 

And can the U.S., under a future leader, retain the moral authority to lecture others about compliance?

Artificial Intelligence: They never saw it coming 

We are living through a technological revolution, and artificial intelligence is rapidly transforming the operational environment of warfare. Throughout history, information has been power, but with the rapid advancement of technology, modern militaries have faced enormous challenges in collating and analysing it. Satellite imagery, electronic intercepts, drone surveillance feeds, and cyber intelligence generate quantities of data that overwhelm human analysts. Enter AI systems with the computing power to process these vast data streams, identify patterns, prioritise threats, and recommend targets.

One would hope that AI could strengthen the humanitarian principles embedded in the Geneva Conventions. Algorithms capable of analysing vast intelligence datasets might improve the identification of legitimate military targets while minimising civilian casualties. But this technology has the potential to become an efficient method for terminating human lives without differentiation.

Machine-learning systems operate probabilistically and generate predictions based on statistical patterns rather than moral reasoning. 

So, what happens when algorithmic models identify individuals as threats based on behavioural data in an active conflict zone? What happens when the distinction between combatant and civilian becomes ambiguous?
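The probabilistic logic at issue can be sketched in a few lines. The behavioural features, weights, and threshold below are entirely invented for illustration and resemble no real targeting system; the point is only that the combatant/civilian distinction collapses into a numerical comparison:

```python
# Toy sketch of probabilistic threat classification. All features,
# weights, and the threshold are invented for illustration only.

def threat_score(observed: dict) -> float:
    """Weighted sum of behavioural signals, capped at 1.0."""
    weights = {
        "near_military_site": 0.5,
        "night_movement": 0.3,
        "carries_equipment": 0.4,
    }
    score = sum(w for k, w in weights.items() if observed.get(k))
    return min(score, 1.0)

def classify(observed: dict, threshold: float = 0.6) -> str:
    """The model has no concept of intent; it only compares a number."""
    return "threat" if threat_score(observed) >= threshold else "civilian"

# A farmer hauling equipment at night near a base crosses the threshold:
farmer = {"near_military_site": True, "night_movement": True, "carries_equipment": True}
print(classify(farmer))  # prints "threat": statistically plausible, morally wrong
```

The sketch returns the same label for a combatant and for a civilian whose behaviour merely fits the pattern; no term in the calculation represents intent, and that is precisely the gap between statistical prediction and moral reasoning.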

The International Committee of the Red Cross has warned that emerging technologies such as artificial intelligence pose profound challenges for the application of humanitarian law. 

So, the question is, can we hold an artificial entity accountable under law?

  • A fast-evolving algorithm contributes to a targeting decision that results in civilian casualties. Who is responsible?
  • Did the error originate in flawed data? In software design? Or in human interpretation of algorithmic outputs?
  • With calls for greater machine autonomy, clear legal accountability stands at the crossroads of becoming unenforceable. Will AI make us more incisive, or merely dependent on it for answers?

AI Warfare

Modern warfare is now blitzkrieg on steroids. Missile defence systems, cyber operations, and drone swarms operate on timescales that challenge human reaction capacity. Military planners, unwilling to be overwhelmed, are now relying on AI to enhance their capacity to process information and make decisions. Artificial intelligence provides precisely the advantage that will dominate future battlefields.

So, what happens to fairness? 

States will face a technological security dilemma once one major power deploys AI-enabled targeting systems. This will create a global algorithmic arms race, where technological innovation outpaces the legal frameworks designed to regulate conflict.

India and the Strategic Dilemma of Dharma

For countries such as India, which has practised Dharma for thousands of years, this erosion of the international legal order creates a complex strategic and moral dilemma. India operates in one of the world’s most challenging security environments: it faces nuclear-armed adversaries, cross-border terrorism supported by Pakistan and China, and contestation in the Indo-Pacific region. There will come a time when India, which has historically positioned itself as a defender of international law, must decide whether to maintain strict adherence to international humanitarian norms or to violate Dharma in order to protect Dharma.

The fact is that Dharmo rakshati rakshitah, in the traditional Indian sense, describes a nuanced, conditional hierarchy of duties in which the “lesser wrong” is permissible to prevent a greater evil. Gen Raj Shukla wrote, “The challenge for democracies is to harness AI for operational advantage while preserving the ethical frameworks that govern the use of force.”

The political philosopher Michael Walzer said, “Even when we fight just wars, we are required to fight them justly.” 

Will AI allow it or act as an impediment?

The AI divide will be felt most acutely in the Global South, where the erosion of humanitarian law will strain already fragile governance and public order. Legal norms have historically functioned as a protective framework for the weak. As these norms weaken, technological and military superiority will, as throughout history, determine the outcome of conflicts. The days of predictable rules that limit violence and regulate behaviour among all actors during conflict will come to an end. Weaker states will find themselves exposed to forms of warfare unconstrained by legal restraint.

The Conclusion of the Geneva Order

The erosion of the Geneva system has been underway since its inception. States will maintain a veneer of commitment to humanitarianism even as algorithms increasingly guide targeting decisions. Autonomous systems will accelerate the pace of conflict, and legal constraints will be seen as weakening military effectiveness. Unrestrained conflict, in the words of Thomas Hobbes, is one in which the life of man becomes “solitary, poor, nasty, brutish, and short.”

The Geneva Conventions were humanity’s attempt to prevent war from descending into medieval barbarism. But AI is a technological transformation that now threatens the moral architecture sustaining those rules.

Whether the Geneva system survives the age of artificial intelligence will depend not only on legal reform. 

But the question is, will there be political will?

Will great powers agree that strategic advantage outweighs human cost?

In the book The Age of AI: And Our Human Future, written by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, the authors contend, “When machines interpret reality for us, the traditional foundations of strategic decision-making begin to shift. Human judgment has long been the ultimate safeguard in war; the growing autonomy of machines raises the question of whether that safeguard will endure.”

The world will soon move from the threat of nuclear proliferation to that of AI proliferation. Technology cannot remain confined to the hands of the few for long, and not all who acquire it will be bound by ethics and morality.

Thucydides said, “The strong do what they can, and the weak suffer what they must.” It was his brutal, realistic assessment of power dynamics from the Peloponnesian War. The fact is, we will again fall into the infamous Thucydides trap. Mankind suffers the affliction of fear, insecurity, and mistrust. The human race will slowly come to trust the machine and not man.

References:

  1. Carl von Clausewitz. On War. Edited and translated by Howard M, Paret P. Princeton: Princeton University Press; 1984.
  2. Hans Morgenthau. Politics Among Nations: The Struggle for Power and Peace. 6th ed. New York: Knopf; 1985.
  3. E. H. Carr. The Twenty Years’ Crisis, 1919–1939. London: Macmillan; 1939.
  4. Michael Walzer. Just and Unjust Wars. 5th ed. New York: Basic Books; 2015.
  5. Thomas Hobbes. Leviathan. Oxford: Oxford University Press; 1996.
  6. Geneva Conventions. Geneva Convention Relative to the Protection of Civilian Persons in Time of War. Geneva, 1949.
  7. International Committee of the Red Cross. International Humanitarian Law and the Challenges of Contemporary Armed Conflicts. Geneva: ICRC; 2019.
  8. Trump D. Campaign speech on counterterrorism policy. United States presidential campaign, 2015.
  9. Pete Hegseth. American Crusade: Our Fight to Stay Free. New York: Centre Street; 2020.
  10. Subrahmanyam Jaishankar. The India Way: Strategies for an Uncertain World. New Delhi: HarperCollins India, 2020.
  11. Henry Kissinger, Eric Schmidt, Daniel Huttenlocher. The Age of AI: And Our Human Future. Boston: Little, Brown and Company; 2021.
  12. Raj Shukla. “Artificial Intelligence and the Future of Warfare.” In: India’s Military Strategy in the 21st Century. New Delhi: Observer Research Foundation; 2023.
  13. Paul Scharre. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton & Company, 2018.
  14. Stockholm International Peace Research Institute. Boulanin V, Verbruggen M. Mapping the Development of Autonomy in Weapon Systems. Stockholm: SIPRI; 2017.

By Balaji Subramanian

Balaji is a freelance writer with an MA in History and Political Science who has published articles on defence and strategic affairs as well as book reviews. He tweets @LaxmanShriram78. Views expressed are the author’s own.
