I spend most of my time building AI that helps people. AI that banks the unbanked, that predicts crop failure before it happens, that connects athletes to better training science, that puts financial inclusion within reach of workers the formal system has never seen. This is the AI I am passionate about. This is the AI I have given 15 years of my life to.
But I would be dishonest - to you and to myself - if I wrote only about the hopeful side of artificial intelligence without addressing what is happening on the other side of this technology. AI is being deployed in active war zones. Right now. In decisions that end human lives. And the conversation about what that means - for governance, for ethics, for the Caribbean's position in a world being reshaped by AI-powered conflict - is not happening loudly enough or seriously enough.
So let us have it.
What AI in War Actually Looks Like
When most people imagine "AI in war," they imagine something cinematic. Terminator. Drone swarms filling the sky with coordinated lethal intelligence. That framing, while dramatic, obscures the more mundane and therefore more dangerous reality of where AI has already arrived in modern conflict.
AI is being used in active conflicts for target identification - using computer vision and satellite imagery analysis to identify military assets, personnel concentrations, and movement patterns. AI is being used for predictive analysis, supplying intelligence officers with probabilistic assessments of where attacks will come from and when. AI is being used in disinformation and influence operations - generating synthetic media, coordinating narrative campaigns, and identifying psychological vulnerabilities in civilian populations to exploit.
Lethal autonomous weapons systems - weapons that can select and engage targets without a human in the decision loop - are not science fiction. Loitering munitions with AI target-selection capability exist and have been deployed. The line between "AI-assisted targeting" and "AI-autonomous killing" is narrower than official defence communications suggest, and it is getting narrower every year.
The Accountability Vacuum
Here is the governance problem that keeps me up at night: when an AI system makes a decision that kills civilians, who is responsible?
The engineer who built the model? The commander who authorised its deployment? The government that funded it? The company that sold it? Under current international humanitarian law - the laws of war, the Geneva Conventions, the principles of distinction and proportionality - there is a legal requirement for human accountability in the use of lethal force. A machine cannot be held accountable. Its designers can claim they could not have anticipated the specific decision. Its deployers can claim the algorithm performed within specification. Its funders can claim they had no operational control.
The accountability vacuum that AI creates in warfare is not hypothetical. It is already being exploited. Countries deploying AI-assisted targeting are doing so in ways that make it effectively impossible to attribute specific civilian deaths to specific decisions. The fog of war, weaponised by AI, becomes a fog of accountability.
The Bias Problem at Scale
I have spent a significant part of my career working on the problem of bias in AI systems. The Caribbean context makes this concrete: when AI systems are trained on data that does not represent Caribbean populations, they produce systems that fail Caribbean people - in credit scoring, in healthcare, in education. The bias is systemic and often invisible until you are looking for it.
Now apply that same problem to targeting AI. The training data for military AI systems reflects the conflicts, geographies, and actors that the most militarised nations have historically encountered. It reflects their adversaries. It reflects their definitions of "threat indicator." It bakes in their categories and their assumptions.
What happens when that system is deployed in a different context - different demographics, different dress, different architecture, different patterns of movement? The bias does not disappear. It manifests as misidentification. And in a targeting system, misidentification is not an inconvenience. It is a death sentence.
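The failure mode described above - a model learned in one context silently misfiring in another - can be made concrete with a deliberately toy sketch. Everything here is invented for illustration: a single hypothetical "behaviour" feature, made-up distributions for two contexts, and a trivial threshold classifier standing in for any learned decision boundary. The point is only the mechanism: the boundary is fine where it was trained and catastrophic where it was not.

```python
import random

random.seed(0)

def train_threshold(benign, threat):
    # Midpoint between the class means: a stand-in for any
    # learned decision boundary. (Toy model, not a real system.)
    mean_b = sum(benign) / len(benign)
    mean_t = sum(threat) / len(threat)
    return (mean_b + mean_t) / 2

# Context A (training): benign behaviour clusters near 2.0,
# "threat indicator" behaviour near 6.0 (hypothetical numbers).
benign_a = [random.gauss(2.0, 0.5) for _ in range(1000)]
threat_a = [random.gauss(6.0, 0.5) for _ in range(1000)]
threshold = train_threshold(benign_a, threat_a)

# Context B (deployment): the same benign activity, under different
# local norms, clusters near 5.0. The model has never seen this.
benign_b = [random.gauss(5.0, 0.5) for _ in range(1000)]

# False-positive rate: benign people flagged as threats.
fp_a = sum(x > threshold for x in benign_a) / len(benign_a)
fp_b = sum(x > threshold for x in benign_b) / len(benign_b)

print(f"false-positive rate, training context: {fp_a:.1%}")
print(f"false-positive rate, new context:      {fp_b:.1%}")
```

Running this, the false-positive rate is near zero in the training context and overwhelming in the shifted one - the bias did not disappear, it manifested as misidentification, exactly as the argument above describes. A real targeting system is vastly more complex, but the underlying vulnerability to distribution shift is the same.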
The International Committee of the Red Cross has raised exactly this concern. AI safety researchers have raised it. The UN Secretary-General has called for a prohibition on lethal autonomous weapons systems. The conversation is happening - just not at the volume and speed the stakes demand.
What This Means for the Caribbean
The Caribbean is not a military power. We do not build weapons systems. We do not have standing armies with autonomous drone programmes. So why does this conversation matter for us?
Three reasons.
First, because Caribbean nations are members of international bodies where these norms are being shaped. CARICOM member states sit in the United Nations. They participate in the Convention on Certain Conventional Weapons discussions where autonomous weapons regulation is being debated. The positions Caribbean governments take in those rooms - or their silence - shape the international framework that governs how these technologies are used globally. Silence is a vote for the status quo.
Second, because AI-powered surveillance and influence operations do not respect geography. The disinformation systems being deployed in active conflicts - the deepfake videos, the synthetic news, the targeted social media manipulation - are powered by the same AI infrastructure being developed for civilian applications. The Caribbean is not insulated from this. Our elections, our media environments, our social cohesion are all vulnerable to tools that exist because military AI programmes normalised them.
Third, because the Caribbean AI ecosystem I am building operates in a world where AI governance matters. Every AI system StarApple AI deploys, every policy framework I help develop, exists within an international context that is being shaped - right now - by how AI in warfare is regulated or left unregulated. A world with meaningful international AI governance is a better world for deploying AI in healthcare and financial inclusion. A world without it is a more dangerous one for everyone.
The Ethical Line I Hold
I will be direct about my position: I believe that lethal autonomous weapons systems - weapons that select and engage targets without meaningful human control over the kill decision - are morally unacceptable and should be internationally prohibited. Not regulated. Prohibited.
This is not pacifism. I understand the arguments for AI-assisted military systems - faster response times, reduced risk to human soldiers, potentially more precise targeting under ideal conditions. I understand them and I reject their conclusion because the accountability vacuum they create, the bias risks they introduce, and the precedent they set for what AI is permitted to do to human lives are too dangerous to accept.
The same rigour I apply to AI systems in financial inclusion - asking who benefits, who bears the risk, who is accountable when it fails - I apply to AI systems in warfare. And the answers in warfare are not satisfactory.
What Needs to Happen
International agreement on meaningful human control in the use of lethal force. Prohibition of fully autonomous lethal systems. Liability frameworks that hold states and corporations accountable for AI-enabled civilian casualties. Transparency requirements on the AI systems being deployed in conflict zones, including their training data and known failure modes. And Caribbean governments taking a clear, principled position in the international forums where these agreements are being negotiated.
The Caribbean has a history of punching above its weight on international human rights and humanitarian issues. From the work of Caribbean diplomats in crafting international frameworks on climate change to Jamaica's leadership in the Third World movement, this region has a voice that matters beyond its size. That voice needs to be in this conversation. Loudly. Now.
AI can be the most powerful force for human dignity and human flourishing in history. It can also be a tool of dehumanised violence at unprecedented scale. Which future we get depends on the choices we make - and the governance we build - in the next five years. Caribbean voices need to be in that room.