
The LATAMC AI Playbook: Why 15,000 Use Cases Change Everything

Adrian Dunkley · March 2026

For seven years I sat in rooms with ministers, permanent secretaries, and CEOs across the Caribbean, and they all asked some version of the same question. Some phrased it cautiously. Some with genuine urgency. A few with barely concealed embarrassment that they hadn't answered it already. But the question was always the same: where do we actually start with AI?

The honest answer was always: it depends. It depends on your sector, your data infrastructure, your regulatory environment, your institutional capacity. That answer is technically correct. It is also practically useless to a Minister of Finance in Barbados or a Permanent Secretary in Trinidad trying to justify a budget line for AI adoption to a Finance Committee that expects a concrete business case.

The LATAMC AI Playbook exists because I got tired of giving the right but useless answer.

What the LATAMC AI Playbook Actually Is

The LATAMC (Latin America and the Caribbean) AI Playbook is not a report. I want to be clear about that from the start, because the Caribbean has more than enough well-written AI reports sitting on digital shelves, commissioned at significant expense, read by a handful of people, and forgotten by the next budget cycle.

The Playbook is a working tool. A living framework of more than 15,000 evidence-based AI use cases, mapped across every major sector and industry relevant to Caribbean and Latin American economies. Tourism. Agriculture. Financial services. Public health. Education. Criminal justice. Port logistics. Energy management. Disaster response. The full breadth of economic activity in this region is covered.

Every use case in the Playbook came from somewhere real. Actual deployments. Academic literature with verifiable outcomes. Government implementation records. Not speculation, not vendor marketing materials, not "AI could potentially be used to improve X." Evidence. What was built, what it achieved, under what conditions, what resources it required, and where it failed. The failures are in there too, because anyone deploying AI without a clear picture of the failure modes is not doing strategy. They're doing wishful thinking.

Fifteen thousand use cases means that within any sector, you can find specific, validated approaches to specific problems: not categories of possibility, but evidence of what actually works.

The geographic specificity matters enormously. Existing global AI playbooks and policy frameworks are overwhelmingly written from the perspective of economies with deep capital markets, mature data infrastructure, large domestic tech sectors, and regulatory bodies with decades of institutional memory. The Caribbean is none of those things. Our financial systems are smaller and more interconnected. Our data infrastructure is newer and less standardized. Our regulatory bodies are capable but resource-constrained. Our economies are dominated by sectors, particularly tourism and agriculture, that have different AI opportunity profiles than manufacturing or advanced services.

A playbook written for Germany or Singapore is not wrong. It is just irrelevant to Jamaica or Barbados in a way that matters for implementation.

Why 15,000 Is Not Just a Number

When I tell people the Playbook contains over 15,000 use cases, the first reaction is usually somewhere between impressed and skeptical. That is a lot. Is it just volume for volume's sake?

No. Understanding why requires understanding the failure mode of the standard approach.

The default approach to AI adoption in any government or large organization follows a predictable path. Hire a consultancy. Commission a strategy. Receive a 100-page document identifying three to five priority areas. Spend the next eighteen months trying to figure out what any of that means in practice. Stall. Deprioritize. Repeat.

This fails not because the three to five priority areas are wrong. They're usually directionally correct. It fails because it leaves the entire implementation layer empty. You know AI should play a role in financial supervision. But you don't know whether that means transaction monitoring, regulatory compliance automation, early-warning systems for systemic risk, credit risk modelling for micro-lenders, or fraud pattern detection in remittance flows. These are completely different problems. They require different data, different infrastructure, different regulatory approvals, different timelines, and different risk tolerance. Treating them as one thing called "AI in financial supervision" is not strategy. It is a heading with nothing underneath it.

Fifteen thousand use cases means that within "financial supervision" you can find specific, validated approaches to specific problems. You can read what a central bank in a comparable economy achieved with a particular type of transaction monitoring system. You can see what it cost, what data it required, what the main implementation obstacles were, and what the outcome was after two years of operation. That specificity does not just inform strategy. It makes strategy real.

Three Use Cases Every Caribbean Government Should Know About Right Now

The Playbook contains thousands of use cases. But if you are a Caribbean policymaker reading this and want somewhere concrete to start, these three are immediately deployable, have strong evidence bases, and directly address Caribbean-specific economic and social needs.

Tourism Demand Forecasting

Tourism is the Caribbean's largest and most volatile sector. Hurricanes, global health events, economic cycles in source markets, and shifting travel preferences create demand patterns that traditional statistical forecasting handles badly. Revenue management across the region's hospitality sector routinely operates on demand assumptions that are weeks out of date.

AI-powered demand forecasting models, trained on booking data, flight search behavior, social sentiment, weather forecasts, and macroeconomic indicators from key source markets, consistently outperform traditional methods. Implementations in Pacific Island economies with structural similarities to Caribbean SIDS have demonstrated 15 to 20 percent improvements in forecast accuracy over a rolling 90-day horizon. That accuracy improvement translates directly into better staffing decisions, better yield management, better marketing allocation, and reduced food and supply waste. For a mid-size resort property, a 15 percent improvement in demand forecast accuracy represents a material improvement in operating margins.

The data required for a Caribbean-specific deployment is largely already held by tourism boards, national airlines, and larger hospitality operators. The implementation barrier is technical, not strategic, and it is lower than most people assume.
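To make the mechanics concrete, here is a minimal sketch of why a leading indicator beats a seasonal-naive forecast. Everything in it is synthetic: the arrivals series, the "search volume" signal, its four-week lead time, and the fitted ratio are invented stand-ins for real booking data and a real trained model.

```python
import math
import random

random.seed(7)

# Synthetic weekly arrivals: seasonal cycle + trend + noise. A
# "search volume" series leads demand by roughly four weeks.
weeks = 160
base = [1000 + 200 * math.sin(2 * math.pi * w / 52) + 1.0 * w for w in range(weeks)]
noise = [random.gauss(0, 60) for _ in range(weeks)]
searches = [base[min(w + 4, weeks - 1)] + random.gauss(0, 10) for w in range(weeks)]
arrivals = [base[w] + noise[w] for w in range(weeks)]

def mape(actual, pred):
    """Mean absolute percentage error."""
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, pred)) / len(actual)

test_weeks = range(104, weeks)

# Baseline: seasonal-naive, i.e. same week last year.
naive = [arrivals[w - 52] for w in test_weeks]

# Indicator forecast: scale the four-week-old search signal by a
# ratio fitted on an earlier window (a stand-in for a trained model).
fit_weeks = range(52, 104)
ratio = sum(arrivals[w] for w in fit_weeks) / sum(searches[w - 4] for w in fit_weeks)
lead = [ratio * searches[w - 4] for w in test_weeks]

actual = [arrivals[w] for w in test_weeks]
print(f"seasonal-naive MAPE: {mape(actual, naive):.1f}%")
print(f"search-signal MAPE:  {mape(actual, lead):.1f}%")
```

The point of the toy is structural, not numerical: a signal that moves before demand does, even crudely calibrated, tracks trend and shocks that a same-week-last-year baseline cannot see.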

Agricultural Climate Risk Scoring

Caribbean agriculture faces a compounding challenge. Climate variability is increasing. Farmers need credit to manage that variability. Lenders cannot accurately price that credit because they cannot accurately assess field-level climate risk. The result is a credit market that fails precisely the farmers who most need capital to adapt.

AI climate risk scoring models integrate satellite imagery, historical crop performance data, localized climate projections, topographic information, and soil classification to produce field-level risk scores that are dramatically more granular than anything available through conventional agricultural lending assessments. These approaches have opened credit access for smallholder farmers in East African contexts who were previously unfinanceable under traditional models.

A Caribbean-specific version requires localization. A model built for maize farmers in Kenya does not transfer directly to sugarcane producers in Barbados or banana farmers in St Lucia. The methodology transfers. The data work needs to be done here. The Playbook includes implementation guidance for that localization process.
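As a toy illustration of the scoring mechanics, not the Playbook's methodology, the sketch below combines normalized risk factors into a field-level score and maps it to a lending tier. The factor names, weights, and tier thresholds are all hypothetical; a real model would fit them to local crop and repayment outcome data.

```python
# Hypothetical field-level climate risk score: a weighted sum of
# [0, 1] risk factors scaled to 0-100, then mapped to a lending tier.
WEIGHTS = {
    "rainfall_variability": 0.35,  # wet-season rainfall variability
    "drought_frequency":    0.25,  # share of recent seasons with drought stress
    "slope_exposure":       0.15,  # topographic runoff/erosion proxy
    "yield_variance":       0.25,  # historical yield variability for this parcel
}

def risk_score(features: dict) -> float:
    """Weighted sum of clamped [0, 1] risk factors, scaled to 0-100."""
    return 100 * sum(WEIGHTS[k] * min(max(features[k], 0.0), 1.0) for k in WEIGHTS)

def lending_tier(score: float) -> str:
    """Map a score to a hypothetical credit decision tier."""
    if score < 30:
        return "standard terms"
    if score < 60:
        return "adjusted premium"
    return "requires adaptation plan"

field = {"rainfall_variability": 0.4, "drought_frequency": 0.2,
         "slope_exposure": 0.6, "yield_variance": 0.3}
s = risk_score(field)
print(f"score={s:.0f} -> {lending_tier(s)}")
```

The localization work described above lives almost entirely in the inputs: which factors matter for sugarcane versus bananas, and what weights the local evidence supports.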

Public Safety Pattern Recognition

Crime in the Caribbean follows patterns. Time of day. Day of week. Proximity to economic stress indicators. Weather. Urban infrastructure. These patterns can be learned from data, and models trained on that data can identify high-risk periods and locations with accuracy that meaningfully exceeds intuition-based resource allocation.

I want to be precise about what this means and what it does not mean. This is not predictive policing in the model that has been applied, and rightly criticized, in the United States, where historical arrest data baked in racial and socioeconomic biases and AI systems then amplified those biases in a self-reinforcing loop. That model is a governance failure that masqueraded as a technology deployment, and we should not replicate it anywhere.

What Caribbean territories can do responsibly is aggregated temporal and spatial pattern recognition for resource deployment planning. Not "this individual is likely to commit a crime." That is harmful and technically unfounded. Rather: "Based on historical patterns, this district on a Friday evening during the week before a public holiday has historically required 40 percent more police presence than the baseline allocation." That is resource optimization. It is useful, it is defensible, and several Caribbean territories have the data infrastructure to deploy it today.
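The aggregated approach can be sketched in a few lines: count historical incidents per (district, day, period) bucket and flag buckets well above that district's own baseline. The incident records and the 1.4x threshold below are invented for illustration, and nothing individual-level is involved, only bucket counts.

```python
from collections import Counter, defaultdict

# Synthetic incident history as (district, weekday, period) tuples.
incidents = (
    [("Central", "Fri", "evening")] * 18
    + [("Central", "Tue", "morning")] * 5
    + [("Central", "Sat", "evening")] * 12
    + [("Harbour", "Fri", "evening")] * 6
    + [("Harbour", "Wed", "afternoon")] * 6
)

bucket_counts = Counter(incidents)

# Per-district totals and bucket counts, to form a district baseline.
district_totals = defaultdict(int)
for (district, _, _), n in bucket_counts.items():
    district_totals[district] += n
buckets_per_district = Counter(d for (d, _, _) in bucket_counts)

flagged = []
for (district, day, period), n in sorted(bucket_counts.items()):
    baseline = district_totals[district] / buckets_per_district[district]
    uplift = n / baseline
    if uplift >= 1.4:  # hypothetical flag threshold for extra presence
        flagged.append((district, day, period))
        print(f"{district} {day} {period}: {uplift:.1f}x district baseline")
```

The output is a deployment-planning hint of exactly the "Friday evening in this district" form described above, derived from aggregates a police service already holds.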

Key Insight

The governance framework for each use case is built into the Playbook itself. Technology deployment and accountability design are treated as inseparable, not sequential. You cannot retrofit responsible AI. It has to be designed in from the start.

The Governance Layer Is Not Optional

One thing I insisted on when we developed the LATAMC AI Playbook was that use cases and governance had to be inseparable. Every sector section includes not just what AI can do but how to deploy it responsibly: what data rights are implicated, what consent frameworks are appropriate, what regulatory approvals are typically required in CARICOM jurisdictions, what algorithmic transparency standards should apply, and what recourse mechanisms must be in place for people affected by automated decisions.

This matters more in the Caribbean than in larger economies, for a reason that sounds counterintuitive. Small states have fewer citizens. That seems to reduce the scale of risk from any given AI deployment. In practice, it means the opposite. A flawed credit scoring system deployed in Jamaica touches a far larger proportion of Jamaican adults than an equivalent system deployed in the United States touches American adults. There is less redundancy. There are fewer alternative credit providers to absorb the people a bad model excludes. The consequences of AI failure concentrate harder in small economies.

That reality should produce more rigorous governance design in the Caribbean, not less. The Playbook is built with that logic throughout.

How to Use the Playbook Right Now

The Playbook is designed to be used in a specific way, and using it differently produces worse results. It is not a document you read from front to back. It is a search and filtering tool.

Start with a problem, not a technology. Identify something your organization or government genuinely needs to do better. Reduce claim fraud. Improve student retention. Forecast public health demand. Optimize port clearance times. Come to the Playbook with that problem and use the sector and problem-type filters to find relevant use cases. Read the evidence behind the use cases that look relevant. Identify two or three that match your data availability and institutional capacity. Build a pilot around the most promising one.
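In code terms, the problem-first workflow amounts to filtering a catalogue by sector and problem type before reading evidence. The record schema and entries below are invented stand-ins for the Playbook's actual structure.

```python
# Toy use-case catalogue; fields and entries are illustrative only.
use_cases = [
    {"sector": "finance", "problem": "claim fraud", "evidence": "deployed at 2 insurers"},
    {"sector": "education", "problem": "student retention", "evidence": "peer-reviewed trial"},
    {"sector": "finance", "problem": "credit risk", "evidence": "central-bank pilot"},
]

def find(sector=None, problem_contains=None):
    """Filter the catalogue: start from a problem, narrow by sector."""
    hits = use_cases
    if sector:
        hits = [u for u in hits if u["sector"] == sector]
    if problem_contains:
        hits = [u for u in hits if problem_contains in u["problem"]]
    return hits

for hit in find(sector="finance", problem_contains="fraud"):
    print(hit["problem"], "->", hit["evidence"])
```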

The Playbook includes a readiness assessment framework for exactly this purpose. It evaluates an organization's data maturity, technical capacity, governance structures, and change management capability against the requirements of a given use case. That assessment protects organizations from the most common AI adoption failure mode: deploying technically sound AI into an organization that was not operationally ready for it.
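A readiness check of the kind described can be sketched as a gap comparison between self-assessed maturity and a use case's minimum requirements. The dimension names, 1-to-5 scale, and thresholds here are illustrative assumptions, not the Playbook's actual framework.

```python
# Hypothetical minimum maturity (1-5) a given use case might require.
REQUIRED = {
    "data_maturity": 3,
    "technical_capacity": 2,
    "governance": 3,
    "change_management": 2,
}

def readiness_gaps(org: dict) -> list:
    """Return the dimensions where the organization falls short."""
    return [dim for dim, need in REQUIRED.items() if org.get(dim, 0) < need]

org = {"data_maturity": 4, "technical_capacity": 3,
       "governance": 2, "change_management": 3}
gaps = readiness_gaps(org)
print("ready to pilot" if not gaps else f"close gaps first: {gaps}")
```

The useful property of a check like this is that it blocks the failure mode named above: the pilot waits until the weakest dimension, not the average one, meets the bar.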

What the Playbook Represents Beyond Its Contents

The Caribbean has a long history of importing frameworks, standards, and strategies designed elsewhere and attempting to adapt them to contexts they were not built for. We have done this with economic policy, with educational curriculum, with healthcare protocols, and with technology regulation. Sometimes adaptation works. Often it introduces subtle misalignments that compound over time into structural problems.

The LATAMC AI Playbook is a deliberate break from that pattern. It was built by people who work in this region, for institutions operating in this region, using evidence from deployments in contexts comparable to our own. The assumptions embedded in it, about data infrastructure, institutional capacity, regulatory environments, economic structure, and social context, are our assumptions. Not borrowed ones.

That matters more than the 15,000 use cases. The use cases are valuable. But the deeper value is that for once, the Caribbean is not waiting for a tool that someone else built. We built this one. The question now is whether we use it.

The window for Caribbean AI leadership is not indefinitely open. Other regions are moving. Investment is flowing to markets that demonstrate clear AI governance frameworks and deployment readiness. Every year that a Caribbean government spends with "AI strategy" as an agenda item rather than an implementation reality is a year of competitive ground given away.

The question of where to start is answered. That obstacle is gone. What remains is the decision.
