
Teaching AI for Free: What 7 Years in a Classroom Taught Me

Adrian Dunkley · January 2026 · 12 min read

In 2018 I started doing something that seemed straightforward at the time and has turned out to be one of the more complicated and rewarding things I have done in my professional life. I started running free AI training sessions for students in Jamaica. Every week. No charge. No prerequisites beyond showing up and being willing to engage.

Seven years later, I am still doing it. The format has evolved. The student population has shifted. The AI landscape has changed dramatically. But the weekly sessions continue, and by now I have spent more hours teaching AI to people with no prior background in it than I have spent doing almost anything else.

What I have learned about how people actually learn AI, and about what the commercial training industry consistently gets wrong, is worth putting down in some form. This is my attempt to do that.

How It Started

The origin was straightforward. I was running StarApple AI and becoming increasingly aware of a problem that was not primarily about technology. The technology to do interesting things with AI existed and was becoming more accessible. What was missing was the human infrastructure to use it: people who understood AI well enough to recognize where it could help them, articulate a problem clearly enough to work with someone who could build a solution, and evaluate whether what got built actually worked.

That is a different and more fundamental skill than programming. It is more like the skill of being a good client for a construction project. You do not need to know how to lay bricks to commission a good building. But you need to understand enough about construction to describe what you need, read the plans, and know when something has gone wrong. The Caribbean had almost nobody with that AI equivalent at the time, and the formal education system was not going to produce it fast enough.

So I started running the sessions. The first cohort was eight people in a rented room in Kingston. Some were university students. Some were working professionals. One was a secondary school teacher who had heard about it through a friend. None of them had any AI background.

I have been doing it weekly ever since. Some sessions have been in person. Many moved online during and after the pandemic and stayed there, because online removed the geographical barrier and brought in students from across Jamaica and occasionally from other Caribbean territories. The scale has varied: some sessions have had two people, some have had forty.

What the Sessions Actually Look Like

The sessions are not lectures. I made that mistake in the early months and adjusted fairly quickly once I understood what was actually happening in the room.

The structure is closer to a workshop built around a problem. I bring a real problem, usually drawn from something StarApple AI or one of the Caribbean organizations I work with is actually trying to solve. I describe the problem in concrete terms. We then work through the problem together: what do we know about it, what information would help us understand it better, what approach might an AI system take to it, what are the failure modes of that approach, and what would we need to do to test whether it worked.

No code in the first session with any new participant. That is a firm rule that I arrived at through watching what happens when you start with code. When you start with code, you immediately sort the room into people who are comfortable with code and people who are not, and the people who are not become spectators rather than participants within the first fifteen minutes. The AI literacy skill I am trying to build is not dependent on programming ability. Starting with code conflates the two and loses most of the room immediately.

The code, when it comes, comes later and in service of a problem that the participants have already engaged with analytically. At that point, the code is answering a question they have already asked themselves. That ordering makes an enormous difference to how it lands.

The most important thing I teach is not how AI works. It is how to describe a problem well enough that an AI system could potentially help with it. That is a harder skill than most people think.

What I Have Learned About How People Learn AI

Seven years of weekly sessions with people from very different backgrounds has produced a set of observations about how people learn AI that I have not seen reflected anywhere in the formal AI education literature. These are empirical findings from watching thousands of people encounter AI for the first time. Take them as such.

The fear is about judgment, not technology

The most consistent barrier I encounter in first sessions is not technical unfamiliarity. It is the fear of looking stupid. People have heard enough about AI to believe that not understanding it reflects badly on their intelligence. That fear produces a performance of engagement rather than actual engagement: nodding at things that did not land, asking the kind of question that sounds smart rather than the kind that would actually resolve confusion, pretending comprehension to avoid revealing its absence.

The only thing that consistently addresses this is making it unambiguously safe to not understand. Not just saying it is safe, but demonstrating it by being visibly uncomfortable with things myself, asking questions I do not know the answers to, inviting correction, and treating the sessions as a genuine inquiry rather than a knowledge transfer. The moment people believe that revealing confusion is safe, the quality of learning changes dramatically.

The analogy is the unit of understanding

Mathematical explanation of how machine learning works produces very limited transfer to practical AI understanding for non-technical learners. Analogies produce very high transfer. Specifically, analogies that connect AI behavior to human social behavior that the learner already understands.

Training a model is like hiring someone new and giving them a set of examples of work done well and work done badly so they can calibrate their judgment. Fine-tuning a model is like taking an experienced hire with general skills and giving them specific on-the-job experience to make them more effective in your particular context. Overfitting is like a student who has memorized the practice exams but cannot handle questions that differ slightly from what they have seen before. These are not technically precise descriptions of what is happening. They are pedagogically effective descriptions that build accurate intuitions faster than formal technical explanation does.
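The overfitting analogy, in particular, can be made concrete with a small sketch (a minimal illustration, assuming NumPy is available, not anything from the sessions themselves). A degree-9 polynomial can memorize ten noisy training points almost perfectly, the way the student memorizes the practice exams, while a straight line only captures the underlying trend. On fresh points drawn from that same trend, the memorizer does worse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a simple underlying trend (y = x).
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, size=10)

# A degree-9 polynomial has enough capacity to "memorize" all ten
# points; a straight line can only capture the general trend.
memorizer = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
generalizer = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)

def mse(model, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((model(x) - y) ** 2))

# On the "practice exam" (the training data) the memorizer looks perfect.
train_memo = mse(memorizer, x_train, y_train)
train_gen = mse(generalizer, x_train, y_train)

# On slightly different questions (new points from the true trend),
# the memorizer's wiggles hurt it and the simple model holds up.
x_test = np.linspace(0.05, 0.95, 50)
y_test = x_test  # the true, noise-free trend
test_memo = mse(memorizer, x_test, y_test)
test_gen = mse(generalizer, x_test, y_test)

print(f"train error: memorizer={train_memo:.6f}, line={train_gen:.6f}")
print(f"test  error: memorizer={test_memo:.6f}, line={test_gen:.6f}")
```

The memorizer's training error is near zero while the line's is not; on the held-out points the ordering reverses, which is exactly the student who aces the practice exams and struggles with anything phrased differently.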

Caribbean learners are not behind. They think differently.

This is the observation I feel most strongly about and have the most evidence for. Caribbean students who come into AI education without prior background are consistently described by outside educators as "behind" relative to their counterparts in North American or European programs. That framing is wrong in an important way.

Caribbean learners, particularly those who have grown up in the informal economic complexity of Caribbean life, often have exceptional intuitions about certain AI problems: fraud detection, social network analysis, resource allocation under scarcity, decision-making under uncertainty. These are cognitive skills developed through lived experience navigating economies and social environments where resources are constrained and trust is earned rather than assumed. They are exactly the skills that make someone good at framing AI problems in contexts where those same economic and social dynamics apply.

What Caribbean learners often lack is not cognitive capability. It is the vocabulary and the exposure to connect what they already know to the formal language of AI. When that bridge is built, the learning is remarkably fast. The issue has never been the quality of the minds. It has been the absence of the bridge.

What the Training Industry Gets Wrong

The commercial AI training industry, the online courses, the bootcamps, the enterprise training programs, has a consistent set of failure modes. I have watched them play out over seven years as people try to learn AI through formal channels and then show up in my sessions still confused about the fundamentals.

The first and most significant failure is treating AI literacy as equivalent to machine learning engineering. The dominant AI education products teach people to build and train models. That is a legitimate and valuable skill. It is not the same skill as being able to use AI productively as a non-technical professional, understand what AI systems can and cannot do, evaluate AI solutions critically, or communicate effectively with AI builders. Those skills require different education, and most commercial programs do not teach them.

The second failure is the absence of domain specificity. A doctor, a teacher, a financial analyst, and a logistics manager all need to understand AI. But they need to understand different AI, applied to different problems, with different risk profiles and different failure modes. Generic AI literacy education that does not connect to domain-specific problems produces graduates who can describe AI in general terms but cannot identify the three specific places in their daily work where AI could meaningfully change what they do.

The third failure is the speed of the certification conveyor belt. Online courses reward completion. Completion rewards speed. Speed rewards surface pattern matching, identifying the right answer to the quiz question, rather than the deeper engagement that produces genuine capability. People emerge from these courses with certificates and confident vocabulary and very limited actual ability to do anything productive with AI. Then they get into a real AI project and the gap becomes visible quickly.

The Results Over Seven Years

What has actually come out of seven years of free weekly sessions?

Some of the people who started in those early sessions are now running AI projects in Caribbean organizations. Not as AI engineers but as informed clients of AI development: people who can define the problem, evaluate the solution, and manage the ongoing relationship between their organization and the AI systems it uses. That is exactly the capability that was missing when I started.

Some participants have gone on to formal AI education with a foundation that meant they could engage with the technical material from day one rather than spending the first semester just reorienting. Some have started businesses. Some are teaching AI themselves, which matters more than any individual trajectory.

The aggregate effect is harder to measure but real. There is a larger community of Caribbean people who understand AI at a functional level than would exist otherwise. That community is part of the foundation on which Caribbean AI capability gets built. You cannot build AI capacity in a society without the broad human infrastructure of understanding that allows organizations to use AI, identify AI problems, hire AI practitioners, and evaluate AI claims critically. The sessions have contributed to that infrastructure, one session at a time.

Why I Keep Doing It

People occasionally ask me why I continue doing free sessions when my time has clearly become more commercially valuable. The answer has two parts.

The first part is selfish, in the best sense. Teaching forces clarity. Every session requires me to explain something clearly enough that a person with no prior background can follow it. That discipline keeps me honest about what I actually understand versus what I can perform understanding of. The sessions have made me a better AI practitioner because they have required me to get to the bottom of things rather than operating at the level of professional competence that most practitioners operate at, which is considerably above the bottom.

The second part is more straightforward. I grew up with neurodivergence and learning disabilities in a system that was not designed for the way my mind works. The people who showed up for me, teachers who found different ways to explain things, family members who made space for how I learned rather than insisting I learn the way the curriculum assumed, those people changed what my life was. The sessions are the simplest possible way I have of paying that forward: showing up, every week, for anyone who wants to learn, removing the cost barrier, and teaching the way I wish I had been taught. That is enough reason.
