What This Course Covers

Every AI output travels through a pipeline of human decisions — tokenisation, vector mapping, attention weighting, fine-tuning, and system prompts. Each layer introduces usage and governance risk. Each layer encodes human values, biases, and blind spots. This course deconstructs that pipeline, layer by layer, so you can understand what’s actually happening inside the systems you and the organisations around you deploy.
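To make the first two layers of that pipeline concrete, here is a toy sketch of tokenisation (text to integer IDs) and vector mapping (IDs to embeddings). The splitting rule, vocabulary size, and hash-based embedding are illustrative assumptions only — real models use learned subword tokenisers and trained embedding matrices:

```python
# Toy sketch of two pipeline layers: tokenisation and vector mapping.
# The vocabulary size, word-level splitting, and hash-derived vectors are
# illustrative assumptions, not how any production model works.

import hashlib

VOCAB_SIZE = 50_000   # assumed vocabulary size
EMBED_DIM = 8         # tiny embedding dimension, for illustration only

def tokenise(text: str) -> list[int]:
    """Map each whitespace-separated word to a stable integer ID.
    (Real tokenisers use learned subword merges such as BPE.)"""
    return [
        int(hashlib.sha256(w.lower().encode()).hexdigest(), 16) % VOCAB_SIZE
        for w in text.split()
    ]

def embed(token_id: int) -> list[float]:
    """Deterministically map a token ID to a small vector.
    (Real models look this up in a learned embedding matrix.)"""
    digest = hashlib.sha256(str(token_id).encode()).digest()
    return [b / 255.0 for b in digest[:EMBED_DIM]]

ids = tokenise("Every AI output travels through a pipeline")
vectors = [embed(i) for i in ids]
print(len(ids), len(vectors[0]))  # 7 tokens, each mapped to an 8-dim vector
```

Even this toy version shows where human choices enter: someone decided how text is split, how large the vocabulary is, and how many dimensions a "meaning" gets.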

You’ll learn the technical architecture of large language models — how these systems work, whose choices shaped them, where the vulnerabilities live, and what questions to ask.

Course Lessons

  1. Fragments of Thought: The Tokenisation Layer
  2. The Geometry of Meaning: Mapping Vector Space
  3. The Gateway to Meaning: Transformers and Attention
  4. The Invisible Architectures: Fine-Tuning and Alignment
  5. Who Holds the Prompt? Epistemic Power and the Invisible Frame
  6. Test Your Understanding

Connect with Us