Build Your Own Mini GPT Workshop
As part of the Global AI Community, we're hosting a hands-on session exploring the fundamentals behind modern Large Language Models (LLMs). During this workshop, we'll walk through how LLMs process text, how tokenization actually works, and how you can build and train a compact GPT-style model yourself. We'll break down the internal components of a tiny transformer, train it on a sample dataset, and experiment with generating text from the model you build.
During this session, you'll learn:
- How tokenization converts text into model-ready tokens (see the short sketch after this list)
- How a Mini GPT architecture is structured (embeddings, attention, transformer blocks)
- How the training loop works and how the model learns patterns
- How to generate text using your trained model
- What small models can teach us about today's large-scale LLMs
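To give you a taste of the first topic, here is a minimal sketch of character-level tokenization, the simplest scheme commonly used for tiny GPT-style models. This is illustrative only; the actual tokenizer and dataset used in the workshop may differ.

```python
# Minimal character-level tokenizer sketch (illustrative only;
# the workshop's actual tokenizer and dataset may differ).
text = "hello mini gpt"

# Build a vocabulary from the unique characters in the text.
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}  # char -> token id
itos = {i: ch for ch, i in stoi.items()}      # token id -> char

def encode(s: str) -> list[int]:
    """Convert a string into a list of integer token ids."""
    return [stoi[c] for c in s]

def decode(ids: list[int]) -> str:
    """Convert a list of token ids back into a string."""
    return "".join(itos[i] for i in ids)

ids = encode("mini gpt")
print(ids)          # [6, 4, 7, 4, 0, 2, 9, 10]
print(decode(ids))  # "mini gpt"
```

Real LLMs use subword tokenizers (such as byte-pair encoding) rather than single characters, but the core idea is the same: map text to integer ids the model can learn from, and map ids back to text for generation.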
⚠️ Please bring your own device! ⚠️
This will be a fully interactive workshop where you build, train, and test the model step by step on your own machine. We'll start the evening with an introduction to LLM basics and tokenization, then move into the hands-on part, where you'll construct your own Mini GPT model and see it generate text live.
Program
- 18:00: Welcome & community introduction
- 18:30: Introduction to Tokenization & Mini GPT Models
- 19:00: Coffee Break
- 19:15: Build Your Own Mini GPT Workshop
- 21:00: Q&A / Networking & drinks
- 21:30: Closing
