Embeddings: From Words to Vectors
How AI maps discrete objects into continuous vector spaces
An embedding is a learned mapping from a discrete set (words, users, products) into a continuous vector space. This module covers the geometry of embeddings: why king - man + woman ≈ queen works, how sentence embeddings capture semantic meaning, and what it means for LLM token embeddings to live in a 12,288-dimensional space. You'll also confront the curse of dimensionality: why geometric intuition breaks down in high dimensions.

Mini-lab: Build a semantic search engine. Embed a collection of sentences, compute cosine similarities, and retrieve the most relevant document for a query.
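The mini-lab's retrieval step can be sketched in a few lines. This is a minimal illustration, not the lab solution: it swaps in a toy bag-of-words vectorizer as a stand-in for a learned embedding model, but the cosine-similarity ranking is exactly what you'd run over real sentence embeddings.

```python
import numpy as np

def embed(text, vocab):
    # Toy stand-in for a learned embedding: a bag-of-words count vector.
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

def cosine(a, b):
    # Cosine similarity, guarding against zero vectors.
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0 or nb == 0 else float(a @ b / (na * nb))

docs = [
    "the cat sat on the mat",
    "stock markets fell sharply today",
    "a kitten is a young cat",
]
# Vocabulary built from the corpus itself.
vocab = {w: i for i, w in enumerate(sorted({w for d in docs for w in d.lower().split()}))}

def search(query):
    # Embed the query, score every document, return the best match.
    q = embed(query, vocab)
    scores = [cosine(q, embed(d, vocab)) for d in docs]
    return docs[int(np.argmax(scores))]

print(search("young kitten"))  # → "a kitten is a young cat"
```

With a real sentence-embedding model, only `embed` changes; the similarity computation and retrieval loop stay the same.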
Estimated time: 60 minutes