AI isn't magic — it's math, history, and human choice wrapped in an abstraction. We dismantle the abstraction layer by layer, learning from the root. We choose our own adventure, build things that matter to our communities, and ask the questions that textbooks skip.
Most AI courses teach you to use tools. This one teaches you to think — about systems, power, and whose intelligence gets valued. We start at the root: probability, logic, linear algebra. We build each idea from scratch before we use it. We ask hard questions at every layer: who built this? Who benefits? Who gets harmed? You'll leave with tools and the critical judgment to use them responsibly.
We don't start with TensorFlow. We start with a probability, a line, a decision. You'll understand every layer before you stack them. Research by Amy Ko shows that understanding before doing creates stronger, more transferable learning. You'll read the math and understand it before you write the code — not the other way around.
Three tracks share core concepts and diverge on projects. You pick problems that matter to your community. As Jeff Anderson's framework puts it: "You are the world's leading expert on your own learning." Your final project is yours to define.
Rooted in Paulo Freire and bell hooks: you are not an empty vessel to be filled with AI facts. You are a co-creator of knowledge. Critical consciousness about who built AI systems — and who they harm — is not a "soft" topic. It is the most advanced technical skill in this course.
Dr. Ko at the University of Washington has spent 25+ years researching how to make computing education equitable, joyous, and liberatory. Her discoveries directly shape this course: justice-focused CS requires student trust and agency; understanding ML requires understanding uncertainty; reading programs before writing them builds deeper mastery; scaffolded problem solving beats unguided trial-and-error. CS assessments often aren't fair — so we ditch traditional assessments entirely.
Every student engages the same core concepts each week — probability, optimization, learning, ethics. Then we diverge into project tracks. Tracks aren't ceilings. They're starting points. You can move up mid-semester. You can also propose your own track.
You'll use visual tools, Google Colab notebooks, and real community data to understand AI from the outside in. We learn to read AI systems before we write them — a principle from Amy Ko's research: reading before writing creates deeper mastery.
You implement core algorithms from scratch in NumPy before ever touching scikit-learn. The rule: never use a library function until you've built it yourself first. Understanding means being able to explain every matrix multiplication.
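As a concrete illustration of the kind of exercise this rule implies (a sketch, not an actual course assignment): linear regression fit by batch gradient descent in plain NumPy, checked on data with a known answer, before you would ever call scikit-learn's `LinearRegression`.

```python
import numpy as np

def fit_linear_regression(X, y, lr=0.1, epochs=500):
    """Fit y ~ X @ w + b by batch gradient descent, from scratch."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        y_hat = X @ w + b             # forward pass: predictions
        err = y_hat - y               # residuals
        grad_w = (2 / n) * X.T @ err  # d(MSE)/dw
        grad_b = (2 / n) * err.sum()  # d(MSE)/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Sanity check on data with a known answer: y = 3x + 1, no noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X[:, 0] + 1
w, b = fit_linear_regression(X, y)
print(w, b)  # should be close to [3.] and 1.0
```

Once you can explain why the two gradient lines follow from the MSE loss, the library version stops being a black box.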
Calculus, linear algebra, probability theory — all in play. You'll derive backpropagation from the chain rule, understand attention mechanisms mathematically, and engage critically with published AI research papers. Amy Ko: "understanding ML requires understanding uncertainty."
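To make "derive backpropagation from the chain rule" concrete, here is a minimal sketch (variable names `W`, `v`, `h` are illustrative, not from course materials): a one-hidden-layer network whose gradients are written out by hand via the chain rule, then verified against a finite-difference estimate.

```python
import numpy as np

# One hidden layer, scalar output, squared-error loss.
rng = np.random.default_rng(1)
x = rng.normal(size=3)            # input vector
t = 0.5                           # target
W = rng.normal(size=(4, 3)) * 0.5
v = rng.normal(size=4) * 0.5

def forward(W, v):
    h = np.tanh(W @ x)            # hidden activations
    y = v @ h                     # scalar output
    return 0.5 * (y - t) ** 2, h, y

loss, h, y = forward(W, v)

# Backward pass: chain rule, layer by layer
dy = y - t                                     # dL/dy
dv = dy * h                                    # dL/dv
dh = dy * v                                    # dL/dh
dW = ((1 - h**2) * dh)[:, None] * x[None, :]   # tanh'(z) = 1 - tanh(z)^2

# Finite-difference check on one weight: nudge it, re-run forward
eps = 1e-6
W2 = W.copy(); W2[0, 0] += eps
num = (forward(W2, v)[0] - loss) / eps
print(abs(num - dW[0, 0]))        # should be tiny
```

The gradient check is the habit worth keeping: every hand-derived gradient in the course can be audited the same way.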
We start with a question, not a tool: what is intelligence, and who decided? Before writing a line of code, we read, debate, and think critically. Every week opens with a problem you can't yet solve. By the end of that week, you can — and you understand why the solution works, not just how to run it.
Before writing a line of code, pause and take stock of what you already bring in.
Research shows that the strongest learning happens when new ideas attach to things you already know (Ambrose et al., 2010). The connections you make are yours — nobody can take them away.
Adapted from Jeff Anderson's ungrading practice, grounded in research showing traditional grading harms learning. We don't optimize for grades. We optimize for real mastery you can use in your career and community.
Your portfolio is evidence of your learning journey — not a finished showcase, but a process document. First attempts, failures, revised understanding, growth moments. Include everything.
You receive feedback from three sources, not one, and the instructor's is the smallest share. This mirrors professional practice in tech.
These aren't productivity hacks. They're how people actually learn hard technical material — especially first-generation students navigating a field that wasn't designed with them in mind.
Before running any code, trace through it by hand on paper with 3 small inputs. Predict the output. Amy Ko's research: this is the single highest-leverage learning habit in CS.
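For instance, a trace-by-hand exercise might look like this (a made-up `mystery` function, purely illustrative): predict the output for n = 1, 2, 3 on paper, then run the code to check yourself.

```python
def mystery(n):
    """Before running: trace this by hand for n = 1, 2, 3."""
    total = 0
    while n > 0:
        total += n
        n -= 2
    return total

# Hand trace, then confirm:
# n=1: total=1, n=-1 -> returns 1
# n=2: total=2, n=0  -> returns 2
# n=3: total=3, n=1; total=4, n=-1 -> returns 4
for n in (1, 2, 3):
    print(n, mystery(n))
```

If your paper trace disagrees with the program, one of your two models is wrong, and finding out which one is the learning.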
Explain one concept to a classmate, family member, or rubber duck every week. If you can't explain it plainly, you don't understand it yet. Teaching reveals the gaps that practice hides.
When stuck on an algorithm, shrink the problem. n=3 instead of n=100. Draw it. The bug is almost always visible at small scale. Don't reach for debugging tools before reaching for paper.
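One way to practice shrinking the problem (an illustrative sketch, not course code): run selection sort on a three-element list and print the state after every step, so each swap can be checked against your drawing.

```python
def selection_sort(xs):
    """Sort a list by repeatedly swapping the minimum into place."""
    xs = list(xs)
    for i in range(len(xs)):
        j = min(range(i, len(xs)), key=lambda k: xs[k])  # index of min
        xs[i], xs[j] = xs[j], xs[i]
        print(f"after step {i}: {xs}")  # state is visible at small n
    return xs

print(selection_sort([3, 1, 2]))
```

At n=3 there are only three steps to check; at n=100 the same bug hides in noise.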
Before using ChatGPT or Stack Overflow, write your precise confusion: what you understand, what step breaks, what you've tried. That writing is the learning. The answer is secondary.
Error messages are the computer's way of pointing exactly at the problem. Read the full error. Find the line number. Read the surrounding code. Most bugs announce themselves clearly — if you look.
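A minimal Python illustration (a hypothetical example, not course code): the traceback for this call names the exception type, the offending line, and the call chain that reached it — exactly the things to read first.

```python
def mean(xs):
    return sum(xs) / len(xs)   # fails when xs is empty

try:
    mean([])
except ZeroDivisionError as err:
    # The real traceback would show, bottom to top:
    #   ZeroDivisionError: division by zero   <- the exact problem
    #   File "...", line 2, in mean           <- the line number to read
    #   File "...", line 5, in <module>       <- the call that got there
    print(type(err).__name__)
```

Reading bottom-up — exception first, then line, then callers — turns most tracebacks from noise into a map.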
Confusion doesn't mean you're behind — it means you're at the edge of your current model. Write down exactly what's confusing. That specificity is expertise in formation. Then bring it to class.
AI isn't a collection of isolated tools — it's a web of ideas that build on each other. Here's how every major concept in this course connects.
Every project in this course can be applied to real data that matters — not iris flowers and toy examples. These datasets are curated to connect technical learning to real-world consequence.
Recidivism prediction scores used in US courts. Foundational for bias auditing — the dataset behind ProPublica's Machine Bias investigation. → Get Dataset
Predict whether income exceeds $50K. Exposes class, race, and gender disparities in feature importance. Classic fairness benchmark. → Get Dataset
BART and AC Transit ridership by station. Build a graph-based recommendation system or route optimizer. → Get Dataset
Tweets labeled hate speech, offensive, or neither. Useful for NLP classification and studying labeling bias in training data. → Get Dataset
Joy Buolamwini's Gender Shades benchmark. Exposes intersectional bias in commercial facial analysis systems. → Learn More
One million+ words from 500 English texts across 15 genres. Free, built into NLTK. Excellent for NLP and language analysis. → Get Dataset
Complete units to unlock badges. They're tracked locally in your browser — a record of your journey, not a grade.