Andrew Ng's landmark course redesigned for community college learners. Three tracks. Real community projects. No exams. We ask not just how AI works — but who built it, for whom, and at whose expense.
Andrew Ng's "AI for Everyone" reached millions. It's a superb introduction to AI concepts for executives and professionals. We took it apart and rebuilt it for community college students — people who live inside the systems AI is reshaping, often without being asked.
You choose your track at the start of each unit — and can change as you build confidence. All three tracks share the same core readings, discussions, and community projects. They diverge in technical depth.
Understand how AI systems work at a conceptual level. Map AI in your community. Write policy analysis. Participate in civic debate. No coding required — but you'll understand what engineers are building and why it matters to you. This track prepares you to advocate, vote, and organize around AI policy.
Everything in Track I plus hands-on use of AI tools — prompt engineering, workflow automation, generative AI applications. Learn to use AI as a collaborator for real work tasks. By the end, you'll have a portfolio of AI-augmented projects and the critical lens to evaluate what you're using and why it works (or fails).
Build AI-powered applications using APIs and open-source tools. Implement simple classifiers and pipelines from scratch. Understand the technical architecture of systems you use daily. By the end, you'll have deployed a working AI application solving a real community need — and understand every component inside it.
Each unit introduces new concepts through a problem — never the reverse. You encounter the question before you learn the vocabulary. By the end, your four projects form a connected portfolio about AI in your community.
Start by finding AI in the wild. Before any definitions, you'll document AI systems you interact with daily — before and after you understand how they work. Concepts: supervised learning, decision boundaries, training data.
Deep dive into one deployed AI system in housing, hiring, lending, or healthcare. Use public data to ask: who does this system serve? What does it optimize? Concepts: objective functions, error types, fairness definitions.
Transform your impact analysis into action. Draft regulatory proposals, redesign problematic systems, or build alternatives. Concepts: AI governance, regulation frameworks, technical standards, community consent.
Take your work beyond the classroom. Exhibition nights, community presentations, op-eds, open-source releases. Your portfolio should be legible to someone who never took this course. Concepts: communication, advocacy, transfer.
Every AI model learns from past data — and encodes whatever patterns, biases, and omissions that data contains. Garbage in, garbage out; "normal" in, "normal" out.
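A minimal sketch of the point above: a toy "model" that simply memorizes historical averages will reproduce whatever skew the history contains. Group names and outcomes are invented for illustration.

```python
# A toy "model" that memorizes approval rates from historical records.
# If history is skewed, the learned notion of "normal" is skewed too.
# Group names and outcomes are invented for illustration.
past_loan_approvals = {
    "group_a": [1, 1, 1, 0],  # mostly approved in the past
    "group_b": [0, 0, 1, 0],  # mostly denied in the past
}

def learned_approval_rate(group):
    """The model's 'prediction' is just the historical average."""
    history = past_loan_approvals[group]
    return sum(history) / len(history)

print(learned_approval_rate("group_a"))  # 0.75
print(learned_approval_rate("group_b"))  # 0.25
```

Nothing in the code is malicious; the disparity comes entirely from the data it was handed.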
AI optimizes for something specific. Whoever chooses what to optimize makes a value judgment — often hidden. Maximizing "engagement" is not neutral.
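The value judgment hidden in an objective can be shown in a few lines: the same ranking code, pointed at different objectives, produces different feeds. All item names and scores below are invented.

```python
# Hypothetical feed items: (name, predicted_engagement, predicted_wellbeing).
# All names and scores are invented for illustration.
items = [
    ("outrage_post",  0.9, 0.2),
    ("local_news",    0.5, 0.8),
    ("friend_update", 0.4, 0.9),
]

def rank(items, objective):
    """Sort items best-first according to the chosen objective."""
    return [name for name, *_ in sorted(items, key=objective, reverse=True)]

# Objective 1: maximize "engagement" (clicks, time on site).
by_engagement = rank(items, objective=lambda item: item[1])

# Objective 2: same system, different value judgment.
by_wellbeing = rank(items, objective=lambda item: item[2])

print(by_engagement)  # the outrage post wins under "engagement"
print(by_wellbeing)   # a different feed, from a different choice of what to optimize
```

The engineering is identical in both cases; only the choice of objective changed.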
The most common AI approach: show examples with correct answers, learn the pattern. What counts as a "correct answer" is a human choice, every time.
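A toy version of that loop, with a simple threshold rule standing in for a real learner and invented (hours studied, passed) examples as the "correct answers":

```python
# Toy supervised learning: examples are (hours_studied, passed) pairs.
# The labels -- the "correct answers" -- are a human choice; these are invented.
examples = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1)]

def fit_threshold(examples):
    """Find the cutoff that best separates label 0 from label 1."""
    best_t, best_correct = None, -1
    for t in range(0, 8):
        correct = sum((x >= t) == bool(y) for x, y in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = fit_threshold(examples)       # the "pattern" learned from labels
predict = lambda hours: int(hours >= threshold)

print(threshold)      # 4: learned entirely from the labeled examples
print(predict(5))     # 1 (predicted pass)
```

Change the labels and the learned pattern changes with them; that is the whole mechanism, and the whole vulnerability.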
No system is perfect. The question is: whose errors are acceptable? False positives in facial recognition vs. in medical diagnosis carry vastly different stakes.
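Counting the two error types is simple; deciding which is acceptable is not. A sketch with invented labels:

```python
# Hypothetical screening system: 1 = flagged, 0 = not flagged. Invented data.
truth = [1, 1, 1, 0, 0, 0, 0, 0]
preds = [1, 1, 0, 1, 0, 0, 0, 0]

# False positive: flagged, but the true label is 0.
# False negative: missed, though the true label is 1.
false_positives = sum(p == 1 and t == 0 for t, p in zip(truth, preds))
false_negatives = sum(p == 0 and t == 1 for t, p in zip(truth, preds))

# In facial-recognition policing, a false positive can mean a wrongful stop;
# in medical diagnosis, a false negative can mean a missed disease.
print(false_positives, false_negatives)  # 1 1
```

A single "accuracy" number hides this breakdown, which is exactly why the course asks whose errors a system is allowed to make.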
Loosely inspired by the brain. Layers of transformations that learn to detect patterns. Not magic — mathematics. Neurons, weights, activation functions.
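The "weighted sum plus activation" idea can be written out directly; the weights and inputs below are arbitrary illustrative numbers.

```python
import math

# A single artificial "neuron": weighted sum of inputs, then a nonlinearity.
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias   # weighted sum
    return 1 / (1 + math.exp(-z))                            # sigmoid activation

# A "layer" is just many neurons reading the same inputs; a network stacks
# layers so later ones detect patterns in earlier outputs.
layer = [
    neuron([0.5, -1.0], weights=[2.0, 1.0], bias=0.0),
    neuron([0.5, -1.0], weights=[-1.0, 0.5], bias=1.0),
]
print(layer)
```

That is the entire mathematical unit; "learning" means adjusting the weights and biases until the outputs match the training labels.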
Predict the next token, at scale. ChatGPT, Claude, Gemini are autocomplete systems trained on the internet — which reflects the full spectrum of human writing, including its worst.
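A toy next-token predictor built from raw bigram counts, a caricature of the idea rather than the scale: real LLMs learn representations instead of counting, but the prediction target is the same. The corpus is invented.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the
# most frequent follower. The corpus is invented for illustration.
corpus = "the cat sat on the mat the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most common token seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most frequent follower in training
```

The prediction can only ever reflect the text it was trained on, which is the point the paragraph above is making about internet-scale corpora.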
A model performs well in a lab. In the real world, distribution shifts, adversarial inputs, and feedback loops change everything. Lab accuracy ≠ community accuracy.
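The lab-versus-field gap can be demonstrated with a threshold rule tuned on one distribution and scored on a shifted one; all numbers are invented.

```python
# (feature, label) pairs. The "field" data is the same task, but the
# feature distribution has shifted. All numbers are invented.
lab_data   = [(2, 0), (3, 0), (6, 1), (7, 1)]
field_data = [(4, 0), (5, 0), (8, 1), (9, 1)]

threshold = 5  # tuned on lab_data: every lab example is classified correctly

def accuracy(data, t):
    return sum((x >= t) == bool(y) for x, y in data) / len(data)

print(accuracy(lab_data, threshold))    # 1.0 in the lab
print(accuracy(field_data, threshold))  # 0.75 in the field
```

Nothing in the model changed between the two scores; the world did.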
Who decides where AI gets deployed? Who gets to opt out? Community consent and democratic oversight are the missing pieces in most AI deployments.
Joy Buolamwini's term for encoded bias in computer vision. Systems that struggle to see darker skin are not technical failures — they are failures of who was in the room when the data was collected.
Gender Shades. The coded gaze. Bias in facial recognition systems as a function of whose faces were in the training data. Core reading, Week 3.
Algorithms of Oppression. Google search results as a site of racial and gender bias. How information retrieval encodes power. Core reading, Week 5.
Race After Technology. The "New Jim Code" — when automation encodes and amplifies racial hierarchy while appearing neutral. Core reading, Week 7.
Equitable, joyous computing education. Questioning who computing is for. Foundational to the course's pedagogical approach throughout.
Pedagogy of the Oppressed. Students are not empty vessels to be filled. Education is a liberatory practice when students are agents, not objects.
The original "AI for Everyone" — which we use, critique, and extend. Ng's conceptual clarity on AI techniques is unmatched. We add the community lens.
How high-tech tools profile, police, and punish the poor. Automated eligibility systems, predictive policing, child welfare algorithms. Core reading, Week 9.
The classroom as a site of freedom. Education as the practice of freedom. Every student brings knowledge worth centering. Referenced throughout.