Community College CS · Andrew Ng × Critical Pedagogy · 3 Tracks · Portfolio · No Exams
Generative AI For Everyone · Reimagined
Prompts are arguments.

Andrew Ng's Generative AI for Everyone rebuilt for community college learners. Every image is a choice. Every output reflects the data it was trained on. We use these tools critically, build with them skillfully, and question them rigorously.

No Prereqs (Track I) · 3 Tracks · 18 Weeks · Portfolio · Free Tools Only
Prompt → Tokenization → Prediction
"Write a job description for a software engineer at a community nonprofit in San Jose."
Each token is a prediction. What does "software engineer" predict next? What assumptions are baked into that prediction?
0 Exams · 0 Required Textbooks · 3 Learning Tracks · 4 Portfolio Projects

This vs. Andrew Ng's Original

Richer tools.
Sharper questions.

Andrew Ng's "Generative AI for Everyone" is an excellent conceptual overview — clear, accessible, and free. We use it as a foundation, then push further: we build with these tools, audit their outputs, and ask whose world they reproduce.

Andrew Ng's Version
Overview → Quizzes
  • Video lecture series on generative AI concepts
  • Single path — one depth for everyone
  • Assessment: multiple choice quizzes
  • Limited hands-on building or deployment
  • Minimal treatment of bias in generated outputs
  • No community application or real project requirement
This Course
Critical Use → Build → Audit
  • Every concept explored through hands-on prompting and analysis
  • Three tracks: collaborator, creator, critic — you choose
  • Assessment: four portfolio projects you own and can show
  • Track II and III students deploy real gen AI applications
  • Bias in generative outputs is a required project — quantified, not just named
  • Capstone must address a real community need you identify

Three Learning Tracks

Use, build, or interrogate.

Choose your depth. All tracks share the same core discussions, readings, and community case studies. They differ in how technically deep you go — and you can change tracks as your confidence grows.

Track I
AI Collaborator
Prereq: None · Basic internet use
Focus

Learn to use generative AI tools effectively, critically, and safely. Prompt engineering, fact-checking AI outputs, understanding hallucination, and recognizing when a tool is failing you. Produce a portfolio of real work augmented by AI — with analysis of what the AI got right, wrong, and why.

Projects
  • P1 · Prompt Portfolio: 30 prompts across 3 different tools, with analysis
  • P2 · Fact-Check Challenge: Verify 20 AI-generated claims against primary sources
  • P3 · AI-Augmented Work: Use AI to complete a real project in your career field
  • P4 · Critical Review: When did AI help vs. mislead you, and why?
Track II
AI Creator
Prereq: Basic Python · Comfortable with APIs
Focus

Build real applications using generative AI APIs. Chain prompts, use RAG, build assistants and content pipelines. Evaluate outputs programmatically. Deploy a working gen AI application for a real community use case. Understand the economics, constraints, and failure modes of building with gen AI.

Projects
  • P1 · Prompt Engineering System: Build a multi-step prompt chain for a real task
  • P2 · RAG Application: Build a document Q&A system on community resources
  • P3 · Bias Measurement: Quantify output bias across demographic groups using code
  • P4 · Deployed App: Build and ship a gen AI tool for a real community need
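A Track II prompt chain can start as small as a few composed functions. The sketch below is illustrative only: `call_model` is a stub standing in for a real generative AI API call, and every function and prompt here is a hypothetical example, not a required design.

```python
# Sketch of a multi-step prompt chain. `call_model` is a stub standing in
# for a real hosted LLM API; all names and prompts here are illustrative.
def call_model(prompt):
    """Stub: a real implementation would call a generative AI API."""
    return f"<model output for: {prompt[:40]}...>"

def summarize(text):
    return call_model(f"Summarize in two sentences:\n{text}")

def extract_actions(summary):
    return call_model(f"List concrete action items from:\n{summary}")

def draft_email(actions, audience):
    return call_model(f"Write a short email to {audience} covering:\n{actions}")

def chain(meeting_notes, audience="volunteer coordinators"):
    """Each step's output becomes the next step's input -- and each
    intermediate step can be logged and fact-checked on its own."""
    summary = summarize(meeting_notes)
    actions = extract_actions(summary)
    return draft_email(actions, audience)

print(chain("Notes: pantry hours changing; need 3 more weekend volunteers."))
```

The design point: a chain exposes intermediate outputs, so failures can be localized to a step instead of hiding inside one giant prompt.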
Track III
AI Critic
Prereq: Python · Statistics · ML background
Focus

Technically understand and rigorously audit generative AI systems. Probe models for bias, test alignment, examine training data provenance. Write analyses that are both technically precise and publicly legible. Prepare for AI safety, policy, or research roles. Your capstone must be publishable.

Projects
  • P1 · Red Team Report: Find 10 failure modes in a widely used gen AI tool
  • P2 · Image Audit: Quantify bias in 500 AI-generated images using CLIP embeddings
  • P3 · Training Data Analysis: Trace outputs back to training data provenance
  • P4 · Publishable Audit: Full technical + ethical audit of a deployed gen AI system
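The measurement step of an image audit reduces to arithmetic over embeddings. A minimal sketch, assuming hypothetical CLIP-style vectors (real audits would embed 500 generated images with an actual CLIP model; every number below is made up so the computation is visible):

```python
import math

# Sketch of the measurement step in an image-bias audit. Real audits embed
# generated images with CLIP; here the embeddings are tiny made-up 2-D
# vectors so the arithmetic is visible. All numbers are illustrative.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# hypothetical embeddings: images generated from a prompt like "a CEO"
generated = [(0.9, 0.1), (0.8, 0.3), (0.85, 0.2)]
# hypothetical anchor embeddings for two demographic reference sets
anchor_a = (1.0, 0.0)
anchor_b = (0.0, 1.0)

def mean_similarity(images, anchor):
    return sum(cosine(img, anchor) for img in images) / len(images)

skew = mean_similarity(generated, anchor_a) - mean_similarity(generated, anchor_b)
print(round(skew, 3))  # positive: outputs lean toward reference set A
```

The same structure scales: replace the toy vectors with real embeddings and the skew becomes a reportable, testable number rather than an impression.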

18-Week Project Arc

Four units.
Four questions.

Each unit is organized around a question you can answer only by doing. The question comes first. The tools come second. By the end, your four projects form a coherent argument about generative AI in your community.

Weeks 1–4
What is a prompt?

Not just a sentence. A prompt is an argument about what exists, who matters, and what should be said. Explore how small changes in prompts produce radically different outputs. Learn to read outputs critically before learning to write prompts skillfully.

Deliverable → Prompt Portfolio + Analysis
Weeks 5–9
What does it get wrong?

Hallucination, bias, stereotyping, erasure. Systematically document how a gen AI tool fails — particularly for communities underrepresented in training data. Quantify, don't just observe. Failure analysis is the most technically rigorous part of the course.

Deliverable → Bias Audit Report
Weeks 10–14
What can we build?

Design and build a generative AI application that serves a real community need you identified in Unit 2. The tool should address a gap — not replicate something that already exists. Every design decision must be documented and justified.

Deliverable → Working Application or Policy Brief
Weeks 15–18
What should we decide?

Take a position. Should your community use a specific generative AI system? Under what conditions? With what safeguards? Who should be in the room making that decision? Your final project is an argument — technical and civic — that you present publicly.

Deliverable → Public Exhibition Portfolio

Core Concepts

What every track
will know.

01
Tokens, Not Words

Language models don't read words — they process tokens. A token is a statistical unit, not a semantic one. Understanding this changes how you read every output.
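A toy greedy longest-match tokenizer makes the word/token gap concrete. The vocabulary below is invented for illustration; real tokenizers (BPE, WordPiece) learn theirs from data, but the mismatch with word boundaries is the same:

```python
# Toy greedy longest-match tokenizer over a hypothetical subword vocabulary.
# Real tokenizers learn their vocabularies from corpus statistics; this
# sketch only shows that tokens need not align with words.
VOCAB = {"un", "believ", "able", "token", "iz", "ation"}

def tokenize(text, vocab=VOCAB):
    """Split text into the longest vocabulary pieces, left to right."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("unbelievable"))  # ['un', 'believ', 'able']
print(tokenize("tokenization"))  # ['token', 'iz', 'ation']
```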

02
Next-Token Prediction

All language generation is, at its core, predicting the most likely next token given all previous tokens. Autocomplete, at scale, on all of human text. No understanding required.
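The "autocomplete at scale" idea can be shown with the smallest possible language model: a bigram counter. This is a deliberate simplification — real LLMs condition on the whole context with a neural network — but the objective has the same shape: pick the most likely next token given what came before.

```python
from collections import Counter, defaultdict

# Minimal bigram "language model": predicts the next word from counts alone.
# A toy stand-in for next-token prediction; the corpus is a made-up example.
corpus = ("the model predicts the next token . "
          "the next token is the most likely token .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))   # 'next'
print(predict_next("next"))  # 'token'
```

Notice there is no "understanding" anywhere in this code — only frequency. Scaling the context window and the statistics is what separates this toy from a production model.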

03
Hallucination

LLMs generate plausible text. "Plausible" and "true" are different things. The model does not know what it does not know — it will invent confidently. This is a property of the architecture, not a bug to be fixed.

04
Training Data Provenance

The model is its training data. What texts were included? Whose voices were overrepresented? Whose were excluded? The output of any gen AI system is a function of these choices.

05
Prompt Engineering

System prompts, few-shot examples, chain-of-thought, role prompting. Each technique changes what the model "attends to." Prompts are arguments about what to prioritize.
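These techniques are, mechanically, just text assembly — which is why each structural choice is an argument about what the model should weight. A minimal sketch (the task, examples, and labels are hypothetical):

```python
# Assembling a few-shot prompt with an optional chain-of-thought cue.
# The classification task and examples are hypothetical illustrations.
def build_prompt(system, examples, query, chain_of_thought=True):
    parts = [f"System: {system}", ""]
    for q, a in examples:  # few-shot demonstrations
        parts += [f"Q: {q}", f"A: {a}", ""]
    parts.append(f"Q: {query}")
    if chain_of_thought:
        parts.append("A: Let's reason step by step.")  # elicit intermediate steps
    else:
        parts.append("A:")
    return "\n".join(parts)

prompt = build_prompt(
    system="You label job postings as 'technical' or 'non-technical'.",
    examples=[("Grant writer for a nonprofit", "non-technical"),
              ("Backend engineer, Python/Django", "technical")],
    query="Community data analyst",
)
print(prompt)
```

Which examples you include, and in what order, changes the answer — the prompt encodes your assumptions before the model ever runs.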

06
Retrieval-Augmented Generation

Ground the model's outputs in real documents. Reduces hallucination for specific domains. Does not eliminate it. Understanding RAG's limits is as important as understanding its power.
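The retrieval half of RAG can be sketched with simple TF-IDF scoring — a stand-in for the dense-embedding search real systems use. The documents below are invented examples, and the final LLM call is omitted:

```python
import math
from collections import Counter

# Minimal retrieval step for RAG: score documents by TF-IDF overlap with
# the question, then paste the best one into the prompt. A real system
# would use dense embeddings and an actual LLM call; both are simplified here.
docs = [
    "The food pantry on 5th Street is open Tuesdays and Thursdays, 9am to 1pm.",
    "Free tax preparation is offered at the library every Saturday in spring.",
    "The community clinic accepts walk-ins for vaccinations on Friday mornings.",
]

def tfidf_score(query, doc, corpus):
    q, d = query.lower().split(), Counter(doc.lower().split())
    n = len(corpus)
    score = 0.0
    for w in q:
        df = sum(1 for c in corpus if w in c.lower().split())
        if df:
            score += d[w] * math.log((n + 1) / df)  # tf * idf
    return score

def retrieve(query):
    return max(docs, key=lambda doc: tfidf_score(query, doc, docs))

context = retrieve("When is the food pantry open?")
prompt = f"Answer using only this context:\n{context}\n\nQ: When is the food pantry open?"
print(context)
```

Note the limit this sketch makes visible: if the right document was never indexed, retrieval grounds the model in the wrong text — RAG constrains hallucination, it does not abolish it.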

07
Diffusion Models

Image generation is not painting from imagination. It is denoising: reversing a diffusion process learned from captioned images. Those captions encode human aesthetic choices and power structures.
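The forward half of that process — the corruption the model learns to undo — fits in a few lines. A minimal 1-D sketch; the "image" is four numbers and the noise schedule is an arbitrary illustrative constant:

```python
import random

# Forward diffusion on a 1-D "image": each step shrinks the signal and
# mixes in Gaussian noise. Image generators learn the reverse (denoising)
# direction; this sketch only shows the forward corruption.
random.seed(0)  # fixed seed so the run is reproducible

def noising_step(x, beta=0.2):
    """One diffusion step: scale the signal down, add scaled noise."""
    keep = (1 - beta) ** 0.5
    return [keep * v + (beta ** 0.5) * random.gauss(0, 1) for v in x]

signal = [1.0, -1.0, 1.0, -1.0]  # a crisp toy "image"
x = signal
for t in range(50):              # many steps -> nearly pure noise
    x = noising_step(x)

# the surviving signal fraction after 50 steps is (1 - beta)^(50/2), tiny
print([round(v, 2) for v in x])
```

Generation runs this film backwards: start from noise and denoise step by step, with text captions steering which "image" the noise resolves into.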

08
RLHF and Alignment

Reinforcement learning from human feedback trains models to produce outputs that human raters prefer. Who are those raters? What did they find acceptable? This shapes every response.

09
Model Cards

Documentation for AI models: what they were trained on, what they're good at, where they fail, who should not use them. Reading model cards is a professional skill. Writing them is an ethical obligation.
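In practice a model card is structured data before it is prose. A minimal sketch — the field names follow the spirit of the model-card genre, and every value is a placeholder for a hypothetical course project:

```python
# A minimal model card as structured data. Field names follow the spirit
# of standard model-card templates; all values are hypothetical placeholders.
model_card = {
    "model_name": "community-qa-rag-v1",  # hypothetical project name
    "intended_use": "Answering questions about local community resources.",
    "training_data": "Public documents from partner organizations, 2023-2024.",
    "known_limitations": [
        "Hallucinates dates when the source document is ambiguous.",
        "English-only; fails silently on Spanish-language queries.",
    ],
    "out_of_scope": "Medical, legal, or financial advice.",
    "evaluation": "Manual fact-check of 100 sampled answers.",
}

def render(card):
    """Render the card as plain text for a project README."""
    lines = []
    for key, value in card.items():
        if isinstance(value, list):
            lines.append(f"{key}:")
            lines += [f"  - {v}" for v in value]
        else:
            lines.append(f"{key}: {value}")
    return "\n".join(lines)

print(render(model_card))
```

The discipline is in the limitations fields: a card that lists no known failures is describing a model nobody has tested.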

"A prompt is not a command. It is a negotiation — between what you want, what the model has seen, and what the training data encoded about the world."
Course Philosophy · Benjamin, Noble, Ng, Ko, hooks
🔬
Measure, Don't Just Name
Saying "AI is biased" is the beginning of the analysis, not the end. We quantify, test, and document — then propose remedies.
🛠️
Build What You Critique
The best critics understand what they're critiquing from the inside. You will build with these tools before you audit them.
🏘️
Community as Client
Your capstone must serve a real community need. Not a toy problem. A real person or organization who will use what you build.
📣
Make It Public
Your portfolio must be legible beyond this classroom. We hold exhibition nights and community presentations every semester.

Intellectual Lineage

Who we're reading.

Andrew Ng
DeepLearning.AI

The original "Generative AI for Everyone" — whose conceptual clarity and accessible framing we build on, critique, and extend for community college students.

Emily Bender
Stochastic Parrots

On the dangers of large language models at scale. What they cannot do. What harms they cause at the margins. Core reading for Unit 2.

Timnit Gebru
DAIR Institute

Datasheets for datasets. The ethics of large-scale AI training. Whose labor built these systems? Who was erased? Core reading, Weeks 6–8.

Ruha Benjamin
Race After Technology

When automation appears neutral but encodes and amplifies racial hierarchy. Essential framing for the bias audit project in Unit 2.

Joy Buolamwini
Unmasking AI

The coded gaze applied to generative image models. Whose face is the default? Whose aesthetic is "beautiful"? Methodology for Track III image audits.

Amy J. Ko
UW CS Education

Equitable computing education. Agency over anxiety. The three-track structure and portfolio system are built on her pedagogical framework.

Safiya Umoja Noble
Algorithms of Oppression

Information retrieval as a site of racial bias. The search engine as a mirror of power. Foundational for understanding recommendation and retrieval in gen AI.

bell hooks
Teaching to Transgress

The classroom as a site of freedom. Education as the practice of freedom. Every student brings knowledge worth centering. Referenced throughout the course.