Andrew Ng's Generative AI for Everyone, rebuilt for community college learners. Every image is a choice. Every output reflects the data the model was trained on. We use these tools critically, build with them skillfully, and question them rigorously.
Andrew Ng's "Generative AI for Everyone" is an excellent conceptual overview — clear, accessible, and free. We use it as a foundation, then push further: we build with these tools, audit their outputs, and ask whose world they reproduce.
Choose your depth. All tracks share the same core discussions, readings, and community case studies. They differ in how technically deep you go — and you can change tracks as your confidence grows.
Learn to use generative AI tools effectively, critically, and safely. Prompt engineering, fact-checking AI outputs, understanding hallucination, and recognizing when a tool is failing you. Produce a portfolio of real work augmented by AI — with analysis of what the AI got right, wrong, and why.
Build real applications using generative AI APIs. Chain prompts, use RAG, build assistants and content pipelines. Evaluate outputs programmatically. Deploy a working gen AI application for a real community use case. Understand the economics, constraints, and failure modes of building with gen AI.
Technically understand and rigorously audit generative AI systems. Probe models for bias, test alignment, examine training data provenance. Write analyses that are both technically precise and publicly legible. Prepare for AI safety, policy, or research roles. Your capstone must be publishable.
Each unit is organized around a question you can answer only by doing. The question comes first. The tools come second. By the end, your four projects form a coherent argument about generative AI in your community.
Not just a sentence. A prompt is an argument about what exists, who matters, and what should be said. Explore how small changes in prompts produce radically different outputs. Learn to read outputs critically before learning to write prompts skillfully.
Hallucination, bias, stereotyping, erasure. Systematically document how a gen AI tool fails — particularly for communities underrepresented in training data. Quantify, don't just observe. Failure analysis is the most technically rigorous part of the course.
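What "quantify, don't just observe" can look like in practice: run the same prompt template across groups, record each failure, and turn the log into per-group failure rates. This is an illustrative sketch with hypothetical data, not a prescribed audit protocol.

```python
def failure_rates(audit_log):
    """audit_log: list of (group, failed) pairs from a structured audit.
    Returns the failure rate per group, so disparities are numbers, not impressions."""
    totals, fails = {}, {}
    for group, failed in audit_log:
        totals[group] = totals.get(group, 0) + 1
        fails[group] = fails.get(group, 0) + int(failed)
    return {g: fails[g] / totals[g] for g in totals}

# Hypothetical audit: one prompt template, varied only by which community is named.
log = [("group_a", False), ("group_a", True), ("group_b", True), ("group_b", True)]
print(failure_rates(log))  # {'group_a': 0.5, 'group_b': 1.0}
```

A disparity like 0.5 versus 1.0 is a claim you can defend, replicate, and publish.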
Design and build a generative AI application that serves a real community need you identified in Unit 2. The tool should address a gap — not replicate something that already exists. Every design decision must be documented and justified.
Take a position. Should your community use a specific generative AI system? Under what conditions? With what safeguards? Who should be in the room making that decision? Your final project is an argument — technical and civic — that you present publicly.
Language models don't read words — they process tokens. A token is a statistical unit, not a semantic one. Understanding this changes how you read every output.
All language generation is, at its core, predicting the most likely next token given all previous tokens. Autocomplete, at scale, on all of human text. No understanding required.
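The idea above can be sketched as a toy model. Real LLMs use learned subword tokens and deep neural networks, not bigram counts, and the whitespace split here stands in for real tokenization, but the core move is the same: given what came before, predict the most likely next token.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each token, which token most often follows it."""
    tokens = text.split()  # a crude stand-in for real subword tokenization
    following = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, token):
    """Return the most likely next token, or None if the token was never seen."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = "the model predicts the next token and the next token again"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'next' follows 'the' most often in this corpus
```

Notice what the toy model shares with the real thing: no meaning, no truth, just frequency. Scale the corpus to all of human text and the outputs become fluent, but the mechanism does not change.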
LLMs generate plausible text. "Plausible" and "true" are different things. The model does not know what it does not know — it will invent confidently. This is a property of the architecture, not a bug to be fixed.
The model is its training data. What texts were included? Whose voices were overrepresented? Whose were excluded? The output of any gen AI system is a function of these choices.
System prompts, few-shot examples, chain-of-thought, role prompting. Each technique changes what the model "attends to." Prompts are arguments about what to prioritize.
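A few-shot prompt is ultimately just structured text. This sketch assembles one as a plain string; real chat APIs usually take a list of role-tagged messages instead, and the function and field names here are illustrative, not any particular API.

```python
def build_prompt(system, examples, query):
    """Assemble a few-shot prompt: system instruction, worked examples, then the new input."""
    parts = [system]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = build_prompt(
    system="Classify each headline's sentiment as positive or negative.",
    examples=[
        ("Local clinic expands free vaccine hours", "positive"),
        ("Transit cuts strand night-shift workers", "negative"),
    ],
    query="Community garden wins county grant",
)
print(prompt)
```

Every choice here is an argument: the system line asserts what the task is, and the examples assert what a correct answer looks like. Change the examples and you change what the model "attends to."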
Ground the model's outputs in real documents. Reduces hallucination for specific domains. Does not eliminate it. Understanding RAG's limits is as important as understanding its power.
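A minimal sketch of the retrieval-then-generate pattern. Production RAG systems rank passages with vector embeddings; word overlap stands in here so the example stays self-contained, and the documents are hypothetical.

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (real systems use embeddings)."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def grounded_prompt(query, documents):
    """Stuff the retrieved passage into the prompt so the model answers from it."""
    context = "\n".join(retrieve(query, documents))
    return ("Answer using only the context below. "
            "If the answer is not in the context, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "The food pantry is open Tuesdays and Thursdays from 9 to noon.",
    "The library offers free ESL classes every Saturday morning.",
]
print(grounded_prompt("When is the food pantry open?", docs))
```

Note the limits baked into this pattern: if retrieval returns the wrong passage, or no passage answers the question, the model can still invent. Grounding narrows hallucination; it does not abolish it.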
Image generation is not painting from imagination. It is denoising: the model reverses a diffusion process learned from labeled images. The labels encode human aesthetic choices and power structures.
Reinforcement learning from human feedback trains models to produce outputs that human raters prefer. Who are those raters? What did they find acceptable? This shapes every response.
Documentation for AI models: what they were trained on, what they're good at, where they fail, who should not use them. Reading model cards is a professional skill. Writing them is an ethical obligation.
The original "Generative AI for Everyone" — whose conceptual clarity and accessible framing we build on, critique, and extend for community college students.
On the dangers of large language models at scale. What they cannot do. What harms they cause at the margins. Core reading for Unit 2.
Datasheets for datasets. The ethics of large-scale AI training. Whose labor built these systems? Who was erased? Core reading, Weeks 6–8.
When automation appears neutral but encodes and amplifies racial hierarchy. Essential framing for the bias audit project in Unit 2.
The coded gaze applied to generative image models. Whose face is the default? Whose aesthetic is "beautiful"? Methodology for Track III image audits.
Equitable computing education. Agency over anxiety. The three-track structure and portfolio system are built on her pedagogical framework.
Information retrieval as a site of racial bias. The search engine as a mirror of power. Foundational for understanding recommendation and retrieval in gen AI.
The classroom as a site of freedom. Education as the practice of freedom. Every student brings knowledge worth centering. Referenced throughout the course.