Andrew Ng's five-course deep learning specialization rebuilt for community college learners. Six modules, three entry depths. You will not use a neural network until you have built one by hand.
Andrew Ng's Deep Learning Specialization, five Coursera courses in all, is the most thorough public DL education available. We condense it to one community-college-length course, add three entry tracks, and make implementation from scratch mandatory, not optional.
All tracks attend every class, read every case study, and participate in every community discussion. The tracks differ in how far into the implementation you go — and you can move between them as your confidence builds.
Understand how deep learning systems work at an architectural and conceptual level. Interpret model outputs, identify failure modes, read about DL in the news without being lost. Work with pre-trained models. Prepare for roles that use or oversee DL systems without building them.
Build real deep learning applications using PyTorch. Train, fine-tune, and deploy CNNs, sequence models, and transformers. Understand the key architectural decisions. By the end, you'll have built and deployed a working DL application solving a community problem you chose.
Implement every architecture from scratch in NumPy: forward pass, backpropagation, Adam, batch normalization. Before you use PyTorch's autograd, you'll have written it. Before you call a Conv2d layer, you'll understand every multiplication inside it. Prepare for DL research.
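The "write autograd before you use it" requirement can be sketched in miniature. Below is a hypothetical, micrograd-style scalar `Value` class (all names are illustrative, not course code): each node records its parents and the local derivatives of the operation that made it, and `backward()` walks the graph in reverse topological order, applying the chain rule.

```python
# A toy scalar autograd node, in the spirit of Track III's requirement.
# Illustrative sketch only: class and method names are ours, not the course's.

class Value:
    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # Values this one was computed from
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, g in zip(v._parents, v._local_grads):
                p.grad += g * v.grad

a, b = Value(2.0), Value(3.0)
c = a * b + a                # dc/da = b + 1 = 4, dc/db = a = 2
c.backward()
print(a.grad, b.grad)        # prints 4.0 2.0
```

The topological sort matters: a naive depth-first pass can propagate a node's gradient before all of its downstream contributions have accumulated.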
Each module builds on the last. You will not see attention until you deeply understand the MLP. You will not use PyTorch until you have implemented the core operations by hand. Depth before breadth.
What is a neuron, really? The perceptron, sigmoid activation, and the universal approximation theorem. Implement a single neuron that learns to classify. Derive backpropagation from the chain rule — not from a textbook formula, but from first principles.
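A minimal sketch of the single-neuron exercise, with the gradient written out by the chain rule rather than taken from a library. The data, learning rate, and iteration count here are illustrative, not the course's assignment:

```python
import numpy as np

# One sigmoid neuron trained by gradient descent on a toy separable problem.
# The gradient comes straight from the chain rule:
#   z = w.x + b,  a = sigmoid(z),  L = (a - y)^2
#   dL/dz = 2(a - y) * a(1 - a)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    a = sigmoid(X @ w + b)
    grad_z = 2 * (a - y) * a * (1 - a)      # chain rule through loss and sigmoid
    w -= lr * (X.T @ grad_z) / len(X)
    b -= lr * grad_z.mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Deriving `grad_z` by hand, term by term, is the point of the module; the code is just the derivation transcribed.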
Stack the layers. Understand why depth matters — and when it doesn't. Implement batch normalization and dropout from scratch. Investigate: what does each layer actually learn? Visualize activations and challenge the "black box" myth.
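Written from their definitions, the batch normalization and dropout exercises reduce to a few lines each. A training-mode-only sketch with illustrative shapes:

```python
import numpy as np

# Training-mode batch normalization and inverted dropout, from definitions.
# Sketch only: no learned running statistics, no backward pass.

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                  # per-feature mean over the batch
    var = x.var(axis=0)                  # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta          # learnable scale and shift

def dropout_forward(x, p_drop, rng):
    # Inverted dropout: scale at train time so test time needs no change.
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(256, 4))
out = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))
```

With `gamma=1, beta=0`, each feature of the output has mean 0 and standard deviation 1 over the batch, which is the whole trick.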
Why does a filter slide? Implement convolution as a mathematical operation. Build ResNet-style skip connections. Ask: when we train an image classifier on biased data, what exactly does the convolutional filter learn? This isn't abstract — it's the coded gaze.
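The sliding filter can be written as a plain double loop. A sketch of the "valid" operation (technically cross-correlation, which is what DL frameworks compute under the name convolution), with a vertical-edge kernel as the worked example:

```python
import numpy as np

def conv2d(image, kernel):
    # Valid cross-correlation: slide the kernel over the image and take
    # a dot product at each position. No padding, stride 1.
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

# A vertical-edge detector responds where intensity changes left to right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                       # left half dark, right half bright
edge_kernel = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])
response = conv2d(image, edge_kernel)
print(response)                          # strong response at the edge columns
```

A learned filter is exactly this, with the nine numbers chosen by gradient descent instead of by us, which is why the question of what a filter learns from biased data is concrete rather than abstract.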
Language has order. Implement RNNs and LSTMs. Understand why they struggle with long-range dependencies. When we train a language model on community text, what patterns does it amplify? Sequence models encode the statistics of whoever wrote the training data.
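A vanilla RNN cell is small enough to sketch straight from its update equation, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b). The sizes and random weights below are illustrative; the point is that the final hidden state depends on token order:

```python
import numpy as np

# A vanilla RNN: the same weights applied at every time step,
# with a hidden state h carrying information forward.

def rnn_forward(xs, W_xh, W_hh, b):
    h = np.zeros(W_hh.shape[0])
    for x in xs:                          # one step per token, in order
        h = np.tanh(W_xh @ x + W_hh @ h + b)
    return h

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.5, size=(8, 4))
W_hh = rng.normal(scale=0.5, size=(8, 8))
b = np.zeros(8)
seq = [rng.normal(size=4) for _ in range(5)]

h_fwd = rnn_forward(seq, W_xh, W_hh, b)
h_rev = rnn_forward(seq[::-1], W_xh, W_hh, b)
print(np.allclose(h_fwd, h_rev))          # reversing the sequence changes h
```

The repeated multiplication by `W_hh` is also where the long-range dependency problem lives: gradients through many steps shrink or explode with the powers of that matrix.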
Attention is all you need — but what does attention mean? Implement single-head self-attention from scratch. Understand the architecture of GPT and BERT. Probe a small LLM to understand what it has and has not learned about your community.
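Single-head self-attention follows directly from the formula softmax(Q K^T / sqrt(d_k)) V. A sketch with illustrative sizes and random weights:

```python
import numpy as np

# Single-head self-attention from its definition. Sketch only:
# no masking, no multi-head split, no output projection.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # token-to-token affinities
    weights = softmax(scores, axis=-1)   # each row is a distribution
    return weights @ V, weights          # output is a weighted mix of values

rng = np.random.default_rng(0)
T, d_model, d_k = 4, 8, 8
X = rng.normal(size=(T, d_model))
W_q, W_k, W_v = (rng.normal(scale=0.3, size=(d_model, d_k)) for _ in range(3))

out, weights = self_attention(X, W_q, W_k, W_v)
print(out.shape, weights.sum(axis=-1))   # attention rows sum to 1
```

So "attention" is, concretely, each token building its output as a learned weighted average of every token's value vector; GPT and BERT stack many of these heads and layers.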
Two paths. Build: design and deploy a deep learning system addressing a real community need you identified — with full documentation and a harm analysis. Audit: conduct a deep technical and ethical audit of an existing deployed DL system, including a reproducibility study.
The five-course specialization whose conceptual depth and visual clarity we build on and condense. Module 5 directly references Ng's attention mechanism explanations.
The best from-scratch implementation tutorials in existence. Our Track III backpropagation project is inspired by Andrej Karpathy's micrograd series.
The coded gaze. Module 3's bias analysis project is built around Joy Buolamwini's Gender Shades methodology applied to locally relevant datasets.
Datasheets for datasets. Stochastic parrots. Critical analysis of scale in language models. Core reading for Module 5.
Goodfellow, Bengio, and Courville's canonical Deep Learning textbook (free online). Mathematical depth for Track III. The source for RNN and attention derivations.
Read before write. Student agency. The pedagogical framework behind the three-track structure and portfolio assessment system.
On the dangers of stochastic parrots. What language models do and do not understand. Essential for Module 5's critical framing.
Portfolio over exams. Deep learning over performance. The two-minute rule. Assessment philosophy that runs throughout.