Machine Learning Specialization · Reimagined
Every
algorithm
is a choice.

Andrew Ng's three-course ML specialization rebuilt for community college students. We derive every algorithm before we use it, implement from scratch before touching a library, and ask: optimized for what — and for whom?

3 Learning Tracks
18 Weeks
6 Major Projects
0 Exams
Gradient Descent · Loss Over Iterations
θ ← θ − α∇J(θ)
We derive this before we run it.
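The update rule above fits in a few lines of NumPy. A minimal sketch, minimizing a toy loss J(θ) = θ² rather than a real model's loss (the loss, learning rate, and iteration count are illustrative choices):

```python
import numpy as np

def J(theta):
    return theta ** 2        # toy loss: J(θ) = θ²

def grad_J(theta):
    return 2 * theta         # its gradient: ∇J(θ) = 2θ

theta = 5.0                  # initial guess
alpha = 0.1                  # learning rate α
for _ in range(100):
    theta = theta - alpha * grad_J(theta)   # θ ← θ − α∇J(θ)
```

Each step moves θ opposite the gradient; with this α the loss shrinks geometrically toward the optimum at θ = 0.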

This vs. Andrew Ng's Original

Deeper foundations.
Community stakes.

Andrew Ng's Machine Learning Specialization on Coursera is the gold standard introduction — rigorous, clear, and accessible. We use it as a launchpad, then push further: every algorithm is derived, every dataset is examined for bias, and every project is connected to a real community problem.

Andrew Ng's Version
Concepts → Notebooks
  • Video lectures followed by guided Jupyter notebooks
  • All datasets provided — no data collection or critique
  • Single track at fixed depth
  • Assessment: graded auto-checked assignments
  • Limited discussion of fairness or algorithmic harm
  • No community application requirement
This Course
Problem → Derive → Build → Audit
  • Every algorithm introduced through a real problem first
  • Derive the math before running the code
  • Three tracks from conceptual to from-scratch implementation
  • Assessment: six portfolio projects on real community data
  • Bias auditing is a required project — not optional extra credit
  • Capstone must serve a real, named community need

Three Learning Tracks

Same math.
Three depths.

Depth is not fixed. You choose your track at the start of each unit. All tracks share the same readings, case studies, and community discussions. The divergence is in how far into the implementation you go.

Track I · ML Consumer
Prereq: Basic stats concepts
Focus

Understand what ML algorithms do, interpret model outputs, and identify failure modes. Use trained models confidently. Spot bias, evaluate benchmarks skeptically, and read ML papers without being overwhelmed by notation. Prepare for roles that use ML without building it.

Projects
  • P1 · Interpret rental price predictions from a trained regression model
  • P2 · Audit a hiring classifier for disparate impact across groups
  • P3 · Write a plain-language report explaining an ML system to a community board
Track II · ML Practitioner
Prereq: Python · Intro statistics
Focus

Build end-to-end ML pipelines using scikit-learn, pandas, and matplotlib. Clean and analyze real datasets. Train, validate, and tune models. Understand the bias-variance tradeoff in practice. Deploy simple models as APIs. Prepare for data analyst or ML engineer roles.

Projects
  • P1 · Predict rent prices using real census data from your city
  • P2 · Build and compare classifiers on a real hiring dataset
  • P3 · Cluster neighborhood demographic data — interpret ethically
  • P4 · Deploy a model; document its failure modes and edge cases
Track III · ML Engineer
Prereq: Python · Calculus · Linear Algebra
Focus

Implement every algorithm from mathematical first principles using only NumPy. Derive gradient descent, backprop, the kernel trick, and EM from scratch before using any library. Understand every line of every model you deploy. Prepare for ML research and engineering roles.

Projects
  • P1 · Implement linear regression with gradient descent from scratch; verify against sklearn
  • P2 · Build logistic regression + SVM from scratch; derive the kernel trick
  • P3 · Implement k-means and Gaussian Mixture Models; prove EM convergence
  • P4 · Build a neural network from scratch with backpropagation in NumPy
  • P5 · Full bias audit with statistical tests on a community dataset

18-Week Curriculum Arc

Six units.
One coherent argument.

Each unit starts with a community problem. Technical concepts arrive as tools to solve it — not as the point. The capstone unifies all six units into a single project that must serve a real, named need.

Weeks 1–3
Regression

Problem: Can we predict rent prices for next year? Derive linear regression. Implement gradient descent. Understand why "fit the data" is never neutral — what counts as normal in housing data?

Project → Predict rent prices in your zip code
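The unit's core mechanics can be sketched on synthetic data. Everything below is invented for illustration (a single made-up feature, coefficients, and noise level, not real census or rent data); the point is batch gradient descent on mean squared error, checked against NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for rent data: price ≈ 2.5·feature + 10 + noise
X = rng.uniform(0, 10, size=(200, 1))
y = 2.5 * X[:, 0] + 10 + rng.normal(0, 0.5, size=200)

Xb = np.c_[np.ones(len(X)), X]           # prepend an intercept column
theta = np.zeros(2)
alpha = 0.01
for _ in range(5000):
    # gradient of MSE: ∇J(θ) = Xᵀ(Xθ − y) / n
    grad = Xb.T @ (Xb @ theta - y) / len(y)
    theta -= alpha * grad

# Sanity check against the closed-form least-squares solution
theta_exact = np.linalg.lstsq(Xb, y, rcond=None)[0]
```

If the derivation and implementation are right, the iterative and closed-form answers agree to several decimal places.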
Weeks 4–6
Classification

Problem: Should this job application proceed? Derive logistic regression and decision boundaries. Understand why false positive rates differ across groups — and why that matters more than overall accuracy.

Project → Build + audit a hiring classifier
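The per-group false positive rate this unit highlights is a few lines of code. The labels, predictions, and group assignments below are hypothetical, chosen so both groups see identical overall accuracy while one group's FPR is double the other's:

```python
import numpy as np

# Hypothetical audit data: true labels, model predictions, group membership
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "A",
                   "B", "B", "B", "B", "B", "B"])

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()   # FP / (FP + TN)

# Both groups have accuracy 4/6, yet FPR is 0.5 for A vs 0.25 for B:
for g in ["A", "B"]:
    m = group == g
    print(g, (y_pred[m] == y_true[m]).mean(), false_positive_rate(y_true[m], y_pred[m]))
```

Overall accuracy hides exactly the disparity the audit is meant to surface.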
Weeks 7–9
Neural Networks

Problem: Can a computer recognize handwritten rent checks? Derive perceptrons and multi-layer networks. Implement backpropagation from scratch. What features does the network learn, and why?

Project → Build an image classifier from scratch
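A from-scratch backprop loop of the kind this unit builds might look like the sketch below: one tanh hidden layer and a sigmoid output, trained on XOR, the classic dataset no single-layer perceptron can fit. The architecture, layer sizes, and learning rate are illustrative choices, not the course's assignment:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    y_hat = 1 / (1 + np.exp(-(h @ W2 + b2)))
    # backward pass (binary cross-entropy; output-layer delta is ŷ − y)
    d_out = (y_hat - y) / len(X)
    dW2 = h.T @ d_out;  db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)    # tanh'(z) = 1 − tanh²(z)
    dW1 = X.T @ d_h;    db1 = d_h.sum(0)
    # gradient descent on every parameter
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

After training, thresholding `y_hat` at 0.5 recovers XOR, something no linear model can do.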
Weeks 10–12
Unsupervised Learning

Problem: What patterns exist in neighborhood demographic data? Derive k-means and PCA. When we cluster people by data, what assumptions are we encoding? Whose similarities and differences are we measuring?

Project → Cluster + ethically interpret community data
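A minimal k-means sketch in plain NumPy, run on synthetic blobs standing in for real demographic features (all numbers below are invented; the ethical interpretation work the unit demands starts after code like this runs):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated synthetic blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

The algorithm always returns k clusters, whether or not k clusters meaningfully exist in the data; that assumption is one of the things the unit asks you to interrogate.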
Weeks 13–15
Bias Audit

Required for all tracks. Audit a real deployed algorithm — in lending, hiring, healthcare, or criminal justice. Quantify disparate impact using real data. Document findings. Propose a remedy. Present to a community audience.

Project → Full algorithmic bias audit report
Weeks 16–18
Capstone

Build an ML system that addresses a real problem named by your community. Not a toy dataset. Not a pre-cleaned benchmark. Real data, real stakes, real constraints. Documentation must include a harm analysis.

Project → Community ML application + harm report

Core Concepts · All Tracks

What you'll
understand deeply.

01
Gradient Descent

The universal optimization engine. Derive it, implement it, understand when it fails. The math behind "learning."

02
Bias-Variance Tradeoff

Underfitting vs. overfitting. Why a model that memorizes training data fails on real people.

03
Loss Functions

What you optimize is what you get. Cross-entropy, MSE, and why the choice is a value judgment.
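The value judgment is visible numerically: for a true positive predicted with shrinking confidence, cross-entropy punishes confident wrongness far more steeply than MSE. A small sketch (the probabilities are arbitrary illustrations):

```python
import numpy as np

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def cross_entropy(y, y_hat):
    eps = 1e-12  # guard against log(0)
    return -np.mean(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))

y = np.array([1.0])
for p in [0.9, 0.5, 0.01]:   # increasingly wrong predictions for a true positive
    print(p, mse(y, np.array([p])), cross_entropy(y, np.array([p])))
```

At p = 0.01, MSE stays below 1 while cross-entropy exceeds 4: the two losses encode different opinions about how bad a confident mistake is.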

04
Regularization

L1, L2, dropout. Constraining complexity to prevent memorization. The math of generalization.
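A sketch of L2 (ridge) regularization via its closed form, on synthetic data (the design matrix, true weights, and λ are illustrative): the penalty shrinks the weight vector relative to ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
w_true = np.array([3.0, 0.0, 0.0, 0.0, 0.0])   # only one feature matters
y = X @ w_true + rng.normal(0, 1, 30)

def ridge(X, y, lam):
    # Closed form: w = (XᵀX + λI)⁻¹ Xᵀy   (λ = 0 recovers ordinary least squares)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = ridge(X, y, lam=0.0)
w_l2  = ridge(X, y, lam=10.0)
```

Raising λ trades a little training fit for smaller weights, which is exactly the constraint-on-complexity idea behind generalization.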

05
Confusion Matrix

True positives, false positives, precision, recall. The four cells that determine who gets hurt.
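The four cells reduce to a handful of rates you will compute constantly. Hypothetical counts for a hiring classifier, invented for illustration:

```python
# Hypothetical confusion-matrix counts for a hiring classifier
tp, fp, fn, tn = 30, 10, 20, 40

precision = tp / (tp + fp)   # of those advanced, how many were qualified?
recall    = tp / (tp + fn)   # of the qualified, how many were advanced?
fpr       = fp / (fp + tn)   # how often the unqualified are advanced

print(precision, recall, fpr)
```

Which of these rates you choose to optimize determines who bears the cost of the model's mistakes.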

06
Disparate Impact

When a "neutral" algorithm produces outcomes that differ systematically across protected groups. This is a technical concept, not only a political one.
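One common operationalization, used here as an illustrative choice rather than the only definition, is the four-fifths rule from the EEOC's Uniform Guidelines: flag the selection process when the ratio of group selection rates falls below 0.8. Hypothetical counts:

```python
# Hypothetical selection counts from a "neutral" screening algorithm
selected = {"group_a": 60, "group_b": 30}
applied  = {"group_a": 100, "group_b": 100}

rates = {g: selected[g] / applied[g] for g in selected}
impact_ratio = min(rates.values()) / max(rates.values())

# Four-fifths rule: ratios below 0.8 are evidence of disparate impact
flagged = impact_ratio < 0.8
```

Here the ratio is 0.5, well under the 0.8 threshold, even though the algorithm applied identical rules to everyone.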

07
Feature Engineering

Choosing what to measure is choosing what to value. The most impactful part of ML — and the most overlooked.

08
Cross-Validation

Test on what you didn't train on. How we actually measure generalization in practice.
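A minimal k-fold sketch in plain NumPy. The `fit` and `score` callables below are toy stand-ins (the "model" is just the training-set mean), not a library API:

```python
import numpy as np

def k_fold_scores(X, y, fit, score, k=5, seed=0):
    """Hold out each fold in turn; train on the rest; score on the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        scores.append(score(model, X[test], y[test]))
    return scores

# Toy usage: predict the training mean; score with negative MSE
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1)); y = rng.normal(size=100)
scores = k_fold_scores(X, y,
                       fit=lambda X, y: y.mean(),
                       score=lambda m, X, y: -np.mean((y - m) ** 2))
```

Every point is scored exactly once, by a model that never saw it during training; averaging the fold scores estimates generalization.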

J(θ) = ???
"Every ML algorithm is an answer to the question: what are we optimizing for? That question is never answered by the math. It is answered by the humans who choose the loss function."
Course Philosophy · Ko, Benjamin, Ng, Freire
📐
Derive Before You Compute
You will implement gradient descent from scratch before you call optimizer.fit(). Understanding precedes convenience.
🔍
Audit Before You Deploy
Every model you build must include a fairness analysis. Not as ethics theater — as engineering practice.
🌍
Community Data, Not Toy Data
You will work with real datasets from real communities. The messiness is the point.
📂
Portfolio Over Exams
Six projects you actually built. No auto-graded MCQs. Your grade is your GitHub.

Intellectual Lineage

Who we're reading.

Andrew Ng
DeepLearning.AI

The original ML Specialization. We build directly on its conceptual clarity and mathematical rigor, and deliberately depart from it.

Amy J. Ko
UW CS Education

Equitable computing education. Read before implement. Student agency over assessment anxiety.

Ruha Benjamin
Race After Technology

The "New Jim Code." When automation appears neutral but encodes and amplifies racial hierarchy.

Solon Barocas
Fairness in ML

Formal mathematical definitions of fairness — and why no single definition is universally correct.

Ziad Obermeyer
UC Berkeley

Hidden racial bias in a widely used healthcare algorithm. Case study in Weeks 13–15.

Ian Goodfellow
Deep Learning Text

The foundational textbook — free online. Our Track III reference for mathematical depth.

Cathy O'Neil
Weapons of Math Destruction

How models targeting poor communities amplify inequality. Core reading for the bias audit unit.

Jeff Anderson
Ungrading · Deep Learning

The 2-minute question rule. Ungrading. Deep learning over shallow performance. Foundational to this course's assessment philosophy.