CS 210 · Foothill / West Valley College · 18 Weeks · 4 Units

Data Structures
& Algorithms

Every data structure is an argument about the world — a claim that some operations matter more than others. Every algorithm is a strategy, a tradeoff, a value judgment. We study them from the root and build something real with each one.

12 Data Structures
as Projects
18 Weeks of
Deep Building
0 LeetCode
Grind Culture
3 Tracks:
Novice→Architect
01
Build First, Abstract Second

You implement a linked list before you use java.util.LinkedList. You build a binary search tree before you use TreeMap. Amy Ko's research: reading and understanding before writing creates deeper, more transferable learning. We earn the abstraction.

02
Each Structure Is a Story

A stack isn't just a data structure — it's how your browser's back button works, how undo/redo works, how compilers parse expressions. Every data structure gets a real story about why it was invented and what problem it solved. Jeff Anderson: context before content.

03
No LeetCode Grind Culture

The "grind 500 problems" approach to CS education is a hazing ritual, not pedagogy. Amy Ko's research shows it produces surface learning that doesn't transfer. We do deep, meaningful projects instead. The goal is understanding, not speed-running a hiring filter.


Pedagogical Foundation

How This Course Works

Every design choice in this course is grounded in research on how people actually learn to program — not how universities have traditionally tested programming. This means projects over exams, depth over breadth, and critical questions about why each data structure was invented.

🧱
Mini Project-Based Learning for Each Structure

Every major data structure culminates in a mini project where you build something real with it. A stack becomes a browser history system. A graph becomes a campus navigation app. You don't just implement — you deploy and reflect.

🗺️
Three-Track Choose-Your-Adventure System

Same core concept every week, different depth and project scope. Novice Explorer focuses on building and understanding. Builder adds implementation rigor. Architect engages with proofs and research. Tracks are starting points, not ceilings.

📖
Read Before Write (Amy Ko)

Every new data structure: you trace through existing implementations and predict behavior before writing a single line. Ko's research shows this "read before write" approach dramatically improves debugging ability and structural understanding.

Critical History of Computation

Who invented the hash table, and why? What social problem did the graph data structure first solve? Who gets credit for foundational algorithms — and who doesn't? Knowing the history makes the structures meaningful, not arbitrary.

📂
Portfolio + Ungrading (Jeff Anderson)

No traditional exams. A living portfolio of every implementation, project, and reflection. You assign your grade at the end, with evidence. The 2-minute question rule: before asking for help, write a precise question that demonstrates what you already understand.

🌱
Scaffolded Productive Struggle

Ko's Gidget research: framing difficulty as authentic practice (not failure) keeps learners engaged. Each project has three layers: a working solution (required), an efficient solution (encouraged), and an "edge cases and analysis" layer (for those ready for it).

The Core

Every Data Structure
Is a Mini Project

Twelve data structures. Twelve projects. Each one teaches you to ask: why was this invented? What problem does it solve better than the alternative? Who benefits when this runs in O(1) vs. O(n)? What does the tradeoff cost?
DS-01
Access: O(1)
Insert: O(n)
Space: O(n)
Arrays & Dynamic Arrays
The foundation. Contiguous memory. Random access. The price of insertion.

You've used arrays without thinking about what "contiguous memory" means. Here, we think about it. We build a dynamic array (like Python's list or Java's ArrayList) from scratch — understanding doubling strategy, amortized O(1) pushes, and why index access is fast but insertion is slow.

arr = [ 🍎 | 🍊 | 🍋 | 🍇 | __ | __ ]
idx =    0    1    2    3    4    5

arr[2]        → O(1): just compute base + 2×element_size
insert(1, 🫐) → O(n): shift everything right
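In Python terms, the doubling strategy looks roughly like this — a minimal sketch with illustrative names, not the project starter code:

```python
class DynamicArray:
    """Minimal dynamic array: a fixed-size backing list that doubles when full."""

    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity   # simulated raw array

    def push(self, item):
        if self._size == self._capacity:       # full: allocate double, copy over — O(n) now,
            self._capacity *= 2                # but rare enough to be amortized O(1)
            new_data = [None] * self._capacity
            for i in range(self._size):
                new_data[i] = self._data[i]
            self._data = new_data
        self._data[self._size] = item
        self._size += 1

    def __getitem__(self, i):                  # O(1): direct offset into the backing array
        if not 0 <= i < self._size:
            raise IndexError(i)
        return self._data[i]

    def __len__(self):
        return self._size
```

After ten pushes the capacity has doubled to 16 even though only 10 slots are used — that slack is exactly what buys the cheap pushes.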
Mini Project: Build a Spreadsheet

A simplified CSV processor — read a CSV file, store data in a 2D dynamic array, implement sort by any column, filter by value, compute column statistics. Why is a spreadsheet the right metaphor for an array? What are its limits?

Novice
Use Python lists, implement all features, write a reflection on what happens when you add a million rows.
Builder
Implement your own DynamicArray class from scratch with a raw array underneath. Match Python list's amortized behavior.
Architect
Prove that the doubling strategy achieves amortized O(1) push via the potential method. When does tripling beat doubling?
🔗 Real-world connection: NumPy arrays, database tables, image pixel matrices, stock price time series.
DS-02
Access: O(n)
Insert: O(1) head
Space: O(n)
Linked Lists
Nodes and pointers. Freedom from contiguity. The cost of navigation.

An array is a neighborhood where houses have addresses in order. A linked list is a treasure hunt — each house has a note pointing to the next one. Fast at the front, slow everywhere else. We build singly-linked, doubly-linked, and circular. We ask: when is this actually better than an array?

Head → [🐢|next] → [🦊|next] → [🦅|next] → null
          ↑ insert here: O(1)      ↑ but finding here: O(n)
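A singly-linked list makes the tradeoff concrete — a minimal sketch (class names are illustrative, not the course starter code):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None          # the "note pointing to the next house"

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, val):    # O(1): rewire one pointer, no shifting
        node = Node(val)
        node.next = self.head
        self.head = node

    def find(self, val):          # O(n): must walk node to node from the head
        current = self.head
        while current is not None:
            if current.val == val:
                return current
            current = current.next
        return None

    def to_list(self):            # collect values for inspection
        out, current = [], self.head
        while current is not None:
            out.append(current.val)
            current = current.next
        return out
```

Pushing 🦅, then 🦊, then 🐢 to the front yields the chain drawn above — each push is O(1) precisely because nothing else has to move.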
Mini Project: Music Playlist System

Build a music player's playlist engine using a doubly-linked list. Support: play, skip, go back, shuffle (Fisher-Yates), loop. Why is a doubly-linked list perfect for "previous track"? What does array-based comparison reveal about tradeoffs?

Novice
Build in Python with a Node class. Implement all playlist operations. Write about why you chose each pointer direction.
Builder
Implement both singly and doubly linked. Compare operation costs. Add an LRU cache using linked list + hashmap.
Architect
Prove the Floyd cycle detection algorithm. Implement memory-efficient XOR linked list. Analyze cache locality vs. arrays.
🔗 Real-world connection: Browser history, undo stacks, operating system process scheduling, blockchain.
DS-03
Push/Pop: O(1)
Peek: O(1)
Space: O(n)
Stacks & Queues
LIFO and FIFO. The philosophy of ordering. The hidden structure in everyday software.

You use stacks every time you hit Cmd-Z. Queues run every print job, task scheduler, and message broker. These abstract data types are everywhere — we implement them multiple ways, understand why the abstraction matters more than the implementation, and deploy them in something real.

Stack (LIFO):          Queue (FIFO):
push(🍎)               enqueue(🍎) → [ 🍎 ]
push(🍊)               enqueue(🍊) → [ 🍊 | 🍎 ]
pop() → 🍊             dequeue()  → 🍎
pop() → 🍎             (first in, first out)
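Both ADTs fit in a few lines each — a minimal sketch using a Python list and a `collections.deque` as the backing stores (one of several valid implementations):

```python
from collections import deque

class Stack:                        # LIFO: last pushed is first popped
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)    # O(1) at the end of the list
    def pop(self):
        return self._items.pop()

class Queue:                        # FIFO: first enqueued is first dequeued
    def __init__(self):
        self._items = deque()       # deque gives O(1) pops from the left;
    def enqueue(self, item):        # popping index 0 of a plain list is O(n)
        self._items.append(item)
    def dequeue(self):
        return self._items.popleft()
```

Same `append` on the way in — the entire philosophical difference is which end you remove from.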
Mini Project: Text Editor Undo/Redo + Task Queue

Part 1: Build a command pattern text editor with full undo/redo using two stacks. Part 2: Build a priority task queue for a simulated hospital triage system. Why does triage require a priority queue, not a regular queue?

Novice
Implement using Python lists. Build the text editor. Discuss: what real-world systems around you use queues?
Builder
Implement stack using linked list, queue using circular buffer. Implement a monotonic stack for the "next greater element" problem.
Architect
Prove that any queue can be implemented with two stacks in amortized O(1). Analyze the van Emde Boas priority queue.
🔗 Real-world connection: Call stacks, browser history, compiler expression parsing, OS scheduling.
DS-04
Access: O(1) avg
Insert: O(1) avg
Worst: O(n)
Hash Tables
The most magical structure in computing. O(1) lookup — if the hash function cooperates.

Hash tables are everywhere: Python dicts, Java HashMaps, database indexes, every set you've ever used. But how do they actually work? We build one from scratch, grapple with collisions (chaining vs. open addressing), understand load factors, and build something that makes O(1) real.

hash("cat")  = 7 → table[7] = "cat"
hash("dog")  = 3 → table[3] = "dog"
hash("fish") = 7 → COLLISION!
chain it: table[7] = ["cat", "fish"]
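Chaining can be sketched in about twenty lines — a minimal illustration that deliberately omits load-factor resizing (which the project adds):

```python
class HashTable:
    """Minimal hash table with separate chaining. No resizing: the chains
    simply grow, which is exactly what a rising load factor looks like."""

    def __init__(self, capacity=8):
        self._buckets = [[] for _ in range(capacity)]

    def _index(self, key):
        return hash(key) % len(self._buckets)   # map any key to a bucket slot

    def put(self, key, value):
        bucket = self._buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):     # overwrite if key already present
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))             # else chain onto this bucket

    def get(self, key):
        for k, v in self._buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)
```

With a tiny capacity, collisions are guaranteed — and lookups still work, they just degrade toward the O(n) worst case as chains lengthen.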
Mini Project: Word Frequency Analyzer

Build a text analysis tool that finds word frequency across a large text file (novel, song lyrics, congressional record) using your own hash table. Compare performance against a sorted array approach. Extend: detect plagiarism between two texts. Critical question: whose words get counted as "common"?

Novice
Build HashMap with chaining. Analyze frequency of your favorite artist's lyrics. What does the algorithm think is "important"?
Builder
Implement both chaining and open addressing. Compare empirically. Build a consistent hash ring for distributed caching.
Architect
Prove the expected O(1) lookup under uniform hashing. Analyze SHA-256 as a hash function. Study perfect hashing.
🔗 Real-world connection: Python dict, database indexes, blockchain (Merkle trees), password storage.
DS-05
Search: O(log n)
Insert: O(log n)
BST worst: O(n)
Binary Trees & BSTs
Hierarchy encoded in pointers. The fastest search structure that stays sorted.

A binary search tree keeps data sorted for free — every insertion maintains the invariant. We implement from scratch, study all three traversals (and know WHY each traversal order matters), understand the degenerate case, and marvel at how balance changes everything.

        50
       /  \
     30    70
    /  \  /  \
  20  40 60  80

inorder: 20,30,40,50,60,70,80 ← sorted!
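The invariant and the "sorted for free" claim both fall out of a short sketch — illustrative code, not the project starter:

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert while preserving the BST invariant: left < node < right."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root            # duplicates are silently ignored in this sketch

def inorder(root):
    """Left subtree, node, right subtree — the invariant makes this sorted."""
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)
```

Inserting the keys from the diagram in that order reproduces it exactly; inserting them already sorted produces the degenerate O(n) "tree" that motivates balancing.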
Mini Project: File System Navigator

Build a command-line file system explorer (like a mini tree command) using a BST to index all files. Support: search by name, sort by size, find all files of type X, print directory tree. How is a real file system different from a BST? What happens at scale?

Novice
Build BST with all three traversals. Build the file system explorer. Draw the tree on paper before coding it.
Builder
Implement BST, then discover its worst case. Implement AVL tree with rotations. Compare height in practice.
Architect
Prove AVL rotation maintains balance invariant. Study red-black trees. Implement a B-tree for disk-based access.
🔗 Real-world connection: File systems, database B-trees, HTML/XML DOM, decision trees in ML.
DS-06
Insert: O(log n)
Extract-min: O(log n)
Peek: O(1)
Heaps & Priority Queues
The best structure for "what's most important right now."

A heap answers one question faster than anything else: "what's the minimum (or maximum)?" Not arbitrary access — just the best element. We build a binary heap stored in an array (the beautiful trick!), implement heapsort, and build a priority queue that solves a real scheduling problem.

Max-heap in array:        arr = [100, 80, 60, 20, 70]
        100               parent(i) = ⌊(i-1)/2⌋
       /   \
     80     60            (stored flat!)
    /  \
  20    70
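The flat-array trick can be sketched directly — a minimal max-heap with sift-up and sift-down (illustrative; the Builder track fleshes this out with decrease-key):

```python
class MaxHeap:
    """Binary max-heap stored flat in a list: parent(i) = (i - 1) // 2."""

    def __init__(self):
        self._a = []

    def push(self, item):               # O(log n): append, then sift up
        self._a.append(item)
        i = len(self._a) - 1
        while i > 0:
            parent = (i - 1) // 2
            if self._a[i] <= self._a[parent]:
                break
            self._a[i], self._a[parent] = self._a[parent], self._a[i]
            i = parent

    def peek(self):                     # O(1): the max lives at index 0
        return self._a[0]

    def pop(self):                      # O(log n): move last to root, sift down
        top = self._a[0]
        last = self._a.pop()
        if self._a:
            self._a[0] = last
            i, n = 0, len(self._a)
            while True:
                left, right, largest = 2 * i + 1, 2 * i + 2, i
                if left < n and self._a[left] > self._a[largest]:
                    largest = left
                if right < n and self._a[right] > self._a[largest]:
                    largest = right
                if largest == i:
                    break
                self._a[i], self._a[largest] = self._a[largest], self._a[i]
                i = largest
        return top
```

Repeatedly popping yields the elements in descending order — which is heapsort in disguise.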
Mini Project: Emergency Room Triage Simulator

Simulate an ER triage system. Patients arrive with severity scores. Your priority queue ensures the most critical patient is always seen next. Handle re-prioritization (decrease-key). Analyze: how does this system encode values about whose emergency "counts more"? What if severity scores encode bias?

Novice
Use Python heapq. Build the ER simulator. Write a reflection on what it means to algorithmically prioritize human life.
Builder
Implement binary heap from scratch. Add decrease-key operation. Compare heapsort vs. quicksort empirically.
Architect
Prove heap construction is O(n). Analyze Fibonacci heaps. Study why Dijkstra needs decrease-key.
🔗 Real-world connection: OS process scheduling, Dijkstra's shortest path, merge k sorted lists, A* pathfinding.
DS-07 · DS-08
BFS/DFS: O(V+E)
Dijkstra: O((V+E)log V)
Space: O(V+E)
Graphs & Graph Algorithms
The most expressive data structure. Relationships made computable.

Graphs model everything: social networks, road maps, the internet, recommendation systems, supply chains, biological pathways. We study adjacency matrix vs. list, directed vs. undirected, weighted vs. unweighted. Then we implement BFS, DFS, Dijkstra, and topological sort — understanding what each one finds and why.

A ←─── B ──→ D
│      │     ↑
↓      ↓     │
C ──→  E ────┘

Adjacency list: A:[C], B:[A,D,E], C:[E]...
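BFS over an adjacency list is short enough to read in one sitting — a minimal sketch on the graph drawn above (dict-of-lists representation assumed):

```python
from collections import deque

def bfs(adj, start):
    """Breadth-first search over an adjacency list; returns visit order."""
    visited = [start]
    seen = {start}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()           # a queue makes this breadth-first;
        for neighbor in adj.get(node, []):  # pop from the right instead and
            if neighbor not in seen:        # it becomes depth-first search
                seen.add(neighbor)
                visited.append(neighbor)
                frontier.append(neighbor)
    return visited
```

Starting from B, the whole graph is reachable in one pass; starting from A, only A's side of the arrows is — directionality matters.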
Mini Project: Campus/City Navigation + Social Network Analyzer

Part 1: Build a campus (or city block) navigation system using Dijkstra's algorithm. Find shortest paths, identify bottlenecks. Part 2: Model a social network. Find connected components, detect communities, compute degree distribution. Critical question: what does a recommendation algorithm's graph encode about who connects to whom?

Novice
Build adjacency list, implement BFS and DFS. Map your campus. Ask: which buildings are most "central"?
Builder
Implement Dijkstra and Bellman-Ford. Implement topological sort. Compare runtime on dense vs. sparse graphs.
Architect
Prove Dijkstra correctness. Study network flow (Ford-Fulkerson). Analyze social network algorithms and their surveillance potential.
🔗 Real-world connection: GPS routing, social networks, dependency management, web crawling, fraud detection.
DS-09 · DS-10
Trie insert: O(m)
Union-Find: O(α(n))
Segment tree: O(log n)
Advanced Structures
Tries, Union-Find, Segment Trees — specialized weapons for specific battles.

These structures solve problems that arrays, trees, and hash tables can't touch efficiently. A Trie makes autocomplete instant. Union-Find detects cycles in O(α(n)) — effectively O(1). A segment tree handles range queries in O(log n). We study them when we need them, not as abstract exercises.

Trie for autocomplete:
root → [a] → [p] → [p] → "app"
       [a] → [p] → [p] → [l] → "apple"
       [a] → [r] → "art"

Type "ap" → immediately find all words
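The core trie operations fit in a short sketch — insert in O(m) for a word of length m, then walk to the prefix node and collect everything below it (names are illustrative, not the project starter code):

```python
class TrieNode:
    def __init__(self):
        self.children = {}       # char -> TrieNode
        self.is_word = False     # marks the end of a stored word

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):      # O(m): one node per character
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def starts_with(self, prefix):
        """All stored words beginning with prefix — the autocomplete query."""
        node = self.root
        for ch in prefix:        # walk down to the prefix node
            if ch not in node.children:
                return []
            node = node.children[ch]
        results = []
        def collect(n, path):    # then gather every word in that subtree
            if n.is_word:
                results.append(prefix + path)
            for ch, child in sorted(n.children.items()):
                collect(child, path + ch)
        collect(node, "")
        return results
```

Ranking suggestions by frequency — the project's next step — amounts to storing a count on each word node and sorting the collected results by it.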
Mini Project: Search Autocomplete Engine

Build a Trie-based autocomplete system trained on a large word list (English dictionary + a corpus meaningful to you — medical terms, cooking vocabulary, your language). Support prefix search, ranked suggestions by frequency, and deletion. Compare to a hash-based approach for prefix queries. Critical: whose vocabulary does the default word list center?

Novice
Build the Trie autocomplete. Train on a word list that includes your community's vocabulary. Document the gaps in standard word lists.
Builder
Add Union-Find for cycle detection. Build a segment tree for range sum queries. Analyze when each is appropriate.
Architect
Prove the inverse Ackermann bound on path compression + union by rank. Implement a persistent segment tree.
🔗 Real-world connection: Search engines, spell check, network cycle detection, range query databases.
ALGO-01 · Sorting
Comparison lower bound: Ω(n log n) · Counting sort: O(n+k) · Timsort used in Python
The Sorting Story
Six algorithms. One question: what does "better" mean?

We implement and deeply understand six sorting algorithms: Bubble, Selection, Insertion, Merge, Quick, and Counting/Radix. Each one is not just an algorithm — it's a strategy with a story: who invented it, what problem it solved, and what it costs on different input shapes.

Merge Sort:  O(n log n) always — pays in space
Quick Sort:  O(n log n) avg, O(n²) worst — pays in pivots
Counting:    O(n+k) — only when k is small
Python uses: Timsort (merge + insertion hybrid)
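Merge sort's divide-and-conquer shape — and the extra space it pays for — can be sketched directly (an illustrative out-of-place version, not the course reference implementation):

```python
def merge_sort(items):
    """O(n log n) always: split in half, sort each half, merge.
    The merged lists are the O(n) extra space merge sort pays for."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # merge two sorted halves in one linear pass
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= (not <) keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

The recurrence T(n) = 2T(n/2) + O(n) — two recursive calls plus the linear merge — is exactly what resolves to O(n log n) in Week 7.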

The information-theoretic lower bound is the jewel: every comparison sort requires Ω(n log n) comparisons in the worst case. This is a theorem about the universe, not about programming skill. Understanding why it's true is one of the most beautiful moments in this course.

Mini Project: Sorting Visualizer + Analysis

Build an animated sorting visualizer (web-based or terminal). Show each algorithm's steps. Then: given a real dataset (NBA stats, census data, your music library), benchmark all six. Document when each wins and loses. Present your findings as a one-page "algorithm recommendation guide" for a non-technical audience.

Novice
Implement bubble, selection, merge. Build the visualizer. Write a guide for when to use each — in plain language.
Builder
All six algorithms. Implement Timsort from scratch. Benchmark on 5 different input distributions. Document every anomaly.
Architect
Prove the Ω(n log n) comparison lower bound. Derive average-case analysis of quicksort. Study introsort.
🔗 Real-world connection: Database query optimization, Python's sort(), search engine indexing, financial sorting by risk.
ALGO-02 · Dynamic Programming
Technique: Optimal substructure + overlapping subproblems · Memoization or tabulation
Dynamic Programming
Trading memory for time. The most elegant algorithmic technique.

DP is not a data structure — it's a way of thinking. The key insight: if a problem has optimal substructure and overlapping subproblems, you can solve it once and cache the answer. We study DP through real problems: edit distance, longest common subsequence, 0/1 knapsack, coin change.

Fibonacci (naive): O(2ⁿ) — recomputes everything
Fibonacci (memo):  O(n) — compute once, cache
fib(n) = fib(n-1) + fib(n-2)                     ← optimal substructure
fib(3) computed 21 times in naive fib(10)        ← overlapping subproblems

Every DP problem follows the same template: define your subproblem, find the recurrence, add memoization, convert to bottom-up tabulation. Once you see the pattern, it never leaves you.
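Both halves of the template fit in a dozen lines for Fibonacci — a minimal sketch of memoization (top-down) and tabulation (bottom-up), using `functools.lru_cache` as the cache:

```python
from functools import lru_cache

# Top-down memoization: each subproblem is solved once, then cached.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up tabulation: fill the table from the base cases upward.
# Keeping only the last two entries optimizes space from O(n) to O(1).
def fib_tab(n):
    if n <= 2:
        return 1
    prev, curr = 1, 1
    for _ in range(n - 2):
        prev, curr = curr, prev + curr
    return curr
```

Naive recursion chokes around n = 40; either version answers fib(50) instantly — the same memory-for-time trade that powers edit distance and knapsack.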

Mini Project: Spell Checker + DNA Sequence Aligner

Part 1: Build a spell checker using edit distance (Levenshtein) to suggest corrections. Part 2: Implement the Needleman-Wunsch algorithm for DNA sequence alignment. Both are DP. Critical question: whose language norms do the spell checker's "correct" spellings encode? That matters for correctness in medical, legal, and multilingual contexts.

Novice
Implement Fibonacci memo and 2-3 classic DP problems. Build the spell checker. Document the cultural assumptions in the dictionary.
Builder
Implement 5+ DP problems. Compare memoization vs. tabulation. Build the DNA aligner. Optimize space to O(n).
Architect
Prove optimal substructure for your problems. Study approximation algorithms where DP is too slow. Analyze DP on trees.
🔗 Real-world connection: Bioinformatics, spell check, speech recognition, portfolio optimization, game AI.

18-Week Calendar

Root to Branch,
Week by Week

We build from the ground up — arrays and memory first, then structure, then abstraction. Every unit culminates in a mini project exhibition. The final four weeks belong to your capstone: a substantial system built with your choice of structures.

Wk
Structure / Algorithm
Core Concept + Analysis
Mini Project Milestone
Critical Lens
Unit I — Memory, Arrays, and Linear Structures
01 · Unit I
Memory + Big-O
How computers store data. RAM model. Big-O, Omega, Theta from scratch. Why asymptotic analysis matters.
Foundational
Analyzing Your First Functions
Trace through code and count operations. Identify loops, nested loops, recursion. Build intuition before formalism.
Lab: Measure, Don't Guess
Time 5 functions on inputs of size n, 2n, 4n. Plot the curve. Does it match your prediction? Where were you wrong?
Project
Whose Time Counts?
Big-O hides constants. A 1000x slower algorithm is still "O(n)". On whose hardware is performance optimized? Discuss: mobile vs. server, Global South vs. Silicon Valley.
02 · Unit I
Arrays + Dynamic Arrays
Contiguous memory, O(1) access, O(n) insertion. Build dynamic array from scratch with doubling strategy.
Foundational
Amortized Analysis
Why doubling gives amortized O(1) push. The potential method sketch. Novice: run the experiment. Architect: prove it formally.
Project: Spreadsheet Engine
Build 2D dynamic array CSV processor. Sort, filter, compute statistics. Demo to class at end of week.
Project
What Gets Measured?
A spreadsheet processes whatever data you put in. Whose data exists in structured form? Whose lives are hard to measure, and what disappears in the unmeasured?
03 · Unit I
Linked Lists
Singly, doubly, circular. Node pointers. Trade random access for cheap insertion. Build from scratch.
Traversal + Mutation
Iterative and recursive traversal. Reverse in place. Detect cycles (Floyd). Amy Ko: trace before implement.
Project: Music Playlist Engine
Doubly-linked playlist with play/skip/back/shuffle. Demo to class. What does this reveal about doubly-linked lists?
Project
Spotify's Actual Architecture
What does Spotify actually use? Discuss the gap between "data structures class" and production systems. What's hidden in the abstraction?
04 · Unit I
Stacks & Queues
LIFO vs. FIFO as philosophies. Implement both via linked list and array. Understand the ADT vs. implementation distinction.
The Call Stack + Expression Parsing
Why recursion uses a stack. Shunting-yard algorithm: evaluate math expressions using two stacks. This is how your calculator works.
Project: Text Editor Undo/Redo
Command pattern with two stacks. Full undo/redo history. Add a deque-based sliding window. Mini-exhibition.
Project
Priority + Power
Queues encode fairness assumptions: first-come, first-served. But not all queues are equal. Who designed the queue, and for whose workflow?
Unit II — Hash Tables and Search
05 · Unit II
Hash Tables
Hash functions, collision resolution (chaining + open addressing), load factors. Build from scratch. Hit O(1) lookup.
Foundational
Why Hashing Works
Birthday paradox. Expected collisions. Load factor threshold. Python dict internals. The magic is math, not mystery.
Project: Word Frequency Analyzer
Analyze a large text with your hash table. Compare to sorted array. Detect near-duplicates. Mini-exhibition.
Project
What Gets Indexed?
Search engines are hash tables at scale. What content gets indexed, and what's invisible? Safiya Noble on the politics of information retrieval.
06 · Unit II
Binary Search + Sorting I
Binary search on sorted arrays. Invariant-based reasoning. Then: bubble, selection, insertion sort — build and analyze each.
Lower Bounds + When to Stop
O(n) vs O(n log n) vs O(n²) — visualize on actual timing curves. Why O(n²) is unusable at scale. The sorting hierarchy.
Foundational
Sorting Visualizer — Part 1
Build animated visualizer for bubble, selection, insertion. Benchmark on three input shapes. Begin building intuition.
Project
Sorting Human Lives
Credit scores sort humans by financial risk. What data goes in? What assumptions are baked in? Discuss: what should never be "sorted" algorithmically?
07 · Unit II
Merge Sort + Quick Sort
Divide and conquer. Merge: O(n log n) always. Quick: O(n log n) average, O(n²) worst. The pivot problem.
Foundational
Recurrence Relations
T(n) = 2T(n/2) + O(n). Master theorem. Why merge sort's recurrence gives O(n log n). Novice: plug in. Architect: derive.
Sorting Visualizer — Part 2
Add merge and quicksort. Complete full comparison. Final "sorting recommendation guide" for a non-technical audience.
Project
The Information-Theoretic Bound
No comparison sort can beat the Ω(n log n) worst-case bound. This is a theorem about the universe. Discuss: what does a mathematical impossibility tell us about the limits of computation?
08 · Mid-I
Mini Exhibition I
Present your best project from Units I-II. Peer feedback. Self-evaluation. What surprised you? What would you rebuild?
Exhibition
Portfolio Checkpoint
Learning conference: review portfolio progress. Identify 2 concepts to deepen. Set goals for second half. Adjust track if needed.
Capstone Project Pitch
Pitch your final capstone idea. Get feedback from peers and instructor. Begin scoping the data structure choices.
Project
Reflection: Who Builds This?
Survey the CS field's demographics. Whose problems get solved first? Who is in this room? What would change if the field looked different?
Unit III —