Why you exist. What you stand for. Click to define.
40,000 ft
3–5 Year Vision
The future you're building toward. Click to define.
30,000 ft
1–2 Year Goals
Specific outcomes for the next 12–24 months. Click to define.
20,000 ft
Areas of Focus & Responsibility
Ongoing roles and standards you maintain. Click to define.
10,000 ft
Current Projects
Multi-step outcomes with a finish line. Click to manage.
Runway
Next Actions
Your inbox items and immediate tasks. 0 items waiting.
Go to Inbox →
5-Year Academic Plan
Conquering College Lab 4
Map out your courses, milestones, and transfer timeline. From Jeff Anderson: "Make this plan visually appealing, easy-to-edit, and flexible."
Career Capital Inventory
So Good They Can't Ignore You
Cal Newport: Rare & valuable skills = career capital. Track the skills that make you irreplaceable.
Energy & Engagement Log
Designing Your Life
Burnett & Evans: Track which activities give you energy vs drain you. Pattern reveals your ideal work.
Grit Pulse Check
Angela Duckworth
Passion × Perseverance. "Enthusiasm is common. Endurance is rare." Are you still on the path?
Passion Alignment — Does this still light you up?
Perseverance — Are you doing the hard thing daily?
— Grit Score
Recommended Reading
Growth Library
Open Questions
Things to Figure Out
Jeff Anderson: "As you create your five-year plan, you will likely have questions. Capture them."
Odyssey Plans — 3 Possible Lives
Designing Your Life
Burnett & Evans: Don't agonize over one path. Sketch 3 genuinely different 5-year lives. Rate each on resources, confidence, likability, and coherence.
Identity Statements
Atomic Habits × Grit
James Clear: "Every action is a vote for the type of person you wish to become." Write identity statements — not goals, but who you ARE.
Weekly Review
Sunday evening · 30 min · GTD + PARA Review combined.
Review Checklist
One Big Thing
Reflection
Completion Trend
Weekly review completion %
Archive
Week
Exercise
CS Hrs
Papers
Notes
YT Hrs
OBT
Build your archive.
👋 You're viewing Henry Fan's weekly schedule · v4.1
Shared read-only view · updated recently · Deliver excellence at work · Build mastery on the margins · Earn the PhD
⏳ The 18-Month Arc
The weekly mentor note will load in a moment. If you see this for more than a couple of seconds, reload and tell me.
This week, specifically · one or two sentences
COMPOUND v4.1
Deep work on the 18-month arc · CS Ed PhD prep · Remote CVC analyst
Deliver excellence at work. Build mastery on the margins. Earn the PhD.
Exercise · CS Theory · CVC Work · CS Build · PhD Prep · YouTube · Family · Reflect
7h Exercise
8h CS Theory
6.5h CS Build
6h PhD Prep
4h Grad Prep (Sun)
3.5h YouTube
40h CVC Work
1h Property Mgmt
6h 20m Sleep/night (realistic)
v3.1 → v4.0 CHANGES
7 Evidence-Based Upgrades · Net: +45 min sleep/night · +5.25h consolidation/wk · stronger transfer · PhD ↔ CS convergence
5-phase capstone projects: Each CLRS phase produces a real tool — Sorting Visualizer → Mini Search Engine → Research Network Mapper → Document Similarity Detector → Portfolio. Phases 3–4 directly serve your PhD prep. (Chi, 2009: elaboration through application creates richer, more transferable memory traces)
Evening Project Work 9–11 PM, Wind-down 11 PM, Sleep 11:30: Realistic pattern — 6h 20m of sleep is below the 7–8h target, but it matches what actually happens. The 9–11 PM window is execution time, not novel theory. Tune upward when you're ready to sleep earlier. (Walker, 2017: each 90-min cycle matters — get earlier bedtime back when you can)
Within-session interleaving: Every CLRS session now touches 2+ chapters. Last 10 min of each session = exercises from a different chapter. Builds discriminative contrast, not just familiarity. (Rohrer, 2015: d=0.35 over blocked practice)
Incubation protocol: Evening sessions end with an unsolved question. Sleep processes it. Morning retrieval revisits. (Wagner et al., 2004: 2.6x more likely to find insight after sleep)
Daily PhD micro-writing: 10-min writes in weekday PhD slots. Sunday polishes. Compounds to 50+ min/wk of additional writing. (Boice, 1990: daily writers published 9x more than binge writers)
PhD papers → Anki: 3 cards per paper. Without spaced retrieval, you'll forget 80% of what you read within 2 weeks. Same tool, same habit, applied to both tracks.
Sunday active recovery: Walk or yoga instead of gym. Parasympathetic dominance supports weekly nervous system recovery. (Kellmann, 2010)
WEEKLY GRID
Time
Mon
Tue
Wed
Thu
Fri
Sat
Sun
6 AM
🏋️ Exercise · 5:50 – 6:50 · BDNF ↑ CORTISOL ↓
🏋️ Exercise · 5:50 – 6:50
🏋️ Exercise · 5:50 – 6:50
🏋️ Exercise · 5:50 – 6:50
🏋️ Exercise · 5:50 – 6:50
🏋️ Exercise · 5:50 – 6:50
🚶 Active Recovery · 5:50 – 6:50 · Walk, yoga, or mobility · PARASYMPATHETIC ↑
✍️ Deep Write · polish · 9:00 – 11:00 PM · Polish this week's micro-writes. No new material — shape what's there. · EXECUTION · LOW COG
🔧 Build · long-form · 9:00 – 11:00 PM · Highest-output evening of the week. Capstone project push. · SECOND-WIND WINDOW
📖 Retrieval · light work · 9:00 – 11:00 PM · 9:00–9:30 hardest problem, no notes · then Sunday commitments prep for Jeff · INCUBATION ★
10 PM
11 PM
🌙 Wind Down · No Screens · 11:00 – 11:30 · Read fiction · journal · dim lights · incubation question set before sleep · MELATONIN ONSET
Sleep
😴 Sleep · 11:30 PM → 5:50 AM = 6h 20m · Below ideal (7–8h). Realistic pattern — tune when you're ready to sleep earlier. SWS replays today's learning; REM integrates. · MEMORY CONSOLIDATION · SHORT
GRAD ALGO CAPSTONE PROJECTS
PHASE 1 Weeks 1–8 · Foundations
🏎️ Sorting Benchmark Visualizer
A Python tool (or web app) that visualizes and benchmarks every sorting algorithm side-by-side. Input an array size, watch them race, see O(n²) vs O(n log n) in real time.
Each Friday: add that week's sort to the benchmark
Week 6: timing comparison chart with n=100, 1K, 10K, 100K
Week 8: polished README + live demo link or animated GIF
→ When you implement merge sort inside a racing visualizer, you encode it with richer context than a standalone function. That's transfer.
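As a sketch of what the Phase 1 core might look like — two of the six sorts plus a toy timing harness. The function names and harness shape are illustrative, not from the plan:

```python
# Illustrative Phase 1 core: race two sorts and compare wall-clock timings.
import random
import time

def insertion_sort(a):
    a = list(a)                      # O(n^2) baseline
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    if len(a) <= 1:                  # O(n log n) contender
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def benchmark(sorts, n):
    """Time each sort on the same random input of size n."""
    data = [random.random() for _ in range(n)]
    results = {}
    for name, fn in sorts.items():
        t0 = time.perf_counter()
        fn(data)
        results[name] = time.perf_counter() - t0
    return results

if __name__ == "__main__":
    for n in (100, 1_000, 10_000):
        print(n, benchmark({"insertion": insertion_sort, "merge": merge_sort}, n))
```

Swap in heap, quick, counting, and radix as the weeks add them; the Week 6 timing chart and any visual layer sit on top of the same harness.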
PHASE 2 Weeks 9–16 · Data Structures
🔍 Mini Search Engine
A search engine built entirely from first-principles data structures — no libraries. Hash table for the inverted index, BST for ranked retrieval, DP for fuzzy matching (edit distance). Feed it your paper reading database or your own notes.
README: "Zero dependencies. Every data structure built from scratch."
→ DP isn't abstract when you need it to find "Dijkstra" when you typed "Dijstra." The application makes the algorithm stick.
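The fuzzy-matching piece is standard dynamic programming. A minimal sketch of edit distance with a CLRS-style bottom-up table (function name illustrative):

```python
# Levenshtein edit distance, bottom-up DP. dp[i][j] = minimum edits
# to turn s[:i] into t[:j].
def edit_distance(s, t):
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete
                           dp[i][j - 1] + 1,          # insert
                           dp[i - 1][j - 1] + cost)   # match/substitute
    return dp[m][n]

# The typo from the plan: "Dijstra" is one deletion away from "Dijkstra".
assert edit_distance("Dijstra", "Dijkstra") == 1
```

In the search engine, any index term within a small distance (say ≤ 2) of the query term becomes a "did you mean" candidate.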
PHASE 3 Weeks 17–22 · Graphs
🕸️ Research Network Mapper
Maps the citation network of your PhD paper database. Papers are nodes, citations are edges. BFS finds connection paths, Dijkstra finds "shortest intellectual distance" between topics, MSTs show the core skeleton of your field.
Week 18: BFS/DFS traversal of your paper citation graph
Week 21: Dijkstra pathfinding between any two papers
Week 22: MST visualization of your research field's core structure
Bonus: this tool IS your literature review infrastructure for PhD apps
→ Builds your PhD prep track and CS skills simultaneously. Every algorithm you learn directly improves a tool you actually use.
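A minimal sketch of the BFS piece, assuming the paper database is loaded as an adjacency dict; the paper names below are hypothetical placeholders:

```python
# BFS shortest connection path between two papers in a citation graph.
from collections import deque

def shortest_path(graph, start, goal):
    """graph: dict mapping paper -> list of connected papers."""
    if start == goal:
        return [start]
    parent = {start: None}
    q = deque([start])
    while q:
        node = q.popleft()
        for nbr in graph.get(node, []):
            if nbr not in parent:
                parent[nbr] = node
                if nbr == goal:          # reconstruct path via parents
                    path = [goal]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                q.append(nbr)
    return None                          # no connection found

citations = {
    "Porter 2013": ["Simon 2010"],
    "Simon 2010": ["Porter 2013", "Guzdial 2015"],
    "Guzdial 2015": ["Simon 2010"],
}
print(shortest_path(citations, "Porter 2013", "Guzdial 2015"))
```

Week 21's Dijkstra version is the same skeleton with a priority queue and edge weights ("intellectual distance") in place of the FIFO queue.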
PHASE 4 Weeks 23–28 · Advanced
📄 Document Similarity Detector
A tool that fingerprints and compares your paper syntheses. Rabin-Karp for text fingerprinting, KMP for exact pattern matching, flow-based model for section matching. Finds thematic clusters across your research database.
README: architecture diagram showing which CLRS algorithm powers each component
→ String matching stops being abstract when you're using it to find patterns in your own 50+ paper database.
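One plausible shape for the fingerprinting core: Rabin-Karp rolling hashes over k-grams, compared with Jaccard overlap. The constants and the similarity measure are illustrative choices, not fixed by the plan:

```python
# Rabin-Karp fingerprinting: hash every length-k substring with a rolling
# hash, then compare documents by fingerprint overlap.
BASE, MOD = 256, 1_000_000_007   # illustrative constants

def fingerprints(text, k):
    """Set of rolling hashes of all length-k substrings of text."""
    if len(text) < k:
        return set()
    h = 0
    high = pow(BASE, k - 1, MOD)          # weight of the outgoing char
    for ch in text[:k]:
        h = (h * BASE + ord(ch)) % MOD
    out = {h}
    for i in range(k, len(text)):
        # slide the window: drop text[i-k], append text[i]
        h = ((h - ord(text[i - k]) * high) * BASE + ord(text[i])) % MOD
        out.add(h)
    return out

def similarity(a, b, k=8):
    """Jaccard overlap of k-gram fingerprints, in [0, 1]."""
    fa, fb = fingerprints(a, k), fingerprints(b, k)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)
```

Running `similarity` pairwise over the synthesis corpus gives the matrix that the thematic clustering step consumes.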
PHASE 5 Weeks 29–30 · Consolidation
📦 Portfolio Integration
Unify all four projects into a single GitHub portfolio with a master README. Each project documents which algorithms it uses, why you chose them, what you learned, and performance benchmarks. This is the artifact you cite in PhD applications.
Deliverables:
Master README linking all 4 projects with architecture diagrams
2-page written reflection: what you know, what to study next
Blog post: "What I Learned Working Through CLRS in 30 Weeks"
Portfolio site update: fansofhenry.github.io
→ The portfolio IS the PhD application evidence. "I worked through CLRS and built 4 real systems using the algorithms."
DESIGN PRINCIPLES
PROJECT Elaboration Through Application
Implementing merge sort standalone encodes it one way. Implementing it as the sorting layer inside your search engine encodes it with rich contextual connections retrievable in more situations. Projects create elaborated memory traces that transfer to novel problems.
→ Anki deck + Friday interleaved review + Sunday retrieval test
INTERLEAVE Within-Session Mixed Practice v4
v3 interleaved across days. v4 interleaves within each session: the last 10 min always covers a different chapter than the first 45. This builds discriminative contrast — "which algorithm applies here?" — not just isolated familiarity.
→ 5:15–6:45 = single unbroken session. Saturday = 4 × 90 min
SLEEP Consolidation + Separation v4
v3 had 7h 20m with screens until 10:15. v4 gains 30 min and separates screens from sleep by 45+ min. Each lost cycle costs ~20% of that period's consolidation. 30 min/night = 3.5h/week of learning protected.
→ 6h 20m sleep (realistic). Project work 9–11 PM. Wind-down 11–11:30. Sleep at 11:30 PM. Tune upward when ready.
INCUBATE Sleep On It v4
Struggling with a problem before sleep triggers unconscious restructuring during SWS. Participants who slept were 2.6× more likely to discover a hidden shortcut. End evening sessions with an open question — the brain continues working overnight.
→ Evening blocks end with an unsolved question. Morning retrieval revisits it.
LOAD Cognitive Matching
Morning = peak cognition → hardest material (CLRS theory, PhD writing). Post-work = depleted → building at the edge + micro-writing. Evening = lowest → creative YouTube well before sleep.
Sweller, 1988 · Chronotype research · Newport
→ PhD writing → Sunday AM. YouTube → 8:30 PM (no longer competes with sleep).
GENERATION Predict First
Generating answers before seeing them = stronger memory. Before each chapter: predict the algorithm. Before each paper: predict the finding. Before each project session: predict the bug you'll hit.
→ Prediction step before all reading and build sessions
WRITE Daily Micro-Writing v4
Boice tracked 27 new faculty for 2 years. Those who wrote in brief daily sessions published 9× more than binge writers. v4 adds 10-min PhD writes to weekday evenings. Sunday AM polishes the fragments into prose.
Boice, 1990 · Silvia, 2007 ("How to Write a Lot") · Sword, 2012
→ 10-min micro-writes Mon–Thu. Sunday AM assembles and polishes.
RITUAL Transitions
Context-switching costs ~23 min recovery. Entry/exit rituals clear attention residue: 2 min to confirm action, 2 min to log, set tomorrow's task, and note one incubation question.
Mark, UCI · Leroy, 2009 · Gollwitzer, 1999
→ Entry + exit ritual per block. Shutdown ritual at 6:45 PM. Exit now includes incubation question.
Min 0–5: Retrieval warm-up. Close book, write what you remember. Revisit last night's incubation question. (Testing effect + incubation)
Min 5–7: Predict. "I think this algorithm does ___." + "It connects to the project because ___." (Generation + elaboration)
Min 7–45: Core work — follow the day's method (survey, read, trace, or exercises).
Min 45–55: Interleaved switch — exercises or Anki from a DIFFERENT chapter. (v4, within-session interleaving)
Min 55–60: Exit — "Today I learned ___. For the project, I can use this to ___." + 3 Anki cards, all due reviewed. (Kolb reflection + elaboration)
Why within-session interleaving (v4): Rohrer et al. showed interleaving during learning — not just during review — builds discriminative contrast. "Which algorithm fits this problem?" requires comparing, not just recognizing. The last 10 min of each session now always comes from a different chapter.
Project integration: The exit reflection now includes a project connection. This is elaboration — linking each algorithm to a concrete use case in your current capstone creates richer, more transferable memory traces.
Weekly Cadence (Project-Integrated)
Mon AM: Pass 1 survey + predict → ask: "How might this fit into the current project?" → 10′ exercises from the chapter from 2 weeks ago (v4)
Tue AM: Pass 2 active read + self-explain. Note which project component this enables. → 10′ exercises from last week's chapter (v4)
Wed AM: Hand-trace pseudocode. Sketch how the algorithm's I/O maps to project data. → 10′ Anki, all due
Thu AM: Pass 3 exercises. Use project data as test input where possible. → 10′ random past-chapter exercise (v4)
Fri AM: Full interleave — 3 chapters mixed, no topic labels (v4, forces discrimination)
M/W/F 5:15 PM: Implement algorithm → integrate into current phase project.
T/Th 5:15 PM: Ben Eater breadboard / Nand2Tetris build.
Sat AM: MIT 6.006 + extended Anki + deep project build session.
Sun PM: 30 min — hardest exercise, book closed → sleep immediately after (incubation)
This Week
Install Anki. Create deck "CLRS Algorithms." Add first card today.
Create repo: github.com/fansofhenry/clrs-python with /projects/ subfolder for capstones.
Get CLRS 4th ed. Accessible at 7 AM daily.
Scaffold Phase 1 project: Create sorting-benchmark/ folder with empty files for each sort.
Print interleave template: When doing Friday problems, shuffle chapter labels — write just "Problem 1, 2, 3…" without topic headers. (v4)
Week 8: Sorting Benchmark Visualizer complete. Week 16: Mini Search Engine. Week 22: Research Network Mapper. Week 28: Document Similarity Detector. Week 30: Portfolio unified on fansofhenry.github.io.
Entry (2 min): Read yesterday's exit note. Check incubation question — did sleep yield insight? (v4) Timer on, phone gone.
Build (50 min): Work at the edge of ability. If easy, increase difficulty. (Deliberate practice)
PhD micro-write (10 min): Free-write connecting today's build to a research question. (v4, daily writing)
Exit (2 min): Git commit. "Tomorrow I need to ___." + one unsolved question for tonight's incubation. (v4)
Why alternate code (M/W/F) and breadboard (T/Th): Interleaving abstract + concrete representations encodes concepts through two modalities → stronger transfer. (Dual coding + interleaving)
Why end with an open question (v4): Wagner et al. (2004) found that participants who struggled with a problem and then slept were 2.6× more likely to discover a hidden shortcut than those who stayed awake the same duration. Your brain doesn't stop working at shutdown — it starts a different kind of processing.
Phase 1 — Months 1–3
Ben Eater: Clock → registers → ALU → RAM. Photo-document "expected vs. actual."
Nand2Tetris I: Projects 1–6. Explain each solution aloud. (Self-explanation)
CLRS Python: Translate literally → test → timeit → refactor.
Month 6: Working 8-bit computer. Nand2Tetris I complete. 16 chapters implemented. All on GitHub.
🎓 PhD Prep — Research Identity Through Daily Writing~5.5 hrs/wk · 18 months
Paper Reading Protocol P1
Before: Read title + abstract only. "I predict the finding is ___." (Generation)
After (10 min): Close paper. Synthesize from memory. Then check. (Retrieval)
Anki (5 min): Create 3 cards: research question, method, key finding. (v4, spaced retrieval for papers)
Target: 2 papers/wk → 50 in 6 months. 150+ paper Anki cards at 90% retention.
Why PhD Anki cards (v4): Without spaced retrieval, you'll lose 80% of paper content within 2 weeks. At Month 6, you should be able to recall the RQ, method, and finding of any paper you've read — not just vaguely remember it. Same Anki app, same daily habit, second deck.
Fri: Program research + retrieval: recall 3 paper findings from memory, then check (testing effect for papers)
Sat: 90-min deep paper session + elaborative interrogation
Sun AM: 70-min deep write — assemble the week's micro-writes into polished prose (v4, peak cognition)
Why daily micro-writing beats Sunday binges: Boice (1990) tracked 27 new faculty over 2 years. Daily writers produced 9× more pages and reported less anxiety about writing. Your Sunday session now polishes fragments instead of generating from scratch — dramatically lower activation energy.
PhD ↔ Project synergy: Phase 3 (Research Network Mapper) directly uses your paper database. Phase 4 (Similarity Detector) finds thematic clusters in your syntheses. Your CS projects and PhD prep converge — the tools you build serve the research you're doing.
This Week
Create Anki deck: "PhD Papers" — separate from CLRS deck. v4
Create spreadsheet: Title | Authors | Venue | RQ | Method | Finding | Critique | My RQ Connection
Write one paragraph: "My research question is ___."
Read one paper each from Leo Porter, Amy Ko, Mark Guzdial.
Draft cold email template.
Month 18: 50+ syntheses · 150+ paper Anki cards · Research statement · 3–5 faculty relationships · 1 paper drafted · SoP per program.
Wind-down: 11:00 – 11:30 PM. Screens off · read fiction · dim lights · incubation question.
Honest tradeoff: 6h 20m is below the 7–8h research-optimal range (Walker, 2017). The realistic schedule trades 1.5 hours of sleep for 2 hours of evening project work. That's a real cost — each 90-min sleep cycle lost costs ~20% of that cycle's consolidation. Worth it if you're shipping artifacts; not worth it if you're just grinding. Reevaluate monthly. When you're ready to sleep at 10:30 PM, pull wind-down back to 10 PM and reclaim 1h of sleep.
The incubation bonus: The wind-down isn't wasted time. Open questions from the evening session are actively processed during stage 2 and SWS sleep. Wagner et al. (2004) found that insight problems showed a 2.6× solution rate after sleep vs. equivalent waking time. Your wind-down is the on-ramp to unconscious problem solving.
🧪 The Lab
Where mastery compounds.
Not a productivity tool — a research lab for a PhD applicant on an 18-month arc. Each block is a bet on the long game. Each card is an artifact. Each session compounds.
📵 DND · 💬 Slack off · 💧 water · 🪟 one tab · 🎧 headphones
📝 What did you accomplish? Where are you stuck?
Last 7
Define the ONE thing you'll do next block. Keep it specific — "implement heap sort's sift-down helper" beats "work on sorting visualizer." ⌘/Ctrl+Enter to save · Esc to cancel
💡 Seed ideas — click to use as next action
All Tracks · the long view
Sunday review · Monday mentor call · compound across the 18-month arc
This block used to say "read CLRS cover to cover." That plan was honest but wrong — CLRS is a reference, not a first text, and grinding it at 60 min/day burned you out before you ever got to dynamic programming. The new plan is a 16-week grad algorithms on-ramp built around Tim Roughgarden's four-book Algorithms Illuminated series (free PDFs, free YouTube lectures, designed as a course). One book per four weeks. Four visible wins before fall. CLRS stays on the shelf as the reference you open when Roughgarden is too terse — not the primary text. The goal isn't "I read a book." The goal is walking into a grad algorithms midterm in August and not drowning.
When you're stuck
Roughgarden's proof moves too fast? Open the matching YouTube lecture (algorithmsilluminated.org). Watch once at 1x. Close the video. Write the proof from memory.
Truly confused on a topic? Open CLRS at the matching chapter as the reference text — it's more thorough, slower, and denser. Roughgarden is the first read; CLRS is the backup.
Stuck on a recurrence? Close the book. Walk to the kitchen. Try it on paper, not in your head.
No focus at all? Add one Anki card from yesterday. That's enough to keep the chain. Close the laptop.
⚡ Today's Grad Algo Block · 7:00–8:00 AM
Current book: Roughgarden Algorithms Illuminated Part 1. Pick up at the section you stopped at yesterday.
5-min retrieval warm-up → 20-min real attempt at one problem (before reading) → 30-min Roughgarden read + matching lecture → 5-min Anki + 2-sentence approach write-up for tomorrow-you.
📋 60-Minute Session Protocol Daily
0–5 min — Retrieval warm-up (book closed): write yesterday's main result and one line of pseudocode from memory
5–25 min — Real-attempt problem: one end-of-section problem from the current Roughgarden chapter, BEFORE reading. Struggle is the point.
25–55 min — Read + lecture: read the Roughgarden section that covers the problem you just attempted, then watch the matching lecture if you got it wrong
55–60 min — Anki + approach write-up: add 1–3 cards, write a 2-sentence "what I'll try tomorrow" note for tomorrow-you
Why problem-first: Attempting before reading is productive failure (Kapur 2008, Schwartz & Bransford 1998). The struggle activates schema slots; the reading then fills them. Same total time; much more learning per minute. This is why grad students who do psets before lecture outperform those who read first.
📚 16-Week Grad Algo On-Ramp · Roughgarden's 4 Books
Why this structure: Four books, four weeks each, four visible wins by early August. Each book is ~180 pages — finishable. Each has a free companion YouTube lecture series. Each closes with a problem set you can use as a self-test. CLRS sits beside you as the reference when Roughgarden is too terse, not as the primary text you're grinding through.
The on-ramp. Get Big-O, recurrence-solving, and divide-and-conquer into your muscle memory before anything else. End of week 4: take Book 1's end-of-part test under timed conditions. Must pass before advancing.
Where algorithms stop being about sorting and start being about relationships. Implement Dijkstra from scratch by end of week 6 — if you can't, back up one section.
The make-or-break book. DP is the single biggest filter between passing and failing a grad algorithms class. Go slow. Open CLRS Ch 15 as cross-reference on every DP section. Budget extra days — if you ship Book 3 a week late, let Book 4 shrink; do not shrink Book 3.
The vocabulary of PhD-level algorithms: "this problem is NP-hard, here's a 2-approximation, here's when local search beats it." End of week 16: walk through one reduction on a whiteboard without notes.
+
Buffer + Self-Test · Aug 3 – 16
Grad midterm simulation
Take a past midterm from Stanford CS161, CMU 15-451, or MIT 6.046 under timed 90-min conditions. Score it honestly. Identify the 2-3 topics that fell apart — those become your August focus before grad coursework begins.
🎥 Companion Video Resources
MIT OpenCourseWare
🎓
MIT 6.006 Intro to Algorithms (Spring 2020)
Erik Demaine + Jason Ku · Free · Maps directly onto CLRS chapters
Schedule: 5-8 new cards/day · cap reviews at 30/day weekdays · catch up Saturday
Use FSRS scheduler (Anki 23.10+) with retention target 0.9
🏆 Grad-Level Self-Test Prompts
"Explain merge sort to a 12-year-old using a deck of cards." · "Why is a hash table usually faster than a sorted array for lookup?" · "What does NP-complete mean to someone who's never seen a computer?"
Weekly whiteboard prompts (30 min, no notes): Derive merge sort recurrence with recursion tree · Prove BFS distance invariant · Implement red-black insertion with all rotation cases · Reduce 3-SAT to Independent Set · Prove Dijkstra correctness via cut argument
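The first prompt, as a sketch in LaTeX (assuming $n$ is a power of 2 and $c$ bounds the per-element split/merge cost):

```latex
% Recursion-tree derivation of the merge sort recurrence.
T(n) = 2\,T(n/2) + cn, \qquad T(1) = c.
% Level i of the tree has 2^i subproblems of size n/2^i, each costing
% c \cdot n/2^i, so every level costs 2^i \cdot c\,n/2^i = cn.
% There are \log_2 n + 1 levels, hence
T(n) = \sum_{i=0}^{\log_2 n} cn = cn\,(\log_2 n + 1) = \Theta(n \log n).
```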
The build block is where you become dangerous. Reading CLRS teaches you the vocabulary; building from scratch teaches you the grammar. The 5-phase capstone is designed so that by the time you apply to PhD programs, your GitHub tells a story — not "I learned these topics" but "I built the tools my future lab needs." Every edge-of-ability hour you log here is a sentence in that story. When you want to stop, stop. When you want to ship broken, ship broken — and write the one-line exit note so tomorrow-you knows where to pick up.
When you're stuck
Won't compile at 6:40pm? Commit the broken state with a message "WIP stuck at X" and close. You will solve it in the shower tomorrow.
Don't know what to build? Open yesterday's exit-note and do the "first action" line. Don't improvise at night.
20 min stuck is learning; 50 min stuck is punishment. Move the ladder down — simpler subtask — then climb back up tomorrow.
Can't start at all? Run yesterday's code, read the output, commit one cosmetic fix. You'll be back in the loop before you notice.
⚡ Today's Build Block · 5:15–6:45 PM · 90 min
Mon/Wed/Fri = current capstone phase · Tue/Thu = Ben Eater 8-bit hardware · Sat 8:15–9:45 = Project Build 1 (long form)
2-min entry: read yesterday's exit note → 50-min build at edge of ability → 10-min PhD micro-write → 2-min exit: git commit + tomorrow's first action + new incubation question.
📋 Daily 50-Minute Build Protocol M-F
5:15 (2m) — Read yesterday's exit-note.md + check incubation question: did sleep yield insight?
5:17 (50m) — BUILD at the edge of ability. No tutorials on autopilot.
6:07 (10m) — PhD micro-write: 1 paragraph in phd-journal/YYYY-MM-DD.md connecting today's build to a research question
6:17 (2m) — git commit -m "..." · write tomorrow's first action + new incubation question to exit-note.md
6:19 — Walk away. Incubation does the rest.
The edge-of-ability rule: If you can't describe why you're stuck in one sentence, you're too far over. If you haven't been stuck today, you're too far under. Move the ladder.
🎯 Primary Narrative — 5-Phase Grad Algo Capstone Schedule v4.2
This is the capstone arc the Schedule promotes. Each phase produces a real tool you can show in a PhD application — mapped onto the 16-week Roughgarden on-ramp in the Grad Algo Prep pane. Code-project days (Mon/Wed/Fri + Sat) feed directly into whichever phase you're currently on.
1
🏎️ Sorting Benchmark Visualizer
Weeks 1–8 · Python or web · Ch 2, 6, 7, 8
Visualize + benchmark every sorting algo side-by-side. Insertion · Merge · Heap · Quick · Counting · Radix. Watch O(n²) vs O(n log n) race in real time. Week 8 deliverable: polished README + live demo.
2
🔍 Mini Search Engine
Weeks 9–16 · Zero dependencies · Ch 11, 12, 15
Hash table (chaining + open addressing) for the inverted index, BST for ranked retrieval, DP (edit distance) for fuzzy matching. Corpus: your PhD paper notes. Week 16: "Every data structure from scratch."
3
🕸️ Research Network Mapper
Weeks 17–22 · Graphs
Citation graph of your paper database. BFS/DFS for traversal, Dijkstra for "shortest intellectual distance," MST for the field's skeleton. This tool IS your literature review infrastructure — cite it in your SoP.
4
📄 Document Similarity Detector
Weeks 23–28 · Ch 32, 34, 35
Rabin-Karp fingerprinting + KMP for exact matching + Floyd-Warshall for thematic clusters. Run it on your 50+ paper synthesis corpus. Week 28: similarity matrix + cluster visualization.
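A sketch of the exact-matching component — classic KMP, whose failure (prefix) function lets the scan reuse already-matched characters instead of restarting (function names illustrative):

```python
# KMP exact string matching.
def failure(pat):
    """f[i] = length of the longest proper prefix of pat[:i+1]
    that is also a suffix of it."""
    f = [0] * len(pat)
    k = 0
    for i in range(1, len(pat)):
        while k > 0 and pat[i] != pat[k]:
            k = f[k - 1]                 # fall back along the prefix chain
        if pat[i] == pat[k]:
            k += 1
        f[i] = k
    return f

def kmp_find(text, pat):
    """Return all start indices where pat occurs in text. O(n + m)."""
    if not pat:
        return []
    f, k, hits = failure(pat), 0, []
    for i, ch in enumerate(text):
        while k > 0 and ch != pat[k]:
            k = f[k - 1]
        if ch == pat[k]:
            k += 1
        if k == len(pat):                # full match ending at i
            hits.append(i - k + 1)
            k = f[k - 1]                 # keep scanning for overlaps
    return hits
```

In the detector, `kmp_find` locates verbatim phrase reuse across syntheses; the Rabin-Karp fingerprints handle the approximate, whole-document comparison.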
5
📦 Portfolio Integration
Weeks 29–30 · Consolidation
Unify all 4 projects into one GitHub portfolio with a master README. 2-page reflection. Blog post: "What I Learned in 16 Weeks of Grad Algorithms Prep." Updated on fansofhenry.github.io. This is the artifact PhD committees read.
Why phases not tiers: The tier ladder below is useful reference, but the 5-phase capstone is what the Schedule commits to. Every code-project day should push the current phase forward. When you finish a phase, move on — don't stall on the tiers.
🔨 Tier 1 — Foundation Hardware: Ben Eater 8-Bit
555 Clock — 4-6h · oscilloscope trace of square wave
Registers — 6-8h · A, B, IR with 74LS173 · bus tri-state
ALU — 8-10h · 2x 74LS283 adders + XOR for subtract
#1 debugging tip: Most bugs are miswired clock-enable lines. Always clock by hand first. Use a logic probe. 0.1 µF decoupling cap on every chip. Wire colors matter: red = VCC, black = GND, yellow = data, blue = control.
💻 Tier 1 — Foundation Software: Nand2Tetris
P1 Boolean Logic — 4h · NAND→MUX
P2 Arithmetic — 6h · half/full adder, ALU
P3 Sequential — 6h · DFF, RAM8→RAM16K
P4 Machine Language — 4h · Hack assembly
P5 Computer Architecture — 8h · CPU+Memory wiring
P6 Assembler (Python) — 10h · symbol table two-pass · own repo
🎓 Tier 4 — PhD-Aligned Projects (THE DIFFERENTIATORS)
These are what admissions committees notice. Each one doubles as research infrastructure.
Research Network Mapper — Pulls Zotero + Semantic Scholar API → BFS, PageRank, Louvain clusters, MST. Becomes your literature review engine. Goes in your SoP.
Document Similarity Detector — Rabin-Karp + MinHash/LSH on your paper notes · KMP for phrase reuse
Curriculum Generator (SIGCSE-ready) — LLM + retrieval over textbooks → generates CS1 syllabi · evaluate with CC instructors. Publishable as SIGCSE/ITiCSE tool paper.
Equity Dashboard — Scrape California Community College Chancellor's MIS data → CS enrollment by race/gender/first-gen. Streamlit/Observable.
Help-Seeking Bot — Chatbot trained on (consented) CC student questions · log hesitation, rephrasings. This IS a dissertation pilot.
For each: start a repo TODAY with a research/ folder containing rq.md, related-work.md, protocol.md. Makes your PhD story visible in commit history.
README sections: Problem · Background · Approach · Complexity · Benchmarks · Limitations · References · How to run
Polish moves: GitHub Actions CI badge · pytest coverage · live demo on GH Pages · CITATION.cff · tagged releases
This is the block that makes the other blocks matter. Papers and outreach are how a PhD hopeful stops being an applicant and becomes the candidate Dr. Porter has already emailed once. Thirty minutes a day, a pen in your hand, the same paper you predicted the abstract of — by month nine, the goal is that faculty recognize your name in the inbox. Most of your competition will not do this. You just have to. And running in parallel: the NSF GRFP personal statement (submit Oct 15, 2026) — card below. An awarded GRFP lands in every admit committee you care about before your SoP does.
When you're stuck
Can't read a paper? Read only abstract + figures + conclusion. That still counts. Log it.
Blank on the faculty email? Open the cold-email template in the reference drawer below, fill the two brackets, save as draft, send in the morning. Never draft from zero at night.
No time today? Add one Anki card from yesterday's paper. Chain preserved.
Too tired to think? Update one row in the faculty tracker. Admin work counts as a kept promise.
⚡ Weekday PhD · inside Build → PhD block (5:15–6:45 PM, final 10 min micro-write) · Lunch Deep 12:00–1:30
Today's rotation: see weekday focus below
Predict from title (2m) → read paper (20m) → synthesize from memory (5m) → 3 Anki cards (3m). Target: 2 papers/week → 50 in 6 months.
📋 Weekday 30-Min PhD Prep Rotation
Mon — Read #1: Predict abstract from title (2m) → read (20m) → synthesize in own words (5m) → 3 Anki cards (3m). Use Zotero + a "CSEd-papers" note template.
Tue — Micro-write + Read #2: 10-min reflection on Monday's paper (what it changes about your teaching) → skim Paper #2 (20m).
Wed — Synthesis: Close the laptop for 10 min. Rewrite both papers from memory. Then outline a 500-word piece connecting them.
Thu — Outreach: Draft ONE cold email or faculty reply. Update your faculty tracker spreadsheet.
Fri — Program research: Deep-dive ONE program per week. Fill in: deadline, faculty, materials, funding, fit notes.
🎓 NSF GRFP · Fellowship Application Track
Three years of funding (~$37k/yr stipend + tuition) earned on a personal statement, a research plan, and three letters. GRFP runs in parallel with the December PhD apps — and if it hits, it's the single largest multiplier on your admit odds at every program in the longlist. Draft 1 of the personal statement already exists (April 12, 2026). The job now is keeping it moving without crowding out the weekday prep rotation above.
⚡ Submit deadline · Wed Oct 15, 2026
Calculating days to submit…
Research direction: Adaptive Learning Systems for Equitable CS Education. Anchored in Learning Code, ProjectBridge, the 16-week Roughgarden grad algorithms on-ramp, three open-source learning tools, and the CVC OEI reconciliation-layer vantage.
The 6-Step Runway
Step 1 — Personalize the hook · this week (Apr 12–19). Replace the generic opener with one concrete moment from the last 12 months — a teaching moment, a reconciliation-layer bug, a student email. Quantify one thing. Keep the voice yours, not GPT's.
Step 2 — Fill the gaps · by May 15. SJSU specifics (courses, labs, GPA arc). Quantify student services reach, Learning Code audience, ProjectBridge scope. Explain the CVC OEI role in language a reviewer outside CS can parse.
Step 3 — Get feedback · May 15 – Jun 15. Four readers in one batch, not four rounds: Jeff Anderson (Strategic Deep Learning frame), Giorgio Lagna (CRISPR MESA), one reader outside CS, the SJSU writing center. Merge comments; don't serve each reviewer separately.
Step 4 — Draft the research plan · Jun 1 – Jul 15. Two pages. Sections: Motivation · Background · Aims · Timeline · IM/BI. Include one figure. The research plan is graded as hard as the personal statement; do not skimp because the personal statement felt like the "main" doc.
Step 5 — Secure letter writers · formally ask by Jul 1. Send each writer a packet: CV, current personal statement, current research plan, a half-page "what I hope you'll speak to." Recommenders write better letters when they know the frame you need.
Step 6 — Polish and submit · Aug – Oct 15. v5 by Aug 31 · v7 by Sept 30 · submit at least 72 hours before close. FastLane historically fails under load on deadline day; no last-day submissions.
Parallel, not additive: GRFP writing lives inside the existing Sunday Deep Write block (6:50–8:00 AM) — no new time is added to the week. Until submit, the 4-week rotation bends: Week A is the GRFP personal statement, Week B is the GRFP research plan, Weeks C/D stay as lit review / working paper. After Oct 15, the rotation snaps back to the original SoP / research statement cycle in time for the Dec 1–15 program deadlines.
Mark Guzdial (Michigan) — media computation, task-specific PLs
Beth Simon (UCSD) — peer instruction, POGIL
Cynthia Lee (Stanford) — inclusive pedagogy
Colleen Lewis (UIUC) — equity, help-seeking. CSTeachingTips.org
Sepehr Vakil (Northwestern) — justice-centered CS
Jean Salac (UW) — equity in K-12 CS
Brett Becker (University College Dublin) — error messages, GenAI
Andrew Petersen (Toronto) — CS1 assessment
Paul Denny (Auckland) — PeerWise creator
Andrew Luxton-Reilly (Auckland) — systematic reviews
Barbara Ericson (Michigan) — Parsons problems, Runestone
Kathi Fisler (Brown) — Bootstrap curriculum
Elizabeth Patitsas (McGill) — grading, stereotype threat
For each: bookmark Google Scholar profile + subscribe to alerts. Follow on Mastodon (hci.social, fediscience.org).
🎯 Top PhD Programs (deadlines typically Dec 1-15)
UW iSchool (Information Science PhD) · Dec 15 · Amy Ko, Jean Salac, Katie Davis · Full 5-yr funding
UCSD CSE · Dec 15 · Leo Porter, Christine Alvarado, Gerald Soosai Raj · Full
UCSD Cognitive Science · Dec 1 · Beth Simon, Porter (joint) · Full · CS-ed-through-cogsci angle
UC Berkeley EECS · Dec 8 · Armando Fox, Marti Hearst, Lisa Yan · Full · Competitive
Stanford CS · Dec 1 · Chris Piech, Mehran Sahami, Cynthia Lee · Full · Very competitive
CMU HCII · Dec 1 · Ken Koedinger, Amy Ogan, Majd Sakr · Full
Northwestern Learning Sciences (SESP) · Dec 1 · Sepehr Vakil, Eleanor O'Rourke · Full
Michigan CSE / School of Information · Dec 15 · Mark Guzdial, Barb Ericson · Full
Toronto CS · Dec 15 · Andrew Petersen, Michelle Craig, Steve Easterbrook · Full
Brown CS · Dec 15 · Kathi Fisler, Shriram Krishnamurthi · Full
UIUC CS · Dec 15 · Colleen Lewis, Craig Zilles · Full
📧 Cold Email Template
Subject: Prospective PhD student — [specific paper title] question
Dear Dr. Porter,
I'm Henry Fan, an Application Support Analyst at CVC OEI Exchange in the California Community Colleges system. I'm preparing applications for Fall 2027 PhD programs in CS Education.
Your 2013 "Halving Fail Rates" paper has shaped how I think about peer-instruction scaffolding for async CC CS courses. Working inside the CCC online-course reconciliation layer has given me a direct view of how async students seek help — the patterns appear to differ sharply from the synchronous case your work studied, and I'm curious whether Copilot changes the peer-explanation dynamic you documented.
Are you planning to take new PhD students for Fall 2027? I'd welcome a brief conversation about research fit, or any pointers for strengthening my preparation over the next year.
Best, Henry Fan
🎓 GRFP-fit variant: After Jul 1 (once letters are locked), a second version of this template works well — add one sentence about the GRFP personal statement topic ("I'm currently drafting an NSF GRFP on adaptive learning systems for CS equity in async CC contexts, which is why your work on [X] is directly relevant…"). Faculty read "GRFP-applicant" as a strong fit signal and reply at meaningfully higher rates.
Timing: Originally Aug–Oct is prime, BUT that window now overlaps the NSF GRFP polish sprint (v5 → v7 → submit Oct 15). Shift the primary cold-email push to late Jun–early Aug: it lands earlier in the faculty-open-to-inquiries window and leaves the Aug 15 – Oct 15 stretch clear for GRFP. Second push in late Oct–mid Nov, after GRFP ships and before the Dec 1–15 app deadlines. Avoid December (apps review). Follow up ONCE after 2 weeks.
🎓 GRFP Resources & Reference
🔗
NSF GRFP official program page
Solicitation, eligibility, deadlines, field list — always read the current year's solicitation end-to-end before Step 2
Large bank of funded personal statements + research plans, sortable by field. Read 3 CS Ed / Learning Sciences winners before rewriting the hook (Step 1)
The exact phrases reviewers are scoring against. Step 4 of the runway: verify your research plan literally uses "Intellectual Merit" and "Broader Impacts" as section headers or anchor phrases
IM / BI reminder: Every GRFP is dual-scored on Intellectual Merit (does the work advance the field?) and Broader Impacts (does it reach beyond academia?). Henry's CC + CVC OEI reconciliation-layer background is a BI goldmine — explicit equity reach, underrepresented student population, statewide infrastructure. Do not bury that story in the personal statement's second half; it belongs in both documents, named by NSF's own language.
📅 18-Month Milestone Roadmap
PhD-app track and GRFP track run in parallel. GRFP feeds the PhD apps — the personal statement polished here gets mined for every SoP in December.
Month 1-3 (Apr-Jun 2026): Zotero set up · 12 Tier-1 papers read · draft RQ v1 · blog started · contact 1 faculty informally · GRFP: hook personalized · gaps filled by May 15 · feedback batch Jun 15 · research plan v1
Month 4-6 (Jul-Sep 2026): 25 papers synthesized · 2 faculty contacted · research statement v1 · finalize program longlist (20) · ask PhD recommenders by Sep · GRFP: research plan v3 done Jul 15 · letter writers locked Jul 1 · personal statement v5 by Aug 31 · v7 by Sept 30
Month 7-9 (Oct-Dec 2026) — CRITICAL: GRFP submit ≤ Oct 15 · research statement v3 · 5 faculty contacted (2 warm replies) · SoPs drafted per program (mine GRFP personal statement for material) · letters in motion · submit PhD apps Dec 1-15
Month 10-12 (Jan-Mar 2027): Interviews, visit days, polish paper submission (Koli Calling or RESPECT 2027 first-author target) · GRFP results announced ~early April
Month 13-15 (Apr-Jun 2027): Decisions, pick program, submit first workshop paper
Month 16-18 (Jul-Sep 2027): Onboarding, pre-read with new advisor, move logistics
Sunday morning is the one block where your words are for a reader who isn't you. This is where the week's ideas stop being private — they get shape. Boice's nine-times-more finding is real, but it rests on a secret: the daily writes aren't the product; this Sunday block is. Treat it with the seriousness of a deadline even though no one is waiting for the file. Your future advisor is going to read something you write, one day — this block is the rehearsal for that moment.
When you're stuck
Blank on Sunday? Copy the best sentence from this week's micro-writes and polish that one sentence. Then the next. Polish, don't generate.
Rotation says Research Statement but you hate it today? Swap to Lit Review week. Sunday's goal is shipping polished prose, not obedience to a rotation.
Can't hit 400 words? Stop at 200. Commit with "short today" and pick up next Sunday. Shipped short beats unshipped long.
Anxious about quality? Read it out loud. If it sounds like you talking to Jeff, it's good enough to ship.
⚡ Sunday Deep Write · 6:50–8:00 AM (70 min)
Polish the week's micro-writes into 400–700 words of ship-ready prose
Peak cognition window. Hardest task. Use your 4-week rotation: Research Statement → SoP → Lit Review → Working Paper.
📋 70-Minute Sunday Protocol
0–10 min — Re-read the week's micro-writes and Anki cards. Highlight the BEST sentence from each.
10–60 min — Polish into 400–700 words of prose targeted at ONE artifact (4-week rotation below).
60–70 min — Write next week's focus sentence + pick the two papers Mon/Tue will cover.
Why daily micro-writing beats Sunday binges: Boice (1990) tracked 27 new faculty over 2 years. Daily writers produced 9× more pages. Your Sunday session POLISHES fragments — never starts from scratch.
📅 4-Week Writing Rotation
GRFP mode (until Oct 15, 2026): Week A becomes the GRFP Personal Statement. Week B becomes the GRFP Research Plan. Weeks C/D stay as-is — lit review and working-paper output directly feeds the GRFP research plan's Background + Aims sections. After Oct 15, the rotation snaps back to the four artifacts below in time for Dec 1–15 PhD deadlines.
Week A — Research Statement (→ GRFP Personal Statement until Oct 15): Polish 1 paragraph of your evolving research statement. Sharp, citation-backed claims about WHAT you want to study and WHY. In GRFP mode: polish the hook, the SJSU specifics, or the CVC OEI reconciliation-layer framing — one paragraph per Sunday.
Week B — Statement of Purpose (→ GRFP Research Plan until Oct 15): Draft a section of an SoP for ONE target program. Tailor: name 2 faculty, cite a specific paper of theirs, connect to your CC teaching. In GRFP mode: draft ONE section of the 2-page research plan (Motivation / Background / Aims / Timeline / IM / BI) per Sunday.
Week C — Lit Review Mini-Synthesis: Write a 500-word piece connecting 3-5 papers you've read recently. THIS becomes a blog post (publishable!) + future paper, AND directly feeds the GRFP research plan's Background section.
Week D — Working Paper / Position Piece: Draft a section of a future SIGCSE submission OR a position paper (e.g., "AI in CC CS: A Pragmatic Framework"). Doubles as GRFP research plan Aims content.
📖 Writing Resources (Free & Essential)
📕
"How to Write a Lot" by Paul Silvia
The productivity bible for academic writers · 100 pages · Boice's daily-writing research applied
📕
"The Craft of Research" by Booth, Colomb, Williams, Bizup
Free PDFs widely circulated · Research questions, evidence, warrants, drafting
📕
Helen Sword's "Stylish Academic Writing"
How to write academic prose that doesn't put readers to sleep
Creswell, Research Design (5e) — mixed methods bible
Saldaña, The Coding Manual for Qualitative Researchers — thematic analysis
Charmaz, Constructing Grounded Theory — grounded theory
OpenIntro Statistics (free at openintro.org) — descriptive + inferential stats
CITI IRB training (Social/Behavioral module at citiprogram.org) — needed before any human subjects research
Fincher & Robins, Cambridge Handbook of Computing Education Research — THE reference
📝 SoP Structure for CS Ed
Hook — A specific teaching moment that generated a research question
Trajectory — Your path: support analyst → CC instructor → research-curious
Research interests — 2-3 questions with citations to 3-5 papers in target lab
Why this program — Name 2-3 faculty, one specific paper each, why YOUR background uniquely positions you
Career goal — Usually "tenure-track faculty researching X" or "research scientist"
Personal Statement vs Research Statement: Many programs require BOTH. Personal = diversity/lived experience. Research = intellectual agenda. Don't duplicate.
The channel is the one track where your growth is public. Everywhere else, progress is invisible until the acceptance letter or the published paper. Here, week 4 looks like week 4 and week 50 looks like week 50 — both on the same shelf, forever. That's a gift, and it's also the reason the single rule is ship. A published 7/10 beats a queued 9/10 every single time. Marisol doesn't need your best video. She needs a video — the one that tells her she's not alone.
When you're stuck
Don't know what to script? Open the first-12-weeks list below. Pick the next unchecked one. Write only the hook sentence and stop.
Too tired to record? Record anyway — one take, no retakes. Tomorrow you're editing, not recording.
Edit feels tedious? Cut one minute of footage. Save. Close. Next session is a fresh edit.
Thumbnail paralysis? Use a text-only Canva template. Thumbnails can improve; unpublished videos can't.
Weekday 30 min low-load creative. Saturday 90 min big build. (Monday 8–8:40 PM is the Jeff Anderson mentor call — not a YouTube day.)
🎯 Channel Positioning
Working name: "First Principles CS"
Tagline: "Computer science for the rest of us — built from scratch"
Mission: Teach CS from the ground up for community college students who are serious about transferring, grad school, or building real systems — no prerequisites, no gatekeeping, no hand-waving
Target persona — "Marisol, 20": First-gen, second-year CC student, working 20 hrs/week, just finished CS1 in Python, intimidated by "algorithm," scared she's behind transfer students
Differentiation: "Ben Eater–style first-principles teaching, made for community college students who are applying to transfer. Taught by a CC instructor who's been there."
📺 6 Content Series
Series A — "Build a Computer From Nothing" (15 eps, 15-25 min, flagship): Mirrors Ben Eater with CC framing. Why build a CPU? · Binary · Logic gates · Adders · Clock · Registers · RAM · Program counter · Instruction decoder · First running program
Series B — "CLRS Decoded" (15 eps, 12-18 min): One algorithm per video. 10 min explainer + 10 min code-along. Insertion · Merge · Quick · Heap · Binary search · BFS · DFS · Dijkstra · Hash tables · DP intro · Fibonacci · Coin change · Union-find · Topo sort · Master theorem
Series C — "Office Hours Recovered" (12 eps, 5-10 min, START HERE): "I'm scared of my first CS class" · "Stack vs heap with a backpack" · "What is a pointer, actually?" · "Recursion is not magic" · "Big-O in 7 minutes" · "Am I too late to learn CS?"
Series D — "Should I Get a CS PhD?" (10 eps, 10-15 min): Why a CC student should consider a PhD · Transferring from CC to top CS program · Cold-emailing professors · GRE, grades, what really matters
Series E — "Henry Builds X" (10 eps, 20-30 min): Sorting visualizer · Mini search engine · Build your own shell/grep · Tiny interpreter · Chip-8 emulator · Spreadsheet from scratch · NN from scratch
Series F — "Transfer Track" (8 eps, 8-12 min): IGETC · ASSIST.org · Transfer essay · Timelines · UC vs CSU vs private · CS vs CompE
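The Series A "Adders" episode can make the first-principles promise concrete by building everything from a single primitive. A minimal Python sketch of that idea (illustrative only, the real episode would wire this on a breadboard; all function names here are ours, not from any library):

```python
# Everything below is built from ONE primitive: NAND.
def nand(a, b):
    return 0 if (a and b) else 1

# Classic derivations: each gate is just NANDs wired together.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = or_(and_(a, b), and_(s1, carry_in))
    return total, carry_out

def add4(a_bits, b_bits):
    """4-bit ripple-carry adder; bits are least-significant first."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 0b0011 (3) + 0b0101 (5) = 0b1000 (8), no carry out
print(add4([1, 1, 0, 0], [1, 0, 1, 0]))  # → ([0, 0, 0, 1], 0)
```

Twenty lines, zero magic: the same ripple-carry structure scales to the breadboard build in later episodes.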
📅 Weekday Production Workflow
Mon — Script (45m): Pick from backlog. 1-page outline: Hook → Promise → 3 teaching beats → Application → CTA. Don't write a full script — talking points only.
Tue — Record (45m): 5-min setup, 30-min A-roll in 2-3 takes, 10-min B-roll/screen capture. Stand if possible (energy).
Wed — Edit (45m): Rough cut only. Cut filler, title cards, one music bed, 2-3 SFX. Don't chase perfection.
Thu — Publish (45m): Thumbnail (15m Canva), SEO title/description (10m), upload, schedule, pin first comment.
Fri — Plan (45m): Analytics (CTR, AVD, retention), reply to comments, 2-3 new ideas, pick next topic.

🎬 Scripting Template (tape above your desk)
HOOK (10s): "Most CC students never learn how a CPU actually works. By the end of this, you'll have built one in your head."
PROMISE (15s): "I'll show you the clock module — the heartbeat of every computer."
TEACH (6-8 min): 3 beats, each with visual + analogy + code/hardware
APPLY (60s): "Here's why this matters for your transfer/interview/CS2 class..."
CTA (15s): "Subscribe for the next one — we're building the register file."
🛠 Equipment ($0-500 budget)
Phase 0 — Start This Weekend ($0-80)
iPhone 1080p30 as camera (NOT 4K — file size)
Fifine K669B USB mic ($30) or Maono AU-PM421 ($55)
Wk 1: "Why I'm making this channel — for CC students like me" (4 min)
Wk 2: "I'm scared of my first CS class — watch this" (6 min) · Series C
Wk 3: "Insertion Sort, Visualized for CC Students" (12 min) · Series B
Wk 4: "What's actually inside your laptop?" (10 min) · Series A-1
Wk 5: "Stack vs Heap with a backpack analogy" (7 min) · Series C
Wk 6: "Binary: counting like a computer" (12 min) · Series A-2
Wk 7: "Merge Sort, decoded" (14 min) · Series B
Wk 8: "Should a CC student get a CS PhD?" (10 min) · Series D
Wk 9: "Logic gates on a breadboard — no EE background needed" (18 min) · Series A-3
Wk 10: "Pointers, from first principles" (9 min) · Series C
Wk 11: "Binary Search, decoded" (11 min) · Series B
Wk 12: "Building an adder by hand" (20 min) · Series A-4
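The Wk 11 "Binary Search, decoded" episode pairs the 10-minute explainer with a code-along. One plausible shape for that code-along (a sketch, not a committed script; the teaching beat is the halving invariant in the docstring):

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent.

    Invariant: if target is present, it lives in items[lo:hi].
    Each loop iteration halves that window, so the search is O(log n).
    """
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target can only be right of mid
        else:
            hi = mid       # target can only be left of mid
    return -1

primes = [2, 3, 5, 7, 11, 13, 17, 19]
print(binary_search(primes, 13))  # → 5
print(binary_search(primes, 4))   # → -1
```

On camera, trace one successful and one failing search by hand before running it: the window-halving is the whole lesson, the code is just the receipt.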
Realistic expectation: By week 12 → 12 videos, ~500-1500 total views, ~50-150 subs. THIS IS NORMAL AND GOOD. Median educational channel takes 12-24 months to hit 1k subs.
🚫 What to Avoid
Don't imitate ThePrimeagen — be the calm classroom teacher
Don't chase production value — Ben Eater shoots in a garage
Done > perfect — Published 7/10 beats unpublished 9/10
Don't teach what you haven't built — Make it a "learning in public" episode
Don't monetize for 18+ months — distracts from craft
Don't compare week 4 to year 4 channels
Don't delete old videos — they become "I used to be bad too" proof
Don't record students without consent — use composite questions
This pane is the map, not the route. The ten grad-level courses you could self-teach are a menu, not a target. Pick three that touch what you actually want to research — for CS-Ed prep, that's usually ML + HCI + Algorithms — and skim the rest as reference. The goal is not to master everything before grad school. The goal is that when a professor drops a term in week two of a grad course, you don't spend lecture three googling it while everyone else is taking notes.
When you're stuck
Not actively on a self-study course? This pane is reference, not daily protocol. Use it on Sunday planning to pick next quarter's focus.
Too many courses, can't decide? Pick one and commit for six weeks. You cannot learn distributed systems and compilers in the same month.
Overwhelmed by the list? Cross out every course that doesn't serve HCI, ML, or Algorithms. The list is now short.
Lost the thread between this and your actual PhD work? Re-read the mentor note above. Remember: map, not route.
⚡ Pre-Grad-School Excellence Track
Daily prep regimen for crushing grad CS coursework
Real Analysis — Abbott or Rudin · 12 weeks · Only if going theory-heavy
📚 Essential Books Beyond Textbooks
How to Prove It by Velleman — proof techniques · DO BEFORE GRAD THEORY
Concrete Mathematics by Knuth/Graham/Patashnik — math for CS bible
The Algorithm Design Manual by Skiena — CLRS's practical cousin
Designing Data-Intensive Applications by Kleppmann
Computer Systems: A Programmer's Perspective (CS:APP) by Bryant & O'Hallaron
OSTEP by Arpaci-Dusseau (FREE)
Crafting Interpreters by Bob Nystrom (FREE)
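For a taste of where Crafting Interpreters leads (and of the "tiny interpreter" project in Series E), a postfix-expression evaluator fits in a dozen lines. This is our own toy sketch, not the book's Lox walkthrough:

```python
def eval_postfix(tokens):
    """Evaluate a postfix (RPN) expression, e.g. "3 4 + 2 *" → 14.0."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens.split():
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # pop order matters for - and /
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))          # anything else is a number
    return stack.pop()

print(eval_postfix("3 4 + 2 *"))  # → 14.0
```

The stack machine here is the same shape as the bytecode VM the book builds in its second half: once this clicks, the real thing is a matter of scale.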
🏆 Top 5 Pre-Grad-School Milestones
Complete Nand2Tetris — proves systems thinking
Work through CLRS Ch 1-30 with full solutions to ~1/3 of exercises
xv6 modifications — add a syscall, implement COW fork, add a scheduler
End-to-end ML system — dataset → training → deployment (FastAPI) → GitHub
Write 2-3 technical blog posts — e.g., "Raft in 500 lines of Go" · function as proto-research papers
The single highest-ROI activity right now: MIT 6.824 + DDIA. Distributed systems is the intersection of every other course and the strongest signal for PhD admissions. Pair with CS:APP labs for systems depth and CS229 for ML breadth, and you'll enter any grad program ready to contribute in your first semester.
🧠 The Grad Student Mindset Shift
Undergrad: Reads textbook. Solves psets.
Grad: Reads papers. Proposes extensions. Writes their own.
Start now: Each week read one NeurIPS/SOSP/SIGCOMM/POPL paper. Ask "what would I do differently?"
Keep an "ideas" file: When you disagree with a textbook claim, write it down and try to formalize your objection.
Think of every course not as "material to learn" but as "tools I'm adding to my research toolbox"