Writing
Statements, Papers, and Reading
Formal statements, working papers, and an annotated reading list organized around the intellectual territory this work occupies. The annotations describe what each work contributes to my thinking — not just what it says, but where I agree, where I push back, and how it shapes specific research questions and curriculum decisions.
Formal Statements
Research Statement · March 2026
Structural Predictors of Help-Seeking and Departure in Introductory CS: A Research Program for Community Colleges
A formal statement of my research agenda, written for PhD applications. Covers the problem space (structural equity in introductory CS), four research questions, theoretical grounding (Seymour & Hunter, Harel, Walton & Brady), methodological approach (mixed-methods: qualitative interviews + learning analytics + NLP), and why this work requires doctoral training. The central argument: students leave introductory CS not because they lack ability but because course structures suppress the help-seeking behavior that persistence requires — and those structures are measurable, changeable, and unevenly distributed across institution types.
Teaching Philosophy · March 2026
Derive Before Compute: A Teaching Philosophy for Introductory CS at Community Colleges
A statement of pedagogical commitments grounded in constructionism (Papert), productive failure (Kapur), self-determination theory (Deci & Ryan), and antiracist learning science (Anderson, Ko, Freire). Three core principles: (1) Derive before compute — every concept arrives as the answer to a question the student already owns. (2) Build before import — students implement data structures, circuits, and algorithms from scratch before using abstractions. (3) Equity as design — the social implications of technical systems are part of understanding what those systems do, not an afterthought.
The philosophy is enacted through a three-track system (Novice / Builder / Architect), portfolio assessment with student-proposed grades following Anderson's ungrading framework, and a signature project — Build a Computer from Scratch — where teams construct a working 8-bit breadboard computer and encounter physics, linear algebra, and chemistry along the way. The argument: hands-on, first-principles construction is the most equitable form of CS instruction because it produces understanding that doesn't depend on prior exposure to computing culture.
Working Papers
Working Paper · In Progress
Help-Seeking Suppression in CS Education: A Literature Synthesis
A synthesis of the existing literature on help-seeking behavior in introductory CS, organized around three themes: structural predictors (course design features that suppress help-seeking), measurement approaches (how help-seeking has been operationalized in learning analytics research), and equity dimensions (how help-seeking patterns differ across demographic groups). Draws on Seymour & Hunter, Newman (2002), Karabenick & Berger, Ryan & Pintrich, and recent SIGCSE/ICER papers on LMS behavioral analysis. The synthesis identifies a gap: most help-seeking research has been conducted at four-year research universities, and the structural conditions of community colleges — open admissions, high work obligations, limited office hours culture — may produce qualitatively different suppression patterns.
Targeting 3,000–4,000 words · Expected: Spring 2026
Methods Note · In Progress
Why Mixed Methods: Combining Qualitative Interviews with Computational Analysis in CS Education Research
A methodological reflection on why computational analysis of course materials and LMS data must be grounded in qualitative interview studies rather than pursued independently. The argument: NLP features for detecting help-seeking suppression are only meaningful if the construct itself has been validated through interviews with actual students — otherwise, we build classifiers that are predictive but not actionable (Wise & Shaffer, 2015). This is why P3 (interviews) precedes P1 (learning analytics) in my execution timeline. The note engages with debates about mixed-methods research in CS education (Margulieux et al., 2019), learning analytics (Wise & Shaffer, 2015), and the broader philosophy-of-methods question of when quantification helps and when it obscures.
Expected: Spring 2026
Curriculum Design Paper · In Progress
Building Understanding: Cross-STEM Transfer in a First-Principles Computer Architecture Course for Community Colleges
A design paper describing the pedagogical rationale, structure, and theoretical grounding of a 20-week team-based project in which community college students build a working 8-bit computer on breadboards. The paper argues that physical computing creates transfer opportunities across five STEM disciplines (physics, discrete mathematics, linear algebra, differential equations, chemistry) and that these transfer moments must be explicitly named by the instructor to achieve Perkins & Salomon's "high road" transfer. Describes the three-track system, the assessment model (portfolio + ungrading), and seven learning science principles that ground the curriculum: constructionism, productive failure, experiential learning, self-determination theory, transfer theory, situated learning, and intellectual need. Targets SIGCSE 2027 Experience Reports track.
Draft in preparation · Targeting SIGCSE 2027
Annotated Reading List
Key papers and books that shape this research program and curriculum practice, organized by intellectual lineage rather than chronology. Each annotation notes how the work informs specific research questions and teaching decisions.
Foundations: STEM Departure and Structural Equity
Seymour, E., & Hunter, A.-B. (2019). Talking About Leaving Revisited. Springer.
The empirical foundation for this entire research program. Seymour and Hunter's central finding — that students who leave STEM are not academically weaker than those who stay, and that departure correlates with teaching quality and institutional culture — motivates every project on this site. Their taxonomy of departure reasons (weed-out culture, help-seeking suppression, belonging threats) provides the deductive coding framework for P3 and the feature selection rationale for P1. The key open question for my work: their studies were conducted at research universities. Do the findings replicate at community colleges, where the structural conditions — open admissions, compressed schedules, part-time faculty — differ fundamentally?
Margolis, J., & Fisher, A. (2002). Unlocking the Clubhouse: Women in Computing. MIT Press.
Foundational qualitative work on how CS culture systematically excludes students who don't match the "geek mythology" profile. Margolis and Fisher's analysis of how belonging is structurally communicated (or denied) through course design, peer culture, and institutional signals directly informs P5's coding scheme for belonging signals in course materials. Their methodological approach — in-depth interviews combined with institutional analysis — is a model for P3. What I carry forward: the insight that exclusion isn't always intentional — it's often structural, embedded in choices that seem neutral but aren't.
Walton, G. M., & Brady, S. T. (2017). The many questions of belonging. In Handbook of Competence and Motivation (2nd ed.). Guilford Press.
Walton and Brady distinguish interpersonal belonging (do I have friends here?) from structural belonging (does this institution signal that people like me are expected to succeed?). This distinction is critical for my work — P5 specifically measures structural belonging features that are visible in course materials, not interpersonal dynamics that require observational methods. The Walton 3-item belonging scale is the validation target for P5. I find myself returning to their phrase "belonging uncertainty" — the not-knowing whether you belong — as the most precise description of what first-generation community college students experience in introductory CS.
Lewis, C. M., et al. (2017). Building belonging in an introductory CS course. SIGCSE '17.
A practical intervention study that operationalizes belonging in an intro CS context. Lewis et al. show that specific, concrete changes to course design — not just instructor warmth — can measurably increase belonging. This paper is a bridge between the theoretical belonging literature (Walton & Brady) and the actionable curriculum design that my teaching practice aims for. It validates the premise that belonging is designable, not just personality-dependent.
Learning Analytics and Help-Seeking
Karabenick, S. A., & Berger, J.-L. (2013). Help seeking as a self-regulated learning strategy. In Applications of Self-Regulated Learning across Diverse Disciplines.
Establishes help-seeking as a self-regulated learning strategy rather than a sign of weakness — a reframing that matters for P1. Karabenick and Berger's framework distinguishes adaptive help-seeking (seeking understanding) from executive help-seeking (seeking answers). P1's NLP classification of discussion forum posts attempts to operationalize this distinction computationally. The challenge I see: this framework was developed for traditional classroom settings. In LMS discussion forums, the line between adaptive and executive help-seeking may be blurred by the public nature of the medium and the norms of the course.
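To make the operationalization concrete, here is a deliberately toy baseline — my illustration, not the P1 pipeline, with hand-picked cue phrases standing in for validated features — that labels a forum post as adaptive (seeking understanding) or executive (seeking the answer) from surface cues:

```python
# Toy adaptive-vs-executive classifier. Illustrative only: a real
# pipeline would learn features from posts labeled via the
# interview-grounded coding scheme, not from hand-picked keywords.
ADAPTIVE_CUES = {"why", "understand", "confused about", "how does"}
EXECUTIVE_CUES = {"just give", "the solution", "full code", "what's the answer"}

def classify_post(text: str) -> str:
    """Label a post 'adaptive' or 'executive' by counting cue phrases."""
    t = text.lower()
    adaptive = sum(cue in t for cue in ADAPTIVE_CUES)
    executive = sum(cue in t for cue in EXECUTIVE_CUES)
    return "adaptive" if adaptive >= executive else "executive"

print(classify_post("Why does my loop stop early? I want to understand."))
```

Even this toy version surfaces the blurring problem noted above: a post can contain cues of both kinds, and the public forum medium shapes which cues students are willing to write at all.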
Wise, A. F., & Shaffer, D. W. (2015). Why theory matters more than ever in the age of big data. Journal of Learning Analytics, 2(2).
A crucial methodological argument: learning analytics without theoretical grounding produces features that are predictive but not actionable. This paper is the reason P3 (qualitative interviews) precedes P1 (learning analytics) in my timeline. The features I extract from LMS data must be interpretable through the theoretical lens that P3's interviews establish. Wise and Shaffer's critique directly shapes my methods note on mixed-methods design. I take their argument one step further: theory isn't just necessary for interpretation — it's necessary for construct definition. Without P3, I don't know what "help-seeking suppression" looks like in community college LMS data.
Newman, R. S. (2002). How self-regulated learners cope with academic difficulty: The role of adaptive help seeking. Theory Into Practice, 41(2).
Newman's framework on adaptive help-seeking provides the motivational model underlying P1: students who don't seek help aren't lazy — they're making a rational cost-benefit calculation based on their perception of the learning environment. If the perceived cost of asking (looking incompetent, wasting the instructor's time, revealing a "basic" gap) exceeds the perceived benefit, help-seeking is suppressed. My contribution: I believe these perceived costs are structurally produced by course design features that are measurable in course materials — assignment framing, syllabus language, grading policy.
Constructionism, Physical Computing, and Hands-On Learning
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. Basic Books.
The foundational text for everything I build as an instructor. Papert's central insight — that students learn most deeply when they construct public, shareable artifacts — is the theoretical basis for the Build a Computer project, for ProjectBridge, and for portfolio-based assessment. What strikes me on re-reading is how precisely Papert diagnosed the failure mode of traditional CS education: the belief that you teach programming by explaining syntax rather than by creating an environment where programming is the natural tool for doing something the student already cares about. The breadboard computer is my attempt to create that environment for computer architecture.
Harel, I., & Papert, S. (1991). Constructionism. Ablex.
Extends Papert's initial formulation into a research program. The key addition: constructionism is not just "learning by doing" — it's learning by building something public, for an audience, that the student cares about. This publicness matters. When students build a breadboard computer that has to work in front of the class, the artifact is both the learning product and the assessment. The computer works or it doesn't — no rubric required. Harel and Papert's emphasis on "objects to think with" directly shapes my curriculum design: every module produces a physical, testable artifact.
Blikstein, P. (2013). Digital fabrication and "making" in education: The democratization of invention. In FabLabs: Of Machines, Makers and Inventors.
Blikstein connects the maker movement to Papert's constructionism and argues that digital fabrication tools democratize invention in ways that traditional lab equipment doesn't. I'm interested in a more specific version of this claim: does the physicality of breadboard construction create learning outcomes that simulation cannot? My hypothesis — informed by embodied cognition research — is yes, specifically for students who lack prior computing cultural capital. The physical artifact serves as proof of understanding that no amount of tutorial-following can replicate.
Sentance, S., et al. (2017). Creating cool stuff: Pupils' experience of the BBC micro:bit. SIGCSE '17.
One of the few empirical studies of physical computing in CS education at scale. Sentance et al. report that physical computing increased engagement and confidence, particularly among students who were less interested in CS beforehand. This finding maps directly to my argument that hands-on construction is an equity intervention: it provides an entry point for students who don't identify with traditional CS culture. The limitation I want to address: the micro:bit is pre-designed. What happens when students build the computer itself rather than program someone else's?
Przybylla, M., & Romeike, R. (2014). Physical computing and its scope — towards a constructionist computer science curriculum. Informatics in Education, 13(2).
Przybylla and Romeike define physical computing as a distinct pedagogical approach (not just "using sensors") and argue for its inclusion in CS curricula. Their framework helps me articulate why the Build a Computer project is qualitatively different from a Raspberry Pi tutorial: it's physical computing applied to the computer itself, making the normally invisible visible. The question they leave open — and that my curriculum attempts to answer — is how physical computing scales to a full semester course rather than isolated activities.
Curriculum Design and the Necessity Principle
Harel, G. (1998). Two dual assertions: The first on learning and the second on teaching (or vice versa). The American Mathematical Monthly, 105(6).
The necessity principle: students must experience intellectual need before receiving the tool that addresses it. Harel's formulation for mathematics ("if math is the medicine, what is the headache?") generalizes to CS education. P2's annotation scheme for "motivational debt" is a direct operationalization of necessity principle violations — moments where courses introduce formalism before students have experienced the problem the formalism solves. In my curriculum, every module begins with a problem the students can't yet solve — and the week's content is the tool that resolves it.
Anderson, J. Applied Linear Algebra Fundamentals. (Textbook and twelve modeling criteria.)
Jeff Anderson's textbook and teaching practice are the existence proof that the necessity principle works at scale. His twelve modeling criteria for curriculum design provide the practical grounding for P4's graph-based curriculum analysis. Anderson's broader pedagogical framework — antiracist learning science, ungrading, strategic deep learning, five learner-centered objectives — is the foundation of my own teaching philosophy. His conviction that "every classroom decision should map back to research in cognitive science, the psychology of learning, and the scholarship of anti-racism" is the standard I hold my curriculum to.
Ko, A. J. (2022). Critically Conscious Computing. Online: criticallyconsciouscomputing.org.
Ko's free textbook redefines what it means to teach CS with equity as a design principle, not an add-on module. Her read-before-write sequencing, emphasis on student agency, and insistence that social implications are inseparable from technical understanding directly inform my curriculum structure. What I find most valuable is her framework for connecting coding activities to students' lived experiences — a challenge I face every day with community college students who ask "why does this matter to my life?" Ko gives a principled answer rather than a motivational one.
Learning Science Frameworks
Kolb, D. A. (1984). Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall.
The four-phase cycle — concrete experience, reflective observation, abstract conceptualization, active experimentation — structures every week of the Build a Computer curriculum. Students build (experience), write build journal entries (reflection), connect to theory and cross-STEM bridges (conceptualization), and attempt modifications or extensions (experimentation). What I've learned from implementing this: the reflection phase is the one students and instructors most want to skip, and it's the one that produces the deepest learning. The 20-minute daily reflection in my curriculum is non-negotiable because Kolb says the cycle is incomplete without it.
Kapur, M. (2010). Productive failure in mathematical problem solving. Instructional Science, 38(6).
Kapur demonstrates that students who struggle with a problem before receiving instruction develop deeper understanding than those who receive direct instruction first. This finding is the theoretical basis for the Build a Computer project's integration sessions — the moments when teams connect their independently built modules and face the reality that the system doesn't work yet. The debugging process is not a failure of instruction; it is the instruction. Kapur gives me the language to defend this design choice to administrators who worry about student frustration.
Perkins, D. N., & Salomon, G. (1992). Transfer of learning. International Encyclopedia of Education (2nd ed.).
The distinction between "low road" transfer (automatic, through practiced routines) and "high road" transfer (deliberate, through conscious abstraction and connection-making) is the theoretical foundation for every STEM Bridge Moment in my curriculum. Perkins and Salomon's key finding: high road transfer does not happen automatically — the instructor must explicitly name the connection. This is why every module in the Build a Computer curriculum includes a scripted bridge sentence for the instructor: "The equation governing this clock circuit is the same equation you'll solve in Differential Equations. You've already seen what the solution looks like."
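To make one such bridge concrete — assuming an RC-based clock module, such as the 555-timer circuits common in breadboard computer builds — the voltage on the timing capacitor obeys a first-order linear ODE, exactly the class of equation a Differential Equations course opens with:

```latex
% Charging a capacitor C through a resistor R toward supply voltage V_s:
RC \frac{dV}{dt} + V = V_s, \qquad V(0) = 0
\;\;\Longrightarrow\;\;
V(t) = V_s \left( 1 - e^{-t/RC} \right).
```

The clock ticks when V(t) crosses the chip's threshold voltage, so the period is set by the product RC — the same time constant students later meet as the decay rate of first-order systems.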
Deci, E. L., & Ryan, R. M. (2000). The "what" and "why" of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4).
Self-determination theory identifies three conditions for intrinsic motivation: autonomy (choice), competence (calibrated challenge), and relatedness (belonging). My three-track system maps directly to SDT: Track I/II/III provide autonomy through genuine choice, calibrated challenge through differentiated depth, and the team-based structure provides relatedness. The design test for every week: does this structure guarantee all three conditions, or does it rely on hope? SDT says the conditions must be structural, not aspirational.
Ambrose, S. A., et al. (2010). How Learning Works: Seven Research-Based Principles for Smart Teaching. Jossey-Bass.
The most practically useful book on pedagogy I've read. Each of the seven principles — prior knowledge, knowledge organization, motivation, mastery, practice and feedback, course climate, self-directed learning — has immediate implications for curriculum design. I return to Chapter 6 (course climate) most often: the finding that students' perception of the learning environment determines whether they take intellectual risks, which determines whether they learn. The Build a Computer curriculum's portfolio assessment, celebration moments, and "no wrong track" framing are all attempts to create the climate that Ambrose et al. describe.
Meyer, J. H. F., & Land, R. (2003). Threshold concepts and troublesome knowledge. In Improving Student Learning.
Threshold concepts are transformative and irreversible — once understood, they permanently change how a student thinks. Meyer and Land argue that these concepts require extended engagement and toleration of uncertainty. In the Build a Computer curriculum, I've identified three threshold concepts: two's complement representation, the fetch-decode-execute cycle, and the idea that "programs" are just numbers stored in the same memory as data. Each gets extra time, extra scaffolding, and explicit acknowledgment that the confusion is expected and productive.
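A throwaway sketch — my illustration, not part of the curriculum materials — of the first and third threshold concepts: the same bit pattern means different things depending only on how the machine is told to read it.

```python
def to_signed_8bit(byte: int) -> int:
    """Read an 8-bit pattern (0..255) as a two's complement integer."""
    return byte - 256 if byte >= 128 else byte

# The same bits, two interpretations:
assert 0b11111111 == 255                  # read as unsigned
assert to_signed_8bit(0b11111111) == -1   # read as two's complement
assert to_signed_8bit(0b10000000) == -128 # the most negative value

# Programs are just numbers: this "memory" holds an instruction byte
# and a data byte, distinguished only by how the machine treats them.
memory = [0b00011110, 42]   # hypothetical opcode byte, then an operand
opcode, operand = memory[0], memory[1]
```

The moment a student realizes the assertion on line one and the assertion below it are both true of the *same byte* is, in my experience, exactly the kind of irreversible shift Meyer and Land describe.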
Community College Research
Bailey, T., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America's Community Colleges: A Clearer Path to Student Success. Harvard University Press.
The "guided pathways" framework argues that community college students are not well served by the cafeteria model of course selection. Bailey, Jaggars, and Jenkins show that structured pathways with built-in advising produce dramatically better completion rates. I find myself in partial agreement: structure matters, but so does agency. My three-track system attempts to combine the structure of guided pathways with the autonomy of student-directed learning. The tension between these two design goals — structure for persistence, agency for motivation — is one of the most interesting design problems in community college CS.
SEISMIC Consortium. (Ongoing). Large-scale equity analysis of STEM education.
Multi-institutional dataset and methodology for studying equity in STEM at scale. Relevant to P1's learning analytics approach and to the eventual scaling of this work beyond a single institution. The SEISMIC approach of combining institutional data with equity analysis is a model for what I hope P1 can contribute at the community college level — where the data infrastructure is often less mature but the equity stakes are higher.
CS Education Research Methods
Fincher, S., & Robins, A. (Eds.). (2019). The Cambridge Handbook of Computing Education Research. Cambridge University Press.
The most comprehensive survey of CS education research methods, findings, and open questions. I treat this as a reference rather than a cover-to-cover read, but Chapter 1 (What is CS Ed Research?), Chapter 4 (Introductory Programming), Chapter 8 (Assessment), and Chapter 14 (Equity) have been formative. The handbook reveals how young this field is — many fundamental questions remain open, which is both humbling and encouraging for a researcher entering the field.
Margulieux, L. E., et al. (2019). When and how to guide CS students. ICER '19.
Important for my understanding of when instructional scaffolding helps versus hinders CS students — directly relevant to P2's annotation scheme, which attempts to measure scaffolding regularity in course materials. Margulieux et al. show that the relationship between scaffolding and learning is non-monotonic: too little guidance causes unproductive floundering, but too much prevents the productive failure that Kapur shows is necessary for deep understanding. My curriculum design attempts to navigate this by providing structure (milestones, checkpoints) without providing answers (no step-by-step instructions for the build).
Guzdial, M. (2015). Learner-Centered Design of Computing Education: Research on Computing for Everyone. Morgan & Claypool.
Guzdial's monograph makes the case for computing education as a field with its own research questions, not just a subfield of either CS or education. His insistence on "computing for everyone" — not just for future software engineers — directly informs my decision to teach computer architecture as a cross-STEM liberal arts experience rather than as vocational training. The most provocative claim: most students will not become programmers, so CS education must be justified on grounds other than workforce preparation. I agree, though I'd add: for community college students, the workforce argument and the intellectual argument must both be present.
AI in CS Education
Porter, L., & Zingaro, D. (2024). Learn AI-Assisted Python Programming. Manning.
The first serious textbook on integrating AI code generation into introductory CS pedagogy. Porter and Zingaro don't treat AI as a threat to be managed — they design a curriculum around it. What I take from this work: the argument that AI changes what students need to learn (reading and evaluating code becomes more important than writing it from scratch). Where I push back: for community college students who have never built anything, I believe the act of construction — even if AI could do it faster — is irreplaceable. The Build a Computer project is my strongest argument for this position: no AI can debug a floating pin on a breadboard.
Denny, P., et al. (2024). Desirable characteristics for AI teaching assistants in programming education. ITiCSE '24.
Denny et al. identify what instructors want from AI teaching tools: not replacement of human interaction, but amplification of it. The characteristics they identify — scaffolding rather than answer-giving, Socratic questioning, sensitivity to student frustration — map directly to how I think about AI integration in my curriculum: AI as copilot, not author. In the Build a Computer curriculum, AI is introduced in Layer 5 (Weeks 17–20), after students have built the foundational knowledge to evaluate its output. The order matters.
Equity Frameworks and Pedagogical Foundations
Freire, P. (1968/1970). Pedagogy of the Oppressed. Continuum.
The banking model of education — instructor deposits knowledge into passive students — is the default mode of introductory CS, and it is the structural root of the departure problem that Seymour and Hunter document. Freire's alternative — education as dialogue, as co-investigation, as liberation — is the animating principle behind project-based learning, student-proposed grades, and the design of ProjectBridge as a platform where students are builders, not consumers. Every capstone in my curriculum is student-chosen and connected to a problem in the student's community because Freire says knowledge is liberatory only when connected to lived experience.
hooks, b. (1994). Teaching to Transgress: Education as the Practice of Freedom. Routledge.
hooks extends Freire into the American classroom and insists on two things: bringing the full self to learning, and the instructor's own vulnerability as a pedagogical tool. I take this seriously. The reflective writing I require alongside every project is not an assessment gimmick — it's hooks' argument that learning involves the whole person, not just the cognitive apparatus. Students write about struggle, confusion, and discovery, and I write back. The classroom is a site of intellectual community, not a delivery mechanism for content.
Washington, A. N. (2020). When twice as good isn't enough: The case for cultural competence in computing. SIGCSE '20.
Washington argues that the computing field's equity interventions have focused too narrowly on access and pipeline — getting students in the door — without addressing the cultural environment they encounter once inside. This reframes the problem: the issue is not that community college students lack preparation; it's that the computing culture they encounter was not designed with them in mind. My research program is an attempt to make that cultural environment measurable and changeable through curriculum design.
This list is not comprehensive — it reflects the works that most directly shape active projects and teaching practice. A fuller bibliography is available in each project's research design. See also: Teaching Computing Differently, where these readings are enacted as curriculum.
Last updated: March 2026