Ding's First Course: Foundations (Chapters 1 to 4)

This section groups the first four randomized-experiment chapters:

  • Chapter 1: Correlation, Association, and Simpson’s Paradox
  • Chapter 2: Potential Outcomes
  • Chapter 3: The Completely Randomized Experiment and the Fisher Randomization Test
  • Chapter 4: Neymanian Repeated-Sampling Inference

These are the most direct ports in the entire book: the chapters need little beyond numpy, plotting, and the occasional OLS fit.
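
To give a sense of how light the OLS usage is: in a randomized experiment, the difference in means is exactly the slope from regressing the outcome on an intercept and the treatment indicator, so a single numpy least-squares call covers it. A minimal sketch on simulated data (all numbers made up for illustration, not drawn from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
z = rng.integers(0, 2, size=n)          # treatment indicator (illustrative)
y = 1.0 + 0.5 * z + rng.normal(size=n)  # simulated outcomes

# OLS of y on [1, z]: the slope reproduces the difference in means exactly.
X = np.column_stack([np.ones(n), z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.isclose(beta[1], y[z == 1].mean() - y[z == 0].mean())
```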

1 What Is Covered

  • marginal versus conditional comparisons and Simpson-style reversals (a toy numeric sketch follows this list)
  • treatment-effect heterogeneity and potential-outcomes algebra
  • exact randomization distributions under sharp nulls via the Fisher randomization test
  • repeated-sampling logic for the difference in means and Neyman's conservative variance estimator (both appear in the simulation sketch below)
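
The first bullet is Chapter 1 territory. Here is a minimal plain-Python sketch of a Simpson-style reversal, with hypothetical counts chosen only to make the flip visible (nothing here comes from the book's data):

```python
# Hypothetical (successes, trials) counts by stratum and arm, constructed
# so the within-stratum and pooled comparisons disagree.
strata = {
    "stratum 1": {"treated": (9, 10),   "control": (80, 100)},
    "stratum 2": {"treated": (30, 100), "control": (2, 10)},
}

# Conditional comparison: treated beats control inside every stratum.
for name, arms in strata.items():
    pt = arms["treated"][0] / arms["treated"][1]
    pc = arms["control"][0] / arms["control"][1]
    print(f"{name}: treated {pt:.2f} > control {pc:.2f}")

# Marginal comparison: pooling the strata reverses the ordering.
pooled = {
    arm: (
        sum(strata[s][arm][0] for s in strata),
        sum(strata[s][arm][1] for s in strata),
    )
    for arm in ("treated", "control")
}
pt = pooled["treated"][0] / pooled["treated"][1]   # 39/110 ≈ 0.35
pc = pooled["control"][0] / pooled["control"][1]   # 82/110 ≈ 0.75
print(f"pooled: treated {pt:.2f} < control {pc:.2f}")
```

The flip comes entirely from the unbalanced allocation: the treated arm is concentrated in the harder stratum, so it looks worse marginally despite winning conditionally.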
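The remaining bullets pair naturally. Below is a minimal numpy sketch on a simulated finite population (again, all numbers are made up for illustration): it builds potential outcomes with heterogeneous unit-level effects, assigns treatment completely at random, Monte Carlo approximates the randomization distribution of the difference in means under the sharp null, and computes Neyman's conservative variance estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite population of potential outcomes (simulated): each unit has its
# own effect tau_i = Y_i(1) - Y_i(0).
n, n1 = 100, 50
y0 = rng.normal(0.0, 1.0, size=n)
y1 = y0 + 0.5 + rng.normal(0.0, 0.5, size=n)  # heterogeneous effects
tau = (y1 - y0).mean()                        # finite-population ATE

# Completely randomized experiment: exactly n1 units treated.
z = np.zeros(n, dtype=int)
z[rng.choice(n, size=n1, replace=False)] = 1
y = np.where(z == 1, y1, y0)                  # observed outcomes

def diff_in_means(z, y):
    return y[z == 1].mean() - y[z == 0].mean()

tau_hat = diff_in_means(z, y)

# Fisher randomization test of the sharp null Y_i(1) = Y_i(0) for all i:
# under the sharp null every counterfactual is known, so the statistic can
# be recomputed over fresh assignments. Enumerating all C(n, n1)
# assignments would be exact; we Monte Carlo it and include the observed
# assignment for a finite-sample-valid p-value.
draws = 10_000
stats = np.empty(draws)
for b in range(draws):
    zb = np.zeros(n, dtype=int)
    zb[rng.choice(n, size=n1, replace=False)] = 1
    stats[b] = diff_in_means(zb, y)
p_value = (1 + np.sum(np.abs(stats) >= abs(tau_hat))) / (1 + draws)

# Neyman: conservative variance estimate and a normal-approximation CI.
s1 = y[z == 1].var(ddof=1)
s0 = y[z == 0].var(ddof=1)
se = np.sqrt(s1 / n1 + s0 / (n - n1))
ci = (tau_hat - 1.96 * se, tau_hat + 1.96 * se)

print(f"ATE {tau:.3f}  est {tau_hat:.3f}  p {p_value:.3f}  "
      f"95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```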

2 Notes

  • The pages in this section stay deliberately light on dependencies.
  • They are the template for the rest of the Ding translation track: one self-contained Quarto page per idea, using crabbymetrics when estimation is actually needed.