3 Prerequisite Map
This book is designed to point back into Wayward House Mathematics for the background it relies on, rather than silently assuming knowledge the reader may not have.
3.1 Core prerequisite path
| Needed here | Where to point back in the series |
|---|---|
| Functions and composition | vol-03/04-functions-relations.qmd |
| Exponents and logs | vol-04/01-exponents-logarithms.qmd |
| Sequences and accumulation | vol-04/04-sequences-series.qmd |
| Differential calculus and gradients (see the worked gradient after this table) | vol-05/02-differential-calculus.qmd |
| Integral/calculus intuition | vol-05/03-integral-calculus.qmd |
| Probability and distributions | vol-06/01-probability-theory.qmd, vol-06/02-distributions.qmd |
| Inference and regression | vol-06/03-statistical-inference.qmd, vol-06/04-data-analysis.qmd |
| Matrices and eigenvalues | vol-07/linear-algebra/01-matrices-systems.qmd, vol-07/linear-algebra/02-eigenvalues.qmd |
| Numerical methods | vol-07/numerics/01-numerical-methods.qmd |
| Optimisation | vol-07/optimisation/01-linear-programming.qmd, vol-08/07-nonlinear-optimisation.qmd |
| Signals and sampled systems | vol-08/02-discrete-systems-signals.qmd |
| Estimation and filtering | vol-08/05-estimation-inverse.qmd |
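As one concrete point of contact between these rows, the least-squares gradient below draws only on the differential-calculus and matrix chapters; the symbols $L$, $w$, $X$, and $y$ are illustrative notation for this book, not notation carried over from the series.

$$
L(w) = \tfrac{1}{2}\,\lVert Xw - y \rVert^2
\qquad\Longrightarrow\qquad
\nabla_w L(w) = X^\top (Xw - y).
$$

Setting this gradient to zero recovers the normal equations $X^\top X\,w = X^\top y$, which is where the linear-algebra and optimisation rows meet the regression material from the statistics volume.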
3.2 What the current series already covers well
- enough algebra and calculus for gradient-based learning
- enough linear algebra for PCA, embeddings, and matrix factorisation
- enough probability and inference for supervised learning foundations
- enough optimisation and numerics for training and fitting logic (see the sketch after this list)
- enough signals and estimation for time-series and state-space ML
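To make that claim concrete, here is a minimal sketch that fits a logistic-regression model by gradient descent, assuming NumPy and synthetic data; the model, step size, and iteration count are illustrative choices, not prescriptions from the series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary labels from a known linear score (illustrative data only).
n, d = 200, 3
X = rng.normal(size=(n, d))
true_w = np.array([1.5, -2.0, 0.5])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = (rng.random(n) < sigmoid(X @ true_w)).astype(float)

# Gradient of the average negative log-likelihood of a Bernoulli model:
# the probability, calculus, and linear-algebra volumes supply every piece.
def grad(w):
    p = sigmoid(X @ w)            # predicted probabilities
    return X.T @ (p - y) / n      # mean gradient of the cross-entropy loss

# Plain gradient descent; the optimisation and numerics chapters cover why a
# fixed small step converges for this convex loss.
w = np.zeros(d)
step = 0.5
for _ in range(2000):
    w -= step * grad(w)

print("estimated weights:", np.round(w, 2))
print("weights used to simulate:", true_w)
```

Nothing in this sketch goes beyond the chapters listed in the table above; what it does not explain, and what this book takes up, is why this particular loss is the right modelling choice.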
3.3 What this follow-on book must add explicitly
- loss functions as modelling choices
- classification and probabilistic prediction
- regularisation and generalisation
- information theory
- matrix calculus in ML form
- backpropagation and layered composition (sketched after this list)
- representation learning and modern AI objectives
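As a preview of the kind of material these chapters add, the following is a hedged sketch of backpropagation through a two-layer network; the layer sizes, tanh activation, squared-error loss, and synthetic target are placeholder choices made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny regression problem and a two-layer model: y_hat = tanh(X @ W1.T) @ w2.
n, d_in, d_hid = 64, 4, 8
X = rng.normal(size=(n, d_in))
y = np.sin(X @ rng.normal(size=d_in))        # placeholder target function

W1 = rng.normal(size=(d_hid, d_in)) * 0.5
w2 = rng.normal(size=d_hid) * 0.5

def forward(X, W1, w2):
    h = np.tanh(X @ W1.T)        # hidden representation, shape (n, d_hid)
    return h, h @ w2             # predictions, shape (n,)

# Backpropagation is the chain rule applied layer by layer, written in the
# matrix-calculus form this book develops.
def gradients(X, y, W1, w2):
    h, y_hat = forward(X, W1, w2)
    r = (y_hat - y) / len(y)          # gradient of (1/2) * mean squared error
    g_w2 = h.T @ r                    # gradient w.r.t. output weights
    g_h = np.outer(r, w2)             # gradient flowing back into the hidden layer
    g_pre = g_h * (1.0 - h ** 2)      # through tanh: d tanh(z)/dz = 1 - tanh(z)^2
    g_W1 = g_pre.T @ X                # gradient w.r.t. first-layer weights
    return g_W1, g_w2

# A few plain gradient-descent steps, just to show the pieces fit together.
step = 0.1
for _ in range(500):
    g_W1, g_w2 = gradients(X, y, W1, w2)
    W1 -= step * g_W1
    w2 -= step * g_w2

_, y_hat = forward(X, W1, w2)
print("mean squared error:", round(float(np.mean((y_hat - y) ** 2)), 4))
```

The loss choice, the layered composition, and the reuse of intermediate quantities during the backward pass are exactly the topics the list above commits this book to treating explicitly.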