Goal: estimate the cohort-intrinsic mortality drift due to age/frailty composition, not the intervention effect.
Pick a slope model class \(\mathcal{M}\). Common choice: multiplicative (log-linear) trend around \(t_0\): \[ m_g(t) = \exp\bigl(\alpha_g + \beta_g (t - t_0)\bigr). \]
Estimate \(\hat{\theta}_g = (\hat{\alpha}_g, \hat{\beta}_g)\) from the observed series \(\{D_g(t)\}_{t\in\mathcal{W}}\) on a baseline window \(\mathcal{W}\) chosen to minimize acute exposure effects (e.g., non-COVID period, or trough-to-trough weeks). Options include ordinary least squares on the log counts or a Poisson regression with log link; a sketch of the OLS variant follows.
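A minimal sketch of the OLS option in Python (the function name `fit_loglinear_slope` and its array interface are illustrative, not part of the KCOR specification; weekly counts must be positive for the log to be defined):

```python
import numpy as np

def fit_loglinear_slope(t, deaths, t0):
    """Fit m_g(t) = exp(alpha_g + beta_g*(t - t0)) by OLS on log counts.

    t      : week indices in the baseline window W
    deaths : observed weekly deaths D_g(t) on W (all entries > 0)
    t0     : enrollment / reference week
    Returns (alpha_hat, beta_hat) = theta_hat_g.
    """
    x = np.asarray(t, dtype=float) - t0
    y = np.log(np.asarray(deaths, dtype=float))
    beta, alpha = np.polyfit(x, y, 1)  # polyfit returns slope first, then intercept
    return alpha, beta
```

A Poisson GLM with log link would weight low-count weeks differently; either choice is consistent with the model class above.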
Define the neutralizer (slope removal factor) \[ N_g(t) = \frac{m_g(t_0)}{m_g(t)}. \]
For the log-linear model \(m_g(t)=\exp(\alpha_g+\beta_g(t-t_0))\), this reduces to \[ N_g(t) = e^{-\beta_g (t - t_0)}. \]
Apply the neutralizer multiplicatively: \[ \hat{D}_g(t) = N_g(t)\, D_g(t). \]
This removes cohort-intrinsic drift, leaving (ideally) exposure-related signal plus noise.
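Continuing the sketch, applying the neutralizer is one operation per week (the name `neutralize` is again illustrative):

```python
import numpy as np

def neutralize(t, deaths, t0, beta):
    """Slope-neutralized deaths: D_hat_g(t) = exp(-beta_g*(t - t0)) * D_g(t)."""
    t = np.asarray(t, dtype=float)
    n_factor = np.exp(-beta * (t - t0))  # neutralizer N_g(t)
    return n_factor * np.asarray(deaths, dtype=float)
```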
Compute cumulative slope-neutralized deaths (from enrollment): \[ \mathrm{CD}_g(t) = \sum_{\tau = 0}^{t} \hat{D}_g(\tau). \]
Choose a calibration set \(\mathcal{B}\subseteq [t_0, t_0+T]\) representing a baseline period (e.g., non-COVID). Pick a constant \(c>0\) such that \[ c\,\frac{\mathrm{CD}_v(t)}{\mathrm{CD}_u(t)} \approx 1 \quad \text{for } t \in \mathcal{B}. \]
Define the KCOR curve \[ \mathrm{KCOR}(t) = c\,\frac{\mathrm{CD}_v(t)}{\mathrm{CD}_u(t)}. \]
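A sketch of the cumulative, calibration, and ratio steps. Averaging the raw ratio over \(\mathcal{B}\) is one natural way to pick \(c\), since the construction only requires the calibrated ratio to be approximately 1 on the baseline set; the function name and interface are illustrative:

```python
import numpy as np

def kcor_curve(dhat_v, dhat_u, baseline_mask):
    """KCOR(t) = c * CD_v(t)/CD_u(t), with c chosen so the ratio averages 1 on B.

    dhat_v, dhat_u : slope-neutralized weekly deaths for the two cohorts,
                     aligned on the same weekly grid starting at t0
    baseline_mask  : boolean array marking the calibration set B
    """
    cd_v = np.cumsum(dhat_v)   # CD_v(t)
    cd_u = np.cumsum(dhat_u)   # CD_u(t)
    ratio = cd_v / cd_u
    c = 1.0 / np.mean(ratio[baseline_mask])
    return c * ratio
```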
When we say KCOR “makes no proportional hazards assumption,” we mean it does not require the statistical proportional hazards (PH) condition used in Cox models — i.e., the assumption that the hazard ratio between groups is constant over time. KCOR produces a time-evolving ratio \(R(t)\) without forcing it to be flat, so it works even when the hazard ratio changes with time.
However, there’s another phenomenon sometimes also called “non-proportional hazards” that is different: in the COVID context, the relative hazard of death from COVID across age groups is not proportional to their baseline (all-cause) mortality hazard. For example, COVID might kill elderly people at 5× their normal ACM hazard but younger people at 50× — so the scaling factor differs by age. This is not about change over time within a group; it’s about the shape of the hazard curve across risk strata.
KCOR sidesteps the first kind of NPH entirely, but the second kind can still distort the result if the intervention cohorts are age-skewed and the disease hazard isn't proportional to baseline mortality across ages. Slope neutralization can't fully correct for that, because it's not just a slow baseline drift; it's a structural difference in hazard scaling. The safest way to handle this is to select or stratify cohorts to minimize that skew.
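A toy calculation makes the distortion concrete. The 5× and 50× multipliers are taken from the example above; the baseline hazards and cohort mixes are hypothetical numbers chosen only for illustration:

```python
# Hypothetical weekly all-cause mortality hazards for two age strata.
h_old, h_young = 2e-3, 1e-4
# Age-dependent COVID multipliers from the example above: 5x vs 50x.
scale_old, scale_young = 5.0, 50.0

def covid_excess(frac_old, n=100_000):
    """Expected excess weekly deaths in a cohort with a given elderly fraction."""
    n_old, n_young = n * frac_old, n * (1 - frac_old)
    baseline = n_old * h_old + n_young * h_young
    during_covid = n_old * h_old * scale_old + n_young * h_young * scale_young
    return during_covid - baseline

# Cohorts skewed old (80%) vs. young (20%): their baseline deaths differ by
# a factor of ~3.4, but their excess deaths differ by only ~1.3, so no single
# multiplicative slope correction can align both regimes.
print(covid_excess(0.8) / covid_excess(0.2))   # ~1.34
```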
One of the strongest features of KCORv3 is that it is self-checking. If the slope neutralization has been done correctly, the net harm/benefit curve \[ R(t) = \frac{\mathrm{CD}_v(t)}{\mathrm{CD}_u(t)} \] will approach a constant value once the intervention’s short-term effects have worn off and only background mortality remains.
Formally, let \( T^{*} \) denote the time after which post-intervention hazards no longer differ from baseline hazards (distinct from the window length \(T\) above). For \( t \ge T^{*} \), \[ R(t) \approx \text{constant}. \] This constant reflects the residual ratio of cumulative deaths if the cohorts have been properly normalized for intrinsic slope differences.
Recall that cumulative deaths are computed as: \[ \mathrm{CD}_g(t) = \sum_{\tau = 0}^{t} \hat{D}_g(\tau), \] where \( \hat{D}_g(\tau) \) is the slope-neutralized deaths per week for group \( g \in \{v, u\} \). If slope removal is correct, then: \[ \frac{\mathrm{CD}_v(t)}{\mathrm{CD}_u(t)} \to \text{constant}, \quad t \to \infty. \]
This self-check property is a built-in validity test: if the neutralization is wrong, the curve cannot asymptote to a constant — it will drift or oscillate, signaling a methodological error or unadjusted bias.
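This check lends itself to a simple numerical test: fit a line to the tail of the curve and require a near-zero slope. A minimal sketch, where `tail_start` (the first week at or past \(T^{*}\)) and the tolerance `tol` are placeholders the analyst must choose:

```python
import numpy as np

def flatness_check(kcor, tail_start, tol=1e-3):
    """Necessary-condition check: fit a line to the tail of KCOR(t).

    |slope| near zero is consistent with correct slope neutralization.
    (Oscillation can hide in a flat fit; inspecting residuals or the
    plot itself remains the primary check.)
    """
    tail = np.asarray(kcor[tail_start:], dtype=float)
    weeks = np.arange(tail.size, dtype=float)
    slope = np.polyfit(weeks, tail, 1)[0]
    return abs(slope) < tol, slope
```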
Most standard epidemiological estimators (e.g., Cox PH, Poisson/logistic regression, Kaplan–Meier) provide results under modeling assumptions and only offer optional goodness-of-fit diagnostics. They do not contain a built-in, visual “pass/fail” criterion tied to their own preprocessing steps. KCOR differs in three ways:
In short, KCOR’s asymptotic flatness acts as a necessary condition for correct normalization. Standard tools produce estimates even when their assumptions are violated; KCOR visibly “fails” (non-flat \(R(t)\)) when its key assumption is violated—making errors easy to detect.