If we push aggressively to stop climate change, our predictions for sea-ice and AMOC get less certain. Here's why — and what to do about it.
The headline number
5.10 → 0.69
Cross-model disagreement on Arctic sea-ice shrinks roughly 7× (5.10 / 0.69 ≈ 7.4) as we move from aggressive mitigation (SSP1-1.9) to no mitigation (SSP5-8.5). Under SSP1-1.9 the climate models cannot agree on what sea-ice does; under SSP5-8.5 they agree closely.
The paradox in one table
| Pathway | Warming target | Sea-ice disagreement (σ_cross) | AMOC disagreement (σ_cross) |
|---|---|---|---|
| SSP1-1.9 | ~1.5°C | 5.10 · models disagree wildly | (no models in cache) |
| SSP1-2.6 | ~2°C | 2.30 | 2.53 |
| SSP2-4.5 | ~2.7°C | 1.21 | 1.73 |
| SSP3-7.0 | ~3.6°C | 0.62 | (no models in cache) |
| SSP5-8.5 | ~5°C | 0.69 · models agree closely | 1.55 |
The same effect appears, less strongly, for AMOC. Lower forcing → wider model disagreement. This is not a measurement error or a model bug — it is what the data say.
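The headline ratio follows directly from the table. A quick illustrative check (the σ_cross values are copied from the table above; the variable names are ours):

```python
# Cross-model sea-ice dispersion per SSP, copied from the table above.
sigma_cross = {
    "SSP1-1.9": 5.10,
    "SSP1-2.6": 2.30,
    "SSP2-4.5": 1.21,
    "SSP3-7.0": 0.62,
    "SSP5-8.5": 0.69,
}

# Ratio of each scenario's dispersion to the high-forcing baseline (SSP5-8.5).
baseline = sigma_cross["SSP5-8.5"]
ratios = {ssp: s / baseline for ssp, s in sigma_cross.items()}

print(f"SSP1-1.9 / SSP5-8.5 = {ratios['SSP1-1.9']:.1f}x")  # → 7.4x
```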
Why this happens
A climate model's output is the sum of two things:
- The forcing-driven signal — the response to greenhouse gases, aerosols, land-use change, etc.
- The internal variability — the model's own simulated weather, decadal oscillations, ENSO-like cycles, etc.
Under high forcing (SSP5-8.5), the forcing signal is enormous and dominates the internal variability. Different models respond similarly to the dominant signal — they agree.
Under low forcing (SSP1-1.9), the forcing signal is small. Internal variability is now comparable to or larger than the signal. Different models have different internal-variability "weather", and those internal patterns dominate the projection — they disagree.
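The signal-to-noise argument can be demonstrated with a toy ensemble. Each synthetic "model" below sees the same forced power-law trend but adds its own realisation of internal-variability noise, and we fit a rate exponent to each. The amplitudes, noise level, and function names are illustrative assumptions, not CMIP6 output:

```python
import numpy as np

rng = np.random.default_rng(42)

def beta_spread(amplitude, n_models=50, beta_true=1.0, noise_sd=0.5):
    """Each toy 'model' sees the same forced signal amplitude * t**beta_true
    plus its own internal-variability noise, then fits the exponent by
    log-log regression. Returns the cross-model standard deviation of the
    fitted exponents. All numbers are illustrative."""
    t = np.arange(1.0, 51.0)
    signal = amplitude * t ** beta_true
    fitted = []
    for _ in range(n_models):
        y = signal + noise_sd * rng.standard_normal(t.size)
        y = np.clip(y, 1e-6, None)               # keep the log well-defined
        slope, _ = np.polyfit(np.log(t), np.log(y), 1)
        fitted.append(slope)
    return float(np.std(fitted))

weak = beta_spread(amplitude=0.05)   # SSP1-1.9-like: noise dominates the fit
strong = beta_spread(amplitude=5.0)  # SSP5-8.5-like: signal dominates
```

Under weak forcing the fitted exponents scatter widely across the ensemble; under strong forcing they cluster tightly near the true value, mirroring the σ_cross pattern in the table.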
Climate Mitigation Atlas — \(\beta\) by observable × emissions pathway
Each cell's colour is one number — the rate exponent \(\beta\). Green = stoppable (returns to rest). Orange = super-rate. Red = locked-in.
Click any cell for the full reading: \(\beta\), cross-model dispersion, theorem anchor, and the source code on GitHub.
How to read this chart · what the SSPs mean
The axes
- Rows: 8 climate observables — what is changing.
- Columns: 5 emissions pathways from "very aggressive mitigation" (left) to "no mitigation" (right).
- Cell colour: the rate exponent \(\beta\). Below 1 is stoppable; above 1 is locked-in.
- Cell label: the actual \(\beta\) value (cross-model median).
The five emissions pathways (SSPs)
- SSP1-1.9 — ~1.5°C. Aggressive net-zero by mid-century. Paris lower bound.
- SSP1-2.6 — ~2°C. Moderate mitigation. Net-zero by ~2070.
- SSP2-4.5 — ~2.7°C. Current policies, middle-of-the-road.
- SSP3-7.0 — ~3.6°C. Regional rivalry, fragmented action.
- SSP5-8.5 — ~5°C. Continued fossil-fuel growth, no mitigation.
SSPs (Shared Socioeconomic Pathways) are the IPCC's standard set of emissions futures. The number after the dash is the radiative forcing in 2100 (W/m²).
What this means for policy
A policy that mitigates aggressively cannot rely on ensemble-mean projections of sea-ice and AMOC
Standard climate projections compute the mean across models per scenario. They do not measure cross-model dispersion per scenario. Under aggressive mitigation, the mean is informative about forcing response but the dispersion is enormous.
Practical implication: uncertainty bands on sea-ice and AMOC under SSP1-1.9 should be drawn roughly 7× wider than the same bands under SSP5-8.5. A risk assessment that uses one fixed uncertainty band across scenarios misallocates risk.
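A minimal sketch of what scenario-aware bands look like, assuming the σ_cross values from the table can be used directly as band half-widths (the names `SIGMA_CROSS` and `uncertainty_band` are ours):

```python
# Cross-model sea-ice dispersion per SSP, copied from the table above.
SIGMA_CROSS = {"SSP1-1.9": 5.10, "SSP1-2.6": 2.30, "SSP2-4.5": 1.21,
               "SSP3-7.0": 0.62, "SSP5-8.5": 0.69}

def uncertainty_band(ensemble_mean, ssp, k=1.0):
    """(low, high) around the ensemble mean, widened by that scenario's
    own cross-model dispersion rather than by one fixed width."""
    half_width = k * SIGMA_CROSS[ssp]
    return ensemble_mean - half_width, ensemble_mean + half_width

lo_mit, hi_mit = uncertainty_band(2.0, "SSP1-1.9")
lo_bau, hi_bau = uncertainty_band(2.0, "SSP5-8.5")
# The mitigation-scenario band comes out ~7.4x wider than the no-mitigation one.
```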
What this does NOT mean
It does NOT mean: don't mitigate
Aggressive mitigation is unambiguously best for the forcing-driven cascades (CO₂, sea-level, permafrost-onset timing, glaciers). The σ_cross paradox is a risk-assessment finding — uncertainty bands on a subset of observables (sea-ice, AMOC) need to be drawn correctly. It is not an argument against mitigation.
It does NOT mean: the models are wrong
The models are not wrong under SSP1-1.9 — they are responding to different things. Each model's internal variability is a legitimate physical response to a low-forcing scenario. The framework's diagnosis is that the candidate set of trajectories is not jointly admissible as shadows of one underlying signal under low forcing — formal reading: Theorem 10 fires.
Where this finding came from
This is a framework-derived finding from instance #16 (CMIP6 SSP5-8.5 cross-shadow tipping consensus) extended forward across all 5 SSPs in the scenario fan. It is anchored to Theorem 10 (joint-admissibility detector) of the framework's foundations: under low-forcing SSPs, the joint-admissibility score \(\mathfrak{A}(\{\text{models}\}; \text{SSP})\) exceeds the precision floor \(\tau_{T3}\). Source: scenario_fan_sea_ice.py, scenario_fan_amoc_v2.py.
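As described, the Theorem 10 detector reduces to a threshold test. The admissibility score \(\mathfrak{A}\) and the floor \(\tau_{T3}\) are opaque framework quantities, so the schematic below treats them as given inputs and shows only the decision logic; the function name is ours:

```python
def theorem10_fires(admissibility_score: float, tau_t3: float) -> bool:
    """Schematic of the Theorem 10 joint-admissibility detector as described
    in the text: the detector 'fires' (the candidate model set is NOT jointly
    admissible as shadows of one underlying signal) when the admissibility
    score exceeds the precision floor. Both inputs are framework quantities
    computed elsewhere."""
    return admissibility_score > tau_t3
```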