If we push aggressively to stop climate change, our predictions for sea-ice and AMOC get less certain. Here's why — and what to do about it.

The headline number

5.10 → 0.69

Cross-model disagreement on Arctic sea-ice shrinks roughly sevenfold as we go from aggressive mitigation (SSP1-1.9) to no mitigation (SSP5-8.5). Under SSP1-1.9 the climate models disagree widely about what sea-ice does; under SSP5-8.5 they agree closely.

The paradox in one table

Pathway  | Warming target | Sea-ice disagreement (σ_cross) | AMOC disagreement
SSP1-1.9 | ~1.5°C         | 5.10 · models disagree wildly  | (no models in cache)
SSP1-2.6 | ~2°C           | 2.30                           | 2.53
SSP2-4.5 | ~2.7°C         | 1.21                           | 1.73
SSP3-7.0 | ~3.6°C         | 0.62                           | (no models)
SSP5-8.5 | ~5°C           | 0.69 · models agree closely    | 1.55

The same effect appears, less strongly, for AMOC. Lower forcing → wider model disagreement. This is not a measurement error or a model bug — it is what the data say.

Why this happens

A climate model's output is the sum of two things:

  1. The forcing-driven signal — the response to greenhouse gases, aerosols, land-use change, etc.
  2. The internal variability — the model's own simulated weather, decadal oscillations, ENSO-like cycles, etc.

Under high forcing (SSP5-8.5), the forcing signal is enormous and dominates the internal variability. Different models respond similarly to the dominant signal — they agree.

Under low forcing (SSP1-1.9), the forcing signal is small. Internal variability is now comparable to or larger than the signal. Different models have different internal-variability "weather", and those internal patterns dominate the projection — they disagree.
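This mechanism can be reproduced in a toy ensemble. The sketch below is illustrative only — the forcing values, per-model sensitivity spread, and variability scale are invented, not CMIP6 numbers — but it shows why the absolute cross-model spread of a bounded quantity like sea-ice area compresses under strong forcing (every model ends up near the ice-free floor), while under weak forcing the models' differing internal variability keeps them apart:

```python
import numpy as np

def sea_ice_spread(forcing, n_models=30, baseline=10.0, seed=0):
    """Toy ensemble: each 'model' has its own climate sensitivity and its
    own internal-variability realization; sea-ice area is floored at 0."""
    rng = np.random.default_rng(seed)
    sensitivity = rng.normal(1.0, 0.2, size=n_models)  # per-model forced response
    variability = rng.normal(0.0, 1.5, size=n_models)  # per-model internal "weather"
    ice = np.clip(baseline - sensitivity * forcing + variability, 0.0, None)
    return ice.std()  # cross-model disagreement

# Weak forcing: internal variability dominates -> models spread out.
# Strong forcing: every model sits near the ice-free floor -> models converge.
for label, forcing in [("weak forcing  ", 2.0), ("strong forcing", 15.0)]:
    print(f"{label}: cross-model spread = {sea_ice_spread(forcing):.2f}")
```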

What this means for policy

A policy that mitigates aggressively cannot rely on ensemble-mean projections of sea-ice and AMOC

Standard climate projections report the ensemble mean per scenario, but they do not report the cross-model dispersion alongside it. Under aggressive mitigation, the mean is still informative about the forcing response, but the dispersion around it is enormous.

Practical implication: uncertainty bands on sea-ice and AMOC under SSP1-1.9 should be drawn roughly 7× wider than the same bands under SSP5-8.5. A risk assessment that applies one fixed, scenario-independent band misallocates risk.
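A minimal sketch of scenario-dependent bands, using the sea-ice σ_cross values from the table above (the ±1 σ_cross band and the dictionary layout are illustrative choices, not the framework's API):

```python
# Sea-ice sigma_cross per scenario, taken from the table above.
sigma_cross = {
    "SSP1-1.9": 5.10,
    "SSP1-2.6": 2.30,
    "SSP2-4.5": 1.21,
    "SSP3-7.0": 0.62,
    "SSP5-8.5": 0.69,
}

def uncertainty_band(ensemble_mean, scenario):
    """Scenario-dependent +/-1 sigma_cross band around the ensemble mean."""
    s = sigma_cross[scenario]
    return (ensemble_mean - s, ensemble_mean + s)

ratio = sigma_cross["SSP1-1.9"] / sigma_cross["SSP5-8.5"]
print(f"SSP1-1.9 band is {ratio:.1f}x wider than SSP5-8.5")  # ~7.4x
```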

What this does NOT mean

It does NOT mean: don't mitigate

Aggressive mitigation is unambiguously best for the forcing-driven cascades (CO₂, sea-level, permafrost-onset timing, glaciers). The σ_cross paradox is a risk-assessment finding — uncertainty bands on a subset of observables (sea-ice, AMOC) need to be drawn correctly. It is not an argument against mitigation.

It does NOT mean: the models are wrong

The models are not wrong under SSP1-1.9 — they are responding to different things. Each model's internal variability is a legitimate physical response to a low-forcing scenario. The framework's diagnosis is that the candidate set of trajectories is not jointly admissible as shadows of one underlying signal under low forcing — formal reading: Theorem 10 fires.

Where this finding came from

This is a framework-derived finding from instance #16 (CMIP6 SSP5-8.5 cross-shadow tipping consensus) extended forward across all 5 SSPs in the scenario fan. It is anchored to Theorem 10 (joint-admissibility detector) of the framework's foundations: under low-forcing SSPs, the joint-admissibility score \(\mathfrak{A}(\{\text{models}\}; \text{SSP})\) exceeds the precision floor \(\tau_{T3}\). Source: scenario_fan_sea_ice.py, scenario_fan_amoc_v2.py.
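For readers who want the shape of the check: the framework's actual \(\mathfrak{A}\) and \(\tau_{T3}\) are defined in its foundations, but the decision logic can be sketched with a stand-in score. Everything below — the normalization, the threshold value, the function names — is hypothetical, not the framework's implementation:

```python
import numpy as np

TAU_T3 = 1.0  # hypothetical precision floor; the framework defines its own

def joint_admissibility_score(trajectories):
    """Hypothetical stand-in for A({models}; SSP): cross-model dispersion
    normalized by the size of the shared (ensemble-mean) trend. A large
    score means the trajectories cannot plausibly all be shadows of one
    underlying forced signal."""
    traj = np.asarray(trajectories)                     # shape (n_models, n_years)
    spread = traj.std(axis=0).mean()                    # mean cross-model dispersion
    signal = np.abs(np.diff(traj.mean(axis=0))).sum()   # magnitude of shared trend
    return spread / max(signal, 1e-12)

def theorem_10_fires(trajectories, tau=TAU_T3):
    """True when the candidate set is NOT jointly admissible."""
    return joint_admissibility_score(trajectories) > tau
```

In this sketch, a strongly forced ensemble (large shared trend, small spread) scores low and passes; a weakly forced ensemble whose spread comes from divergent internal variability scores high, and the detector fires.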