What This Is
This Recovery Library is a proof-of-concept collection of technical recovery guidance for New Zealand, designed for a scenario in which a major nuclear exchange destroys Northern Hemisphere civilization while leaving NZ physically unscathed but cut off from global trade.
The complete library — all 171 documents across 15 categories — was produced in approximately one week of focused human direction, with AI (Claude, by Anthropic) handling research, drafting, cross-referencing, and site construction. It demonstrates that current AI tools, guided by a single human providing strategic direction and editorial oversight, can generate a large body of plausible, well-researched, internally consistent catastrophe recovery documentation at a speed and cost that would have been impossible through conventional means.
These documents have not been peer-reviewed and contain errors. They are published as a demonstration of capability, not as authoritative guidance. The value of this project lies in the methodology and the demonstration that the process works, not in these specific documents. The scenario modeled here is one of many possible catastrophe scenarios, the recovery model is heuristic rather than rigorously derived, and better modeling would likely require generating an entirely new document set rather than revising this one.

Domain experts interested in this work should direct their attention to the style guide and methodology rather than to individual documents, though engaging with individual documents is a useful way to surface problems in the underlying scenario modeling, library structure, and editorial standards. We are not yet at the stage of fine-tuning details. The larger questions need to be addressed first: are the assumptions sound, is the recovery phasing realistic, does the library contain the right documents? Given the scale of those issues, a full regeneration will probably make sense once the key modeling and structural questions are settled. After that, the right approach is iterative: fix specific issues and keep refining the editorial standards rather than regenerating from scratch each time, since full regeneration tends to introduce as many new problems as it solves.
The Scenario
NATO-Russia nuclear exchange. Approximately 4,400 warheads detonated. Northern Hemisphere heavily damaged. Global nuclear winter produces estimated surface temperature drops of 5–15°C — with 20–30°C cooling across Northern Hemisphere agricultural regions and roughly 5°C in New Zealand — and significantly reduced sunlight lasting 5–10 years. (The severity of nuclear winter is genuinely uncertain — published models range from moderate cooling to near-catastrophic. This library models recovery around the moderate-to-severe range; a worst-case scenario would require a fundamentally different analysis.)
New Zealand is physically unscathed — no warheads detonate on or near NZ. The electrical grid (85%+ renewable) continues operating. Roads, ports, and domestic telecommunications remain intact. Government and institutions continue to function.
The problem is imports. NZ depends almost entirely on imported fuel, medicines, spare parts, electronics, tires, chemicals, clothing, and most manufactured goods. The country manufactures very little beyond basic food processing, timber products, some steel (one steelworks at Glenbrook), and aluminum (Tiwai Point smelter). When global trade stops, NZ must learn to maintain, substitute, and eventually manufacture what it previously imported — or accept permanent capability loss as existing stocks deplete.
The central risk is not any single shortage but a compounding degradation spiral: infrastructure fails faster than NZ can learn to maintain or replace it, each failure triggers others, and the society slides from a modern economy with temporary problems into a permanently diminished one. A transformer fails; a dairy plant loses power; milk spoils; the community that depended on that plant disperses; the electrician who lived there is unavailable for the next failure. This kind of cascade is the realistic danger — not dramatic collapse, but a slow ratchet downward that becomes harder to reverse the longer it continues.
Key Assumptions and Uncertainties
Food production. NZ currently produces enough food for roughly 40 million people under normal conditions — primarily grass-fed pastoral agriculture (dairy, beef, sheep). Under nuclear winter conditions (approximately 5°C cooling, significantly reduced sunlight), grass growth would drop substantially. The magnitude of this drop is uncertain — estimates depend on modeling assumptions about cloud cover, precipitation changes, and UV effects on pasture. NZ would almost certainly still feed its own population, but the comfortable “8x surplus” figure does not apply under nuclear winter. The actual surplus available for export or to absorb refugees is unknown without more detailed modeling.
Institutional continuity. This library assumes NZ’s government and institutions continue to function. This is plausible — NZ has strong institutions, high social trust, and no direct physical damage from the conflict — but it is not guaranteed. The first weeks would involve severe economic shock (what happens to money when imports stop permanently?), potential panic buying and hoarding, and massive psychological trauma. Institutional continuity is the goal, not the starting condition, and several documents in this library are specifically designed to help achieve it.
Electrical grid. NZ’s grid is predominantly renewable (hydro, geothermal, wind) and does not depend on imported fuel for generation. However, the grid is a complex system requiring constant maintenance, and key components — particularly transformers, control electronics, and switchgear — have finite operational lives. How fast the grid degrades without access to imported replacement parts is uncertain. Transformers can last 30–50 years with proper maintenance, but some are already old, and control electronics are more fragile. The grid’s degradation rate is one of the most important unknowns in the entire recovery scenario.
Other regions. NZ does not recover alone. Australia is contactable immediately via HF radio and reachable in one to two weeks by sail. Brazil, Argentina, South Africa, and Southeast Asia represent additional nodes of surviving capability with larger populations and, in some cases, larger industrial bases. What these regions contribute through trade — and how quickly they stabilize — significantly affects NZ’s trajectory but is difficult to predict.
Recovery Phases
Phases overlap and boundaries are approximate. The timelines are informed by historical industrialization rates, but they involve substantial uncertainty — particularly in later phases where small differences in early recovery compound over decades.
Phase 1 — Shock and Transition (Months 0–12)
Most modern systems still function but nothing is being resupplied. Government’s most critical task is securing non-renewable stocks through a combination of requisition, controlled distribution, and rationing. If this is done well, NZ buys years of runway. If done poorly or too slowly, the depletion timeline compresses dramatically.
Phase 2 — Peak Hardship (Years 1–3)
Nuclear winter near peak severity. Most imported consumables exhausted or tightly rationed. The gap between what NZ needs and what it can make is at its widest. Local manufacturing of basic goods just beginning.
Phase 3 — Early Self-Sufficiency (Years 3–7)
Nuclear winter easing. Local production of basic goods coming online. Food system adapted. Maritime connections with Australia and Pacific developing.
Phase 4 — Industrial Development (Years 7–15)
Agriculture approaching normal conditions. Industrial base expanding. Pre-war electronics failing at increasing rates.
Phase 5 — Mature Self-Sufficiency (Years 15–30)
NZ and other surviving nations operating as a sail-linked trading network. Local manufacturing covers most basic and intermediate goods. Pre-war electronics largely exhausted.
Phase 6 — Regional Industrial Civilization (Years 30–60)
Industrial chemistry maturing in leading regions. Possible powered shipping. Rail networks expanded. The gap between NZ’s capability and mid-20th-century standards narrows, though the comparison is imprecise because NZ’s situation differs in fundamental ways (different energy base, different materials, much smaller population than the mid-20th-century world).
Phase 7 — Computing Re-emergence (Years 50–100+)
Locally manufactured computing in one or more regions — possibly NZ, possibly Australia or Brazil first (depending on who builds semiconductor processing capability, which depends on industrial prerequisites that are hard to predict). Timelines here are highly speculative.
Inter-Regional Trade
NZ’s recovery is affected by what other surviving regions achieve. Under sail, trade is constrained to high-value, low-volume goods — but these are exactly what matters for industrial development.
Australia (1–2 weeks by sail) — Probably NZ’s most important partner. Has mineral resources NZ lacks: copper, lithium, rare earths, bauxite/alumina, tungsten, tin, nickel, chromium, manganese. Also has surface-accessible coal and a larger engineering workforce. Australia’s challenge is food and water — NZ’s food surplus (even if reduced under nuclear winter) could be a valuable export.
Brazil and Argentina (roughly two months by sail) — Larger pre-war populations and industrial bases. Either could potentially rebuild industrial capability faster than NZ simply due to scale. Trade is infrequent due to distance but potentially high-value.
Pacific Islands (days to weeks by sail) — Limited industrial capability but important for tropical agriculture, fisheries, and as waypoints.
Southern Africa (long passage) — Mining and industrial capability in South Africa. Infrequent trade due to distance.
What NZ trades depends heavily on what it can produce and what partners need. The specific trade patterns will be shaped by conditions that are hard to predict from here.
Methodology
Production process
Each document was produced through a structured process:
Research. Web search for NZ-specific data: government statistics, industry reports, academic papers, infrastructure data. Domain-specific technical research: engineering references, historical precedents, manufacturing processes.
Drafting. Following a detailed style guide that emphasizes honest uncertainty, manufacturing realism, NZ-specific data, footnoted claims, and calibrated urgency.
Self-critique and revision. Review against editorial standards, checking for unfounded claims, missing footnotes, inconsistency with other documents, and the tendency to hand-wave away manufacturing complexity.
Documents were produced in batches — typically 8–16 in parallel. After initial drafting, the corpus went through multiple rounds of systematic review and revision — not just spot-checking, but repeated automated audit passes covering every document. The process was iterative: each audit pass identified a category of problem, generated fixes, and often revealed the need for further passes.
Review and audit process
After the initial drafting phase, the library went through an extensive series of audit and revision passes. These were not human line-by-line reviews — they were AI-conducted systematic audits directed by the human author, who identified categories of problems and directed corpus-wide scans to find and fix them.
Practical approach. A 171-document corpus exceeds any single AI context window. Early attempts to audit in large batches consistently failed — agents ran out of context before producing findings, and results were lost when conversations hit their context limits. The solution was to split the corpus into batches of roughly six documents, assign each batch to an independent agent that read the style guide and catalog in full, audited each document semantically (not just pattern matching), and wrote its findings to a persistent file on disk. Agents ran in parallel — up to thirty simultaneously. Because findings were persisted to files, nothing was lost when conversations ran out of context; results could be read back and compiled in a fresh session. This approach was developed through trial and error and became the standard method for all subsequent corpus-wide passes.
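The batching pattern described above can be sketched in ordinary code. This is an illustration only: the file names, directory layout, and audit logic below are invented for the example, and the real audits were semantic reviews performed by AI agents, not scripted checks. The sketch shows the two load-bearing ideas: small fixed-size batches processed in parallel, and findings persisted to disk so nothing is lost when any single worker's context runs out.

```python
# Sketch of the parallel batch-audit pattern with persisted findings.
# Batch size, file names, and the audit stub are hypothetical.
import json
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

BATCH_SIZE = 6  # small enough for one agent to hold a batch in context

def batches(docs, size=BATCH_SIZE):
    """Split the document list into fixed-size batches."""
    return [docs[i:i + size] for i in range(0, len(docs), size)]

def audit_batch(batch_id, docs, out_dir):
    """Stand-in for one agent's audit pass: review each document in the
    batch and persist findings to disk, so results survive even if the
    agent's conversation hits its context limit."""
    findings = [{"doc": d, "issues": []} for d in docs]  # real pass: semantic review
    out = Path(out_dir) / f"findings_{batch_id:03d}.json"
    out.write_text(json.dumps(findings))
    return out

def run_audit(docs, out_dir, max_workers=30):
    """Run batch audits in parallel, then compile the persisted findings.
    Compilation reads the files back, so it can happen in a fresh session."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(audit_batch, i, b, out_dir)
                   for i, b in enumerate(batches(docs))]
        paths = [f.result() for f in futures]
    return [item for p in sorted(paths) for item in json.loads(p.read_text())]

docs = [f"doc_{n:03d}" for n in range(1, 172)]  # 171 documents
with tempfile.TemporaryDirectory() as tmp:
    compiled = run_audit(docs, tmp)
print(len(compiled))  # 171 findings records, one per document
```

Because every agent writes its findings to its own file, a failed or truncated worker costs only one batch, and the compilation step is decoupled from the audit step entirely.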
The following gives a sense of the scope:
Quality spot-checks and sloppy-claim scans. Early batches of documents were sampled for quality. These spot-checks identified recurring patterns — overly confident claims, missing footnotes, hand-waving about manufacturing complexity — that informed both style guide revisions and subsequent audit passes.
Style guide compliance audits. The entire corpus was audited against the style guide in batches (documents 001–032, 033–054, 055–090, 091–137, 138–174), with non-compliant documents flagged and fixed. This was repeated: first as batch audits, then as individual document-level reviews (one review task per document, covering all 171), then as individual fix tasks for each document that needed changes.
Cross-document consistency checks. Nine separate domain-chain audits verified that documents sharing common assumptions did not contradict each other: fuel, medical, manufacturing, energy/grid, agriculture/food, maritime/trade, governance/institutional, consumables, and baseline scenario assumptions. These caught cases where, for example, two documents assumed different fuel stockpile figures or contradictory timelines for the same industrial process.
Executive summary audits. All executive summaries were audited in four batches, then fixed in five batches. A second pass audited consistency between each executive summary and its document body (four audit batches, six fix batches) — catching cases where revisions to the document body had not been reflected in the summary, or where the summary made claims the body did not support.
Urgency calibration. Five batches of fixes across all documents, correcting actions that were tagged as more urgent than their actual depletion rates or production timelines warranted — a recurring AI tendency to treat everything as equally critical.
Cross-reference integrity. A full audit of all inter-document references, followed by a targeted fix pass for broken or incorrect cross-references. Separate scans checked for title mismatches between documents, the catalog, and the master index.
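The mechanical core of such a cross-reference scan can be sketched as follows. The document-ID scheme, reference format, and corpus contents here are invented for illustration; the library's actual reference conventions may differ.

```python
# Minimal sketch of a cross-reference integrity scan.
# IDs like "DOC-042" and the sample corpus are hypothetical.
import re

catalog = {"DOC-001", "DOC-002", "DOC-042"}  # all document IDs that exist
corpus = {
    "DOC-001": "Fuel allocation depends on stockpiles; see DOC-042.",
    "DOC-002": "Grid maintenance procedures; see DOC-099 for transformers.",
    "DOC-042": "Stockpile figures, cross-checked against DOC-001.",
}

REF = re.compile(r"DOC-\d{3}")

def broken_refs(corpus, catalog):
    """Return (source_doc, missing_target) pairs for every reference
    pointing at a document ID that is not in the catalog."""
    problems = []
    for doc_id, text in corpus.items():
        for target in REF.findall(text):
            if target not in catalog:
                problems.append((doc_id, target))
    return sorted(problems)

print(broken_refs(corpus, catalog))  # [('DOC-002', 'DOC-099')]
```

A scan like this catches dangling references; catching title mismatches between documents, the catalog, and the master index additionally requires comparing the referenced titles, not just the IDs.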
Pattern-specific scans. Targeted scans across the entire corpus for specific failure modes: obvious/generic advice that added no value, weak executive summary framing, over-prioritised actions, and impractical recommendations. Each scan generated a list of affected documents, which were then fixed individually.
Defensive framing cleanup. Multiple rounds across all documents (including the working paper and catalog) removing hedging language, excessive caveating, and apologetic framing that undermined the documents’ authority. This required two full passes — the first round caught the obvious cases; the second caught subtler instances that only became visible after the first round was complete.
Mātauranga Māori integration. Eight cleanup batches standardised the treatment of indigenous knowledge across the corpus, followed by section-reading passes and eight integration batches that rewrote sections where the original treatment was superficial or formulaic.
Full audit-and-fix passes. Every document (001–172) received an individual audit-and-fix pass — a comprehensive review checking style compliance, factual claims, internal consistency, cross-references, and editorial quality. Approximately 40 documents required a second pass after the first round of fixes.
Verification. Documents where multiple edit passes had created potential conflicts were individually verified for structural and logical integrity.
Most documents were touched by 8–12 separate review passes after initial drafting. No document received a thorough human line-by-line review — but every document was subjected to multiple systematic AI audits, each targeting a specific category of problem. The style guide itself went through several revisions during this process as recurring issues were identified and codified into editorial standards, which then triggered further audit passes to bring earlier documents into compliance.
Editorial principles
The style guide enforces several principles developed through iterative review:
Urgency calibration. Not everything is equally urgent. Actions are tagged with realistic timelines based on actual depletion rates and production lead times.
Baseline scenario consistency. All documents share common assumptions about what continues to function (grid, roads, institutions) and what doesn’t (imports, global trade).
Manufacturing realism. When a document says “build X from locally available materials,” it must trace the full dependency chain. If building X requires Y, which requires Z, all three steps must be acknowledged. The word “just” (as in “just build a furnace”) is a red flag that the document is skipping essential complexity.
Honest uncertainty. Ranges rather than point estimates. Explicit acknowledgment of what is unknown. Distinguishing facts from estimates from assumptions.
NZ-specific data. Real statistics, real institutions, real geography. Not generic advice with “New Zealand” inserted.
Footnoted claims. Quantitative claims and factual assertions cite sources, with URLs where available.
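The dependency-chain rule under manufacturing realism can be illustrated with a toy traversal. The items and their dependencies below are invented for the example; the point is only that every transitive precursor must surface, so that no step can be silently skipped with "just".

```python
# Toy illustration of tracing a full manufacturing dependency chain.
# The chain (furnace -> firebrick -> fireclay, etc.) is hypothetical.
def full_chain(item, deps, seen=None):
    """Expand an item into every precursor it transitively requires."""
    seen = set() if seen is None else seen
    for dep in deps.get(item, []):
        if dep not in seen:
            seen.add(dep)
            full_chain(dep, deps, seen)
    return sorted(seen)

deps = {
    "furnace": ["firebrick", "fuel"],
    "firebrick": ["fireclay", "kiln"],
}
print(full_chain("furnace", deps))
# ['firebrick', 'fireclay', 'fuel', 'kiln']
# A document saying "just build a furnace" must account for all four.
```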
AI and human roles
The documents were generated by AI (Claude, by Anthropic), prompted and directed by a single human author. The human provided:
- Strategic direction: which documents to produce, what topics to cover, what the library’s overall structure should be
- Editorial standards: developing the style guide through iterative review, establishing scenario parameters and quality criteria
- Problem identification: spot-checking documents for systematic errors, identifying categories of failure (urgency inflation, defensive framing, superficial indigenous knowledge treatment, impractical recommendations), and directing corpus-wide audits to find and fix each category
- Audit design: determining what to scan for, how to structure multi-pass reviews, when a category of problem had been adequately addressed, and when further passes were needed
- Quality judgment: determining whether the corpus met the standard of “a strong first draft a domain expert would take seriously,” and deciding when to stop iterating
The AI provided:
- Research synthesis: finding and combining information from multiple sources
- Technical writing: producing structured, detailed documents following the style guide
- Systematic auditing: conducting corpus-wide scans for specific categories of problems, flagging non-compliant documents, and applying fixes — the bulk of the quality assurance work
- Cross-referencing: maintaining consistency across a large document set
- Domain coverage: generating plausible technical content across fields ranging from pastoral agriculture to machine shop operations to mental health
- Site construction: building the HTML site, generating computed reference tables, and managing the catalog
Library Structure
Categories
The full catalog contains 171 documents across 15 categories:
- Government Response — Emergency governance, rationing, communication, civil defence
- Precomputed Reference — Navigation tables, engineering data, mathematical references for a world without computers
- Printing and Knowledge Distribution — National printing plan, manual printing methods, paper production
- Consumable Management — Managing NZ’s finite stocks of tires, lubricants, batteries, paper, glass, textiles
- Fuel Transition and Transport — Fuel allocation, vehicle electrification, wood gasification, rail, cycling
- Manufacturing — Steel, wire, machine shops, foundry work, chemical production
- Energy and Infrastructure — Hydroelectric maintenance, grid management, wind power, biogas
- Agriculture and Food — Pastoral farming, cropping, seed preservation, food preservation, fertilizer, fishing
- Forestry and Natural Materials — Timber management, charcoal production, harakeke fiber
- Maritime — Ship design, navigation, port operations, inter-regional trade
- Medical and Social — Pharmaceutical rationing, mental health, public health, dental care
- Communications — HF radio, postal service, print media
- Governance — Emergency legal framework, constitutional continuity, local government
- Education — School curriculum, trade training, university priorities, heritage skills
- Computing and Electronics — Computer construction, stockpile management, knowledge preservation
Feasibility ratings
Each document carries a feasibility rating for its core recommendations:
| Rating | Meaning |
|---|---|
| [A] | Established. Uses existing NZ capability or well-proven methods with known NZ materials. |
| [B] | Feasible. Requires developing new capability but the materials, energy, and knowledge base exist in NZ. Significant effort and time required. |
| [C] | Difficult. Dependent on precursor industries or skills that do not yet exist in NZ and must be built first. |
| [D] | Long-term. Requires decades of industrial development. Documented as a roadmap. |
These ratings reflect the author’s assessment and should be reviewed by NZ domain specialists. Some ratings may be too optimistic or too pessimistic.
Limitations
This library has significant limitations that readers should understand:
No peer review. No domain expert has reviewed any document. Errors — including potentially serious ones — are present.
AI-generated content. AI can synthesize information plausibly but can also generate confident-sounding claims that are wrong. The footnotes and sources help, but they do not guarantee accuracy.
Scenario dependence. The documents assume a single specific scenario. Different catastrophe parameters (different cooling, different duration, different geopolitical outcomes) would require substantially different guidance. A production-quality version of this project would likely need to generate separate document sets for different scenario classes rather than revise this one.
Heuristic recovery model. The recovery phases, timelines, and feasibility assessments are heuristic — informed estimates rather than outputs of rigorous modeling. Better catastrophe modeling and domain-specific simulation would likely change many of the assumptions these documents are built on.
NZ-specific. The guidance is written specifically for New Zealand. While some content is broadly applicable, much depends on NZ’s particular geography, climate, institutions, and industrial base.
Proof of concept, not operational guidance. These documents are published to demonstrate that this kind of work is possible and valuable, not as ready-to-use emergency plans. The durable contributions of this project are the methodology, the style guide, and the demonstration that a single person directing AI tools can produce documentation at this scale and quality — not the specific documents themselves.
First run. This is the first complete production run of the library. The editorial standards, scenario modeling, and production process were developed iteratively during this run. A second run — with improved standards, better scenario modeling, and the benefit of domain expert review of the first run — would likely produce substantially better documents. The infrastructure for that second run now exists.
Recoverable Foundation
This library is a project of Recoverable Foundation, which works to reduce suffering and preserve the foundations of advanced civilization in the aftermath of large-scale catastrophe.
The foundation’s working paper, Recoverable: What Civilizational Recovery from Nuclear War Might Actually Look Like, is an extended thought experiment exploring what post-catastrophe recovery would actually require. A central idea is the construction of a pre-positioned AI inference facility in New Zealand — powered by the country’s renewable grid, commercially viable in peacetime, and capable of providing expert-level guidance across every domain simultaneously in a crisis. This library demonstrates what such a facility could produce; the working paper argues that a functioning facility operating post-event, answering questions in real time and adapting guidance to conditions as they develop, would be vastly more valuable still.
License
All Recovery Library documents are licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).
You may share and redistribute the documents in any medium or format, with attribution, but you may not distribute modified versions.
Disclaimer requirement: Each document carries a disclaimer stating that it was generated through human-AI collaboration, has not been peer-reviewed, and contains errors. This disclaimer is an integral part of the document and must be included in any reproduction or redistribution, along with attribution to Recoverable Foundation. The disclaimer exists to protect readers from treating unreviewed material as authoritative guidance — removing it would be irresponsible regardless of licensing terms.