Recoverable

What Civilizational Recovery from Nuclear War Might Actually Look Like

Working Paper — Draft 0.82 — February 2026

Recoverable Foundation

A Note on What This Is

This is a thought experiment. Most writing about nuclear conflict focuses on deterrence, or on the immediate horror of the exchange, and then stops—as if the story would more or less be over. We think that assumption is wrong, and that what comes after matters enormously—not only because billions of lives hang on what happens next, but because human civilization may be the only thing like it in the universe, and allowing it to disappear through failures of preparation would be a loss beyond measure. Where this paper cites specific research, we’ve tried to ground claims in peer-reviewed work and cite properly. Where we’re speculating, we say so. It is an extended what-if, informed by the best evidence we could find, intended to sketch a plausible recovery trajectory and start a conversation about how to think about preparation—what questions to ask, what frameworks might apply, and where the highest-leverage interventions might lie.

Some estimates here are necessarily rough—predicting how hundreds of millions of survivors would reorganize is not amenable to precise modeling. But some of the specific, tractable questions (can you run a tractor off the electrical grid? how many draft horses does New Zealand actually have? what does seaweed production look like under nuclear winter?) turn out to have surprisingly concrete answers. And some of the most promising interventions—particularly the idea of pre-positioned AI systems that could serve as expert advisors across every technical domain during recovery—are both feasible with current technology and commercially viable in the meantime.

Two notes on scope. First, we use nuclear war as our reference scenario because it has the most developed scientific literature, but much of the framework here—food resilience, energy transition, knowledge preservation—is relevant to other catastrophic disruptions: engineered pandemics, AI-related catastrophes, or cascading climate-driven failures that propagate across interconnected systems. Second, while the analysis aims to be general, we periodically zoom in on New Zealand for concrete illustration. NZ is geographically isolated, food-sufficient for roughly eight times its population, generating over 85% of its electricity from renewables, and institutionally stable—and its longstanding anti-nuclear stance, lack of strategic military significance, and absence of high-value industrial or economic targets make it very unlikely to feature in any nuclear exchange. Where NZ-specific detail appears, the underlying principles typically apply more broadly, and much more work is needed across many regions and scenarios.

Part I: The Catastrophe and Its Immediate Aftermath

1.1 The Nuclear Exchange

The scenario we examine assumes a full-scale nuclear exchange between NATO and Russia, involving the detonation of approximately 4,400 warheads—a fraction of current arsenals. Modern climate models, particularly those developed by Robock, Toon, and colleagues at Rutgers and the University of Colorado, indicate this would inject approximately 150 Tg (150 million tonnes) of soot into the stratosphere from the firestorms ignited in targeted cities.12 This soot would spread globally within weeks, blocking 70–80% of incoming solar radiation and triggering surface temperature drops of 5–15°C, with the most severe cooling concentrated in the Northern Hemisphere.3

The nuclear winter would persist for 5–10 years in its acute phase, with partial agricultural recovery beginning around year 3–5 and full climatic normalization not occurring for 20–30 years.4 During the acute phase, global crop yields would drop catastrophically. Penn State researchers modeling corn production under the 150 Tg scenario found an 80% decline in yields globally, with an additional 7% reduction from increased UV-B radiation as the ozone layer degrades.5

1.2 The Famine

The 2022 study by Xia, Robock, et al. published in Nature Food remains the most comprehensive analysis of post-nuclear-war food security. Under the 150 Tg scenario (a US–Russia war), the study estimates more than 5 billion deaths from famine alone within two years of the exchange—a figure that does not include the hundreds of millions killed directly by blast, fire, and radiation, and that was calculated from a 2010 population baseline of 6.7 billion rather than today’s 8 billion.6 The study’s detailed country-by-country modeling found that under the most severe scenario, fewer than 25% of the population in most nations would survive to the end of year two, leaving more than 75% of the planet starving. Even adaptation measures such as redirecting livestock feed to human consumption and eliminating food waste would have only marginal impact under severe scenarios.7

Critically, these deaths would be overwhelmingly concentrated in the Northern Hemisphere and in nations dependent on food imports. The Southern Hemisphere, buffered by the thermal mass of surrounding oceans and by distance from the Northern Hemisphere soot injection zone, would experience significantly less severe cooling—for example, modelling suggests a decline of roughly 5°C for New Zealand, compared to 20–30°C across Northern Hemisphere agricultural regions—severe enough to require significant adaptation, but potentially survivable with preparation.

It is worth pausing on the scale of what we are describing. The famine estimates above reflect a world with essentially no pre-war food resilience preparation—no pre-positioned seed banks, no scaled-up seaweed farming, no rationing protocols. In theory, aggressive food resilience investments could dramatically reduce these numbers, potentially saving billions of lives. The resilient food technologies discussed in Part II—seaweed, cold-tolerant crops, alternative food sources—exist in research form and could, with sufficient investment, be ready to deploy at scale. But as of today, that investment has not been made. The survivor estimates that follow therefore reflect a scenario closer to the dire end of the spectrum: not the worst imaginable, but far worse than it needs to be. The gap between where we are and where we could be with serious food resilience preparation is, arguably, the single most important finding in the catastrophic risk literature.

1.3 Who Survives and Where

Boyd and Wilson (2023), in their analysis of island refuges published in Risk Analysis, identified Australia, New Zealand, Iceland, the Solomon Islands, and Vanuatu as the island nations most likely to preserve complex societal functioning through an abrupt sunlight reduction scenario.8 Their analysis combined food self-sufficiency modeling under nuclear winter conditions with assessments of energy independence, manufacturing capability, social cohesion, and governance resilience.

New Zealand, in particular, has been the subject of sustained analysis as a potential resilience node. Boyd and Wilson’s 2022 report Sustained Resilience documented NZ’s strengths—geographic isolation, food production capacity (the country currently produces enough calories for approximately 40 million people), over 80% renewable electricity generation (a figure that has since risen above 85%)9—alongside critical vulnerabilities, particularly extreme dependence on imported refined fuel, digital infrastructure fragility, and inability to manufacture replacement parts for farm machinery.10

[SPECULATIVE] The following survival estimates are extrapolated from the academic literature but involve substantial uncertainty. We present ranges rather than point estimates, and readers should treat these as order-of-magnitude indicators rather than predictions.

Drawing on the Xia et al. models and regional food production data, we estimate the following plausible post-war survival ranges (5 years post-exchange):

South America: 100–300 million survivors. Brazil alone has enormous agricultural capacity and, in Itaipu (14 GW, operated jointly with Paraguay), the largest hydroelectric complex in the Southern Hemisphere. Argentina and Chile contribute significant food production and industrial capability. However, severe social disruption and governmental fragmentation are likely given the region’s existing institutional stresses.

Sub-Saharan Africa: 80–200 million survivors. East and Southern Africa are the most resilient sub-regions. Minimal industrial base in most areas, but some regions have resilient traditional food systems. South Africa retains significant industrial and mining capability.

Southeast Asia and Oceania: 50–150 million survivors. Australia’s mineral wealth and industrial capability, combined with NZ’s stability and food security, anchor the region. Southeast Asian nations face high-density population pressures against severely reduced food production.

South Asia: 30–100 million survivors. Southern India and Sri Lanka have the best prospects, given India’s significant industrial infrastructure, but the subcontinent faces severe disruption from population density, food import dependency, and potential direct targeting of Pakistan.

Northern Hemisphere remnants: 50–200 million scattered survivors. Concentrated in untargeted rural areas. Severe radiation contamination in many regions, prolonged nuclear winter impact, governmental collapse.

Total estimated global survivors: 300 million–1 billion, depending critically on pre-war resilience preparations. The lower end of this range—the scenario this paper primarily examines—reflects a world that has made little serious food resilience investment. The upper end, and potentially well beyond it, becomes possible with the kind of pre-war preparation discussed in Part II and Part VI. The gap between these outcomes represents billions of lives.

Part II: Surviving the First Five Years—Food Resilience

2.1 The Resilient Foods Portfolio

The numbers above are staggering. But a growing body of academic research has begun to ask what could be done—both before and after such a catastrophe—to reduce the death toll and improve the prospects for recovery. The answers are sobering in their own way: the gap between current preparedness and what would be needed is enormous. But the research at least begins to map what is possible.

The Alliance to Feed the Earth in Disasters (ALLFED), an organization based at the University of Canterbury, New Zealand, has produced the most comprehensive research program on food production under catastrophic conditions.11 Their 2024 review in Critical Reviews in Food Science and Nutrition catalogs a portfolio of food production methods that could be rapidly scaled in an abrupt sunlight reduction scenario, including crop relocation, cold-tolerant cultivars, seaweed farming, greenhouse production, and novel industrial food sources.12

2.2 Seaweed: The Most Promising Resilient Food

Jehn et al. (2024), published in AGU’s Earth’s Future, modeled global seaweed production under nuclear winter conditions and found that seaweed (Gracilaria tikvahiae) could provide the caloric equivalent of up to 45% of pre-war global human food demand within 9–14 months of scaling.13 Counterintuitively, seaweed growth rates actually increase under nuclear winter conditions because ocean upwelling delivers more nutrients to the surface. The farming technology required is low-tech—essentially ropes, buoys, and anchors—and could be deployed by workers with little or no prior experience.14
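A rough back-of-the-envelope model helps illustrate why the biological side of scaling is fast. Every parameter below is our own illustrative assumption, not a figure from Jehn et al.:

```python
import math

# Illustrative assumptions (ours, not from the cited study):
GLOBAL_POP = 8e9                 # people
KCAL_PER_PERSON_DAY = 2100       # baseline caloric demand
TARGET_SHARE = 0.45              # fraction of demand supplied by seaweed
KCAL_PER_TONNE_DRY = 2.5e6       # ~2,500 kcal per kg of dry seaweed (rough)
SUSTAINABLE_HARVEST = 0.05       # fraction of standing biomass harvested per day
SEED_BIOMASS_T = 1e4             # initial cultivated biomass, tonnes dry
DAILY_GROWTH = 0.10              # assumed 10%/day growth with upwelling nutrients

# Daily harvest (tonnes dry) needed to hit the caloric target
harvest_needed = GLOBAL_POP * KCAL_PER_PERSON_DAY * TARGET_SHARE / KCAL_PER_TONNE_DRY

# Standing biomass that can sustain that harvest indefinitely
standing_needed = harvest_needed / SUSTAINABLE_HARVEST

# Days of pure exponential growth to build up that standing crop
days = math.log(standing_needed / SEED_BIOMASS_T) / math.log(1 + DAILY_GROWTH)

print(f"{harvest_needed:,.0f} t/day harvest -> {standing_needed:,.0f} t standing")
print(f"~{days:.0f} days of biological scale-up")
```

On these assumptions the biology alone permits scale-up in roughly three months, well inside the published 9–14 month window, which suggests the binding constraint is manufacturing and deploying ropes, buoys, and anchors rather than growth rate.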

However, the practical challenges of deploying seaweed farming at scale in the immediate aftermath of a nuclear exchange should not be underestimated. The 9–14 month timeline assumes rapid, coordinated scaling from a near-zero base in most regions—during a period of extreme social trauma, political disorder, and fear. Seaweed farming also requires delayed gratification: farms must be established, grown, and expanded over months before meaningful harvests can begin, while populations are starving now. Without pre-war investment in seaweed farming infrastructure, training, and protocols, the gap between theoretical caloric potential and practical food delivery could be enormous. This is one of the strongest arguments for treating food resilience as a priority area for preparation today, even though this paper focuses primarily on post-catastrophe recovery mechanics.

One important caveat: the same study noted that New Zealand’s coastal waters may be too cold or nutrient-poor for optimal seaweed production even under nuclear winter conditions.15 NZ’s food resilience rests more on its existing pastoral agriculture, which—because it is grass-fed rather than grain-dependent—is more resilient to reduced sunlight conditions than the arable agriculture that dominates Northern Hemisphere food production.

2.3 Cold-Tolerant Crops and Crop Relocation

Research suggests that cool-tolerant crops—potatoes, sugar beets, barley, certain brassicas—could be relocated to tropical regions where temperatures remain viable even under nuclear winter conditions.16 Penn State researchers have recommended the preparation of “agricultural resilience kits” containing seeds for faster-growing, cold-adapted varieties.17 The difference between having these seed stocks pre-positioned and not could be measured in hundreds of millions of lives.

2.4 Industrial Food Production and Alternative Proteins

Beyond seaweed and crop relocation, researchers have identified several additional food production methods that could contribute meaningfully to post-catastrophe caloric supply. These range from low-tech approaches that could be deployed almost immediately to industrial processes that would require some surviving manufacturing capability.

Single-cell proteins—edible bacteria or fungi grown on simple feedstocks—represent perhaps the most intriguing industrial option. The Finnish company Solar Foods has commercialised a product called Solein: a protein powder produced by feeding hydrogen-oxidising bacteria with hydrogen (from electrolysis) and carbon dioxide from the air.18 The process requires only electricity and water, produces no agricultural waste, and is completely independent of sunlight. For a post-catastrophe civilisation with functioning renewable electricity, this is a potentially transformative technology—food production that requires no farmland, no sunlight, and no fossil fuels. The technology is not yet at a scale that could feed large populations, but the underlying biology is well understood and the equipment requirements are modest compared to semiconductor fabrication or petrochemical refining.

Cellulosic sugar—the enzymatic conversion of wood, crop residue, and other plant fibre into edible sugars—offers another pathway. The technology was originally developed for biofuel production and operates at industrial scale in several countries.19 Redirecting it to food production is conceptually straightforward: the enzymes break down cellulose (which humans cannot digest) into simple sugars (which they can). The feedstock is abundant in any forested region, and the process can operate year-round regardless of climate conditions. Leaf protein concentrate, extracted by mechanically pressing green leaves and precipitating the protein from the juice, is even simpler—requiring minimal equipment and drawing on plant material that would otherwise be inedible.

Greenhouse food production using geothermal or renewable electricity for heating and lighting could maintain some conventional crop cultivation even under nuclear winter conditions. New Zealand and Iceland are particularly well-suited to this approach given their abundant geothermal energy, and Iceland already produces significant quantities of tomatoes and cucumbers in geothermally heated greenhouses at high latitudes.20 The scale would be modest compared to open-field agriculture, but greenhouses could provide nutritional variety and fresh produce that would be otherwise unavailable.

Finally, strategic food stockpiling—the most conceptually simple and arguably most immediately impactful intervention—deserves mention. Finland maintains mandatory strategic grain reserves sufficient to feed its population for months. Most nations do not.21 A global network of strategic food reserves, pre-positioned in probable surviving regions and designed to bridge the gap between catastrophe and the scaling of alternative food production, could prevent millions of deaths during the critical first year when no other food system is yet operational. The cost would be substantial—building and maintaining reserves for hundreds of millions of people is expensive—but the intervention is concrete, proven, and requires no technological development whatsoever.
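The arithmetic of stockpile sizing is straightforward. The sketch below uses round-number assumptions of our own (caloric needs, grain energy density, a six-month coverage target) rather than official reserve figures:

```python
# Rough sizing of a strategic grain reserve; all parameters are
# illustrative assumptions, not official figures.
KCAL_PER_PERSON_DAY = 2100
KCAL_PER_TONNE_GRAIN = 3.4e6   # ~3,400 kcal/kg for wheat or similar
DAYS_COVERED = 180             # a six-month bridge to alternative foods

def reserve_tonnes(population: int) -> float:
    """Tonnes of grain needed to feed `population` for DAYS_COVERED days."""
    return population * KCAL_PER_PERSON_DAY * DAYS_COVERED / KCAL_PER_TONNE_GRAIN

finland = reserve_tonnes(5_500_000)      # Finland-scale reserve
network = reserve_tonnes(500_000_000)    # a multi-region network

print(f"Finland-scale reserve: {finland:,.0f} t")
print(f"500M-person network:   {network:,.0f} t")
```

The larger figure, around 56 million tonnes, is roughly 2% of annual global grain production; at plausible bulk grain prices it implies a cost in the low tens of billions of dollars. Substantial, but concrete and technologically trivial.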

A thorough examination of these food resilience technologies is beyond the scope of this paper—each deserves more detailed treatment than this paper can provide.22 What matters for our purposes is the cumulative picture: there is not one resilient food technology but a portfolio of them, spanning a range of technological complexity and lead times. The more of these that are developed, tested, and pre-positioned before a catastrophe, the more people survive. The research base is growing rapidly; what is lacking is the investment to translate research into deployable capability.

Part III: The Energy and Mechanization Problem

Perhaps the most challenging practical question for post-catastrophe recovery is: how do you maintain agricultural production without fossil fuels? Modern agriculture is profoundly diesel-dependent. Tractors, harvesters, transport vehicles, and processing equipment all run on refined petroleum products. This vulnerability applies everywhere, but it has been examined in particular detail in the New Zealand context—Boyd and Wilson specifically identified dependence on imported refined fuel as one of NZ’s most critical vulnerabilities: “critically, energy is needed for food processing and distribution. Milk needs to be transported every day, without electric trucks this requires refined fuel.”23

After a full-scale exchange, global petroleum refining and distribution would collapse. Existing fuel stocks would be consumed within weeks to months. But the problem runs deeper than disrupted supply chains. Fossil fuels would play a critical but limited role in the earliest phase of recovery—rationed stockpiles powering essential transport and machinery—but would be largely exhausted within years and cannot serve as a foundation for long-term rebuilding, for reasons worth examining in detail.

3.1 Why Fossil Fuels Cannot Drive Recovery

It is tempting to assume that a recovering civilization would simply re-access fossil fuels the way industrializing nations did in the 18th and 19th centuries. But the easy fossil fuels are gone. The coal that powered the original Industrial Revolution was often accessible from surface outcrops or shallow mines requiring minimal technology. The oil that transformed the 20th century initially gushed from shallow wells drilled with simple equipment. Today’s remaining reserves are predominantly deep underground, offshore, in tar sands, or in shale formations requiring hydraulic fracturing—all of which demand advanced drilling equipment, specialized metallurgy, complex chemical processing, and enormous capital investment.

Consider what modern oil extraction actually requires: drill bits made from tungsten carbide or polycrystalline diamond, drilling mud (a precisely engineered chemical fluid that stabilises the borehole and prevents blowouts), steel casing rated for extreme pressures, blowout preventers weighing hundreds of tonnes, and refineries comprising thousands of specialized components manufactured across global supply chains. A post-catastrophe society with degraded industrial capability simply cannot produce these things. The expertise alone—petroleum engineers, drilling specialists, refinery operators—represents decades of accumulated institutional knowledge that would be severely depleted even with hundreds of millions of survivors.

The problem goes deeper than physical hardware. Modern oil exploration is fundamentally dependent on advanced computing. Locating remaining reserves requires processing petabytes of seismic data on supercomputers—major oil companies operate dedicated high-performance computing clusters solely for this purpose. Hydraulic fracturing, which accounts for 90% of new U.S. wells, relies on real-time computational monitoring, AI-driven optimization, fiber-optic sensing networks, and cloud-connected control systems. Even conventional well management increasingly depends on sensor networks and automated systems. The easy oil that could be found and extracted without any of this—shallow deposits that gushed under natural pressure—is largely exhausted. What remains requires not just industrial hardware but semiconductor-level technology to locate and produce. This creates a particularly vicious chicken-and-egg problem: rebuilding computing requires an industrial base, an industrial base requires energy, and accessing remaining energy requires computing.

The population dimension compounds this. Modern drilling, refining, and distribution represent the output of a highly specialized global economy with billions of participants. A post-catastrophe world of hundreds of millions of survivors, however resourceful, simply does not have the population base to sustain the thousands of specialized roles that the fossil fuel supply chain demands. Rebuilding that capacity could take a century or more—by which point other energy pathways would likely be more practical.

This is not to say that fossil fuels should be ignored where they are accessible. Surface coal deposits, shallow natural gas wells that remain operational, existing stockpiles of refined fuel—all of these should be exploited opportunistically during the early recovery period. If accessible coal can power a steam engine that prevents people from starving while electrical alternatives are being developed, that is obviously the right choice. Some regions—parts of Australia, southern Africa, South America—may have easier access to coal or gas deposits than others, and those regions should use what they have. The point is not ideological opposition to fossil fuels but a clear-eyed assessment that they cannot be the foundation of recovery the way they were the foundation of the original industrialization.

This is a crucial insight for recovery planning: the path back cannot retrace the original Industrial Revolution. Coal and oil were the enabling technologies for the first industrialization precisely because they were easy to access with primitive tools. That historical shortcut is mostly gone. Recovery must instead be built primarily on the energy sources that are available to a society with moderate technological capability—which is where renewable electricity excels. Hydroelectric dams, geothermal plants, and wind turbines are complex to build from scratch but require no fuel inputs once operational. In regions where this infrastructure already exists—as it does extensively in New Zealand—the transition is a matter of maintenance and adaptation rather than construction from zero. The question is not how to re-create the fossil fuel economy but how to build a post-fossil-fuel technological civilization, supplemented by whatever fossil fuels happen to be accessible, using the renewable infrastructure that survives the catastrophe.

The remainder of this section examines the available alternatives in order of technological complexity.

3.2 Can We Revert to Animal-Powered Farming?

The most immediate question for any surviving agricultural region is whether it can revert to animal-powered farming. The answer is sobering. Every industrialised nation has undergone the same transition from draft animals to machinery over the past century, and in every case, the populations of working animals and the knowledge to use them have collapsed. New Zealand provides a well-documented illustration of the problem. European-style farming in NZ was entirely animal-powered from the 1820s through the early 1900s, with Clydesdale horses and bullocks providing draft power for plowing, harvesting, and transport.24 By the mid-1950s, the transition from horse power to tractor power was almost complete, and draft horse populations collapsed.25

Today, the situation is stark. New Zealand has approximately 750 Clydesdales remaining—out of roughly 5,000 worldwide—and the number of breeding mares producing foals has halved in the past decade, with only 44 registered foals expected in 2025.26 The Clydesdale Horse Society notes that “there’s not many of us left that know the old ways, working the horses and feeding them right. After my generation, we haven’t got that many young ones coming on behind us.”27

The problem extends beyond numbers (a horse population can expand from 750 to working levels within a decade through intensive breeding) to knowledge, infrastructure, and land use. Draft horses consume the equivalent of “as much as eight men or four sheep,” requiring significant acreage dedicated to oat production for feed.28 A full reversion to animal-powered agriculture would require diverting productive farmland to horse feed, reducing total food output at exactly the moment it needs to be maximized. Moreover, the specialized knowledge of teamster work—harnessing, working, and maintaining teams of draft horses—has nearly vanished from NZ’s working population.
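The feed arithmetic can be made concrete with a rough sketch. All parameters are illustrative assumptions of our own (the “eight men” figure from the text translated into calories, plus a modest oat yield):

```python
# Feed-land arithmetic for draft horses; every parameter is a rough
# illustrative assumption, not a measured figure.
HORSE_KCAL_PER_DAY = 20_000      # ~"eight men" at 2,500 kcal each
OAT_YIELD_T_PER_HA = 2.0         # modest oat yield, tonnes/ha/year
KCAL_PER_TONNE_OATS = 3.0e6

ha_per_horse = HORSE_KCAL_PER_DAY * 365 / (OAT_YIELD_T_PER_HA * KCAL_PER_TONNE_OATS)

# A historical mixed farm might have worked six horses on ~50 ha
farm_ha, team = 50, 6
feed_share = team * ha_per_horse / farm_ha

print(f"{ha_per_horse:.2f} ha of oats per horse")
print(f"~{feed_share:.0%} of a {farm_ha} ha farm diverted to horse feed")
```

Even on generous assumptions, a working team claims a meaningful slice of a farm’s productive land, which is the heart of the objection to full reversion.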

[SPECULATIVE] While a gradual expansion of draft horse populations is feasible over a 5–10 year period, we assess that animal power alone cannot sustain modern agricultural output in any industrialised region. The problem is universal: draft animal populations, knowledge, and supporting infrastructure have atrophied everywhere. This argues strongly for maintaining electrified mechanization capabilities.

3.3 The Renewable Electricity Advantage

If fossil fuels cannot power recovery, what can? The answer, for any region fortunate enough to have it, is existing renewable electricity infrastructure. Hydroelectric dams, geothermal plants, and wind turbines require maintenance but no fuel inputs—they continue generating power as long as the equipment functions. The key variable across surviving regions is how much of this infrastructure already exists. Brazil’s massive hydroelectric system, Australia’s growing solar fleet, and East Africa’s geothermal installations all represent critical recovery assets. But New Zealand offers the clearest illustration of how a renewable-dominated grid could sustain a recovering society.

New Zealand generates approximately 43,500 GWh of electricity annually, with over 85% from renewable sources: hydroelectric (approximately 60%, from over 5,000 MW of installed capacity), geothermal (approximately 18%), and a rapidly growing wind and solar fleet.29 The national grid comprises nearly 11,000 km of high-voltage transmission lines connecting 178 substations.30 Crucially, this infrastructure is not dependent on fossil fuels for its primary generation. Hydroelectric dams and geothermal plants require maintenance but no fuel inputs.
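To get a feel for what this generation represents for agricultural electrification, a quick sketch. The annual generation figure is from the text; the agricultural share and tractor parameters are our own assumptions, with the 300 kW draw taken from the GridCON discussion in Section 3.4:

```python
# What NZ-scale generation could support, as a rough sketch.
ANNUAL_GWH = 43_500                          # from the text
avg_mw = ANNUAL_GWH * 1_000 / 8_760          # average output in MW

AG_SHARE = 0.10     # assume 10% of average output diverted to field work
TRACTOR_KW = 300    # GridCON-class continuous draw

fleet = avg_mw * 1_000 * AG_SHARE / TRACTOR_KW

print(f"Average output: {avg_mw:,.0f} MW")
print(f"~{fleet:,.0f} GridCON-class tractors running on 10% of the grid")
```

Since NZ’s agriculture is mostly pastoral, this is best read as a generic illustration of the headroom a renewable grid of this size offers, not an NZ deployment plan.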

This is NZ’s most critical post-catastrophe asset, and the principle applies wherever renewable infrastructure exists. While most nations’ electricity generation depends heavily on coal, gas, or imported uranium, any nation with substantial renewable capacity would see its power supply continue functioning—degrading gradually as maintenance becomes difficult, but not collapsing abruptly. NZ is the strongest example: the 1987 NZ Nuclear Impacts Study, referenced by Boyd and Wilson, reached similar conclusions about the resilience of NZ’s hydro-dominated electricity system.31

However, electrical grids everywhere were designed to serve urban centres and industrial loads, not agricultural fields. The question for any region with surviving renewable capacity is whether electrical power can be extended to, or delivered at, the point of agricultural use.

3.4 Tethered Electric Tractors: A Proven Concept

The idea of powering farm equipment directly from the electrical grid via a cable is not science fiction—it has been demonstrated repeatedly, from the first Zimmermann electric ploughs in Germany in 1894 to modern prototypes. For any surviving region with large-scale arable farming and a functioning electrical grid, tethered electric tractors represent a viable path to maintaining crop production without fossil fuels. (New Zealand, where agriculture is overwhelmingly pastoral rather than arable, would have limited direct use for this technology—but regions like Argentina, Australia, and Brazil would benefit enormously.) In 2019, John Deere unveiled the GridCON, a fully electric, cable-powered, autonomous tractor based on the 6210R platform. The GridCON draws power continuously at over 300 kW via a cable from the field border, with a drum carrying up to 1,000 meters of cable. It operates autonomously at up to 20 km/h and achieves 85% drivetrain efficiency.32

Italian company OXE-E has developed a similar concept: a 100% electric, cable-powered, autonomous tractor using steel tracks rather than wheels, with the cable managed by a rotating dome mechanism.33 John Deere’s researchers have modeled daisy-chain configurations of multiple tethered units in the field, covering larger acreages.34

Earlier attempts are also instructive. In 1925, Major Andrew McDowall in East Lothian, Scotland, built an electric tractor powered by a 12.5 hp motor, drawing power through a cable from repositionable power points, covering approximately 400 meters in each direction. He claimed plowing costs were half those of conventional tractors.35 The Soviet Union also pursued cable-powered electric tractors in the 1940s and early 1950s, though the program was ultimately abandoned due to cable management difficulties.36

The key constraints for cable-powered tractors are: (a) the cable length limits range—current prototypes carry 1 km of cable, which covers fields of 50–100 hectares depending on configuration; (b) cable management requires sophisticated guidance to prevent tangling; and (c) a high-power electrical connection must be available at the field border.
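The geometry behind constraint (a) is simple enough to sketch. The cable length is from the prototypes above; the field dimensions are assumed for illustration:

```python
import math

# Reach of a tethered tractor with a 1 km cable.
CABLE_M = 1_000

# Single fixed anchor at the field border: a half-disk of reach
half_disk_ha = math.pi * CABLE_M**2 / 2 / 10_000   # m^2 -> hectares

# Anchor repositionable along one border: a rectangular strip whose
# depth equals the cable length (an idealised upper bound)
border_m = 800
strip_ha = border_m * CABLE_M / 10_000

print(f"Fixed anchor:  {half_disk_ha:.0f} ha reachable")
print(f"Moving anchor: {strip_ha:.0f} ha for an {border_m} m border")
```

Practical coverage of 50–100 hectares sits below these idealised bounds because of headlands, cable wear, and routing losses.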

3.5 Rural Electrification: How Far Can We Extend the Grid?

The historical record of rural electrification is directly relevant. In the United States, only 10% of farms had electricity in 1935. Private utilities estimated transmission line costs at approximately $2,000 per mile ($30,000+ in 2020 dollars) and considered rural service uneconomical.37 The Rural Electrification Administration, created by Roosevelt in 1935, financed cooperative-owned distribution systems and achieved near-universal farm electrification by the early 1970s.38

New Zealand electrified faster. The country’s nationalized power system, anchored by hydro, enabled rapid rural extension; as one history of rural electrification observes, “in places where rural services were nationalized or subsidized, such as in New Zealand, the countryside was rapidly brought into the grid.”39 By mid-century, NZ’s rural areas were substantially electrified.

The World Bank/ESMAP benchmarking study on rural grid extension found medium-voltage distribution line costs ranging from $8,000–$20,000 per kilometer in developing countries, with figures of roughly $4,000/km achievable with single-wire earth return (SWER) designs—a simplified system using one wire and the ground itself as the return path, ideal for low-density rural areas.40 NZ already uses SWER lines extensively in rural areas.

The critical point is that many surviving regions already have extensive rural electrification—New Zealand, Australia, Argentina, and parts of Brazil and South Africa all electrified their agricultural areas during the twentieth century. The infrastructure to deliver power to farms largely exists. What would be needed post-catastrophe is: (a) maintaining the existing grid with locally manufactured replacement parts, and (b) potentially extending or upgrading connections to provide the higher-capacity service (300+ kW) that tethered electric tractors require at field borders. The latter is a significant engineering task but not an insurmountable one—it requires heavier gauge wire and additional transformer capacity, technologies well within reach of a society with functioning hydroelectric power and basic copper-working capability.
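The claim that the field-border connection is tractable can be checked with the standard three-phase current formula. The voltages and power factor below are typical assumed values, not measurements from any specific network:

```python
import math

# Current needed to deliver 300 kW three-phase at two voltage levels.
P_W = 300_000
PF = 0.9   # assumed power factor

def line_current(v_ll: float) -> float:
    """Three-phase line current: I = P / (sqrt(3) * V_LL * pf)."""
    return P_W / (math.sqrt(3) * v_ll * PF)

i_11kv = line_current(11_000)   # medium-voltage rural feeder
i_400v = line_current(400)      # low-voltage service at the field edge

print(f"At 11 kV: {i_11kv:.1f} A (light conductor suffices)")
print(f"At 400 V: {i_400v:.0f} A (heavy conductor needed)")
```

This is why delivery at medium voltage with a step-down transformer at the field border is the natural design: the feeder itself carries under 20 A, and only the final transformer and drop need heavy-current hardware.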

3.6 Hydrogen as a Portable Energy Carrier

For agricultural operations beyond the reach of the electrical grid—remote farmland in any region, hill country pastoral stations, areas where grid extension is impractical—an alternative energy carrier is needed. Tethered electric tractors solve the problem for flat, grid-adjacent arable land, but much of the world’s agriculture operates far from existing power lines. Hydrogen, produced by electrolysis of water using renewable electricity, is the most promising candidate for filling this gap. The technology chain is conceptually simple: electricity splits water into hydrogen and oxygen; the hydrogen is stored and transported; a fuel cell or combustion engine converts it back to mechanical power.
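
The cost of this conceptual simplicity is efficiency. A back-of-envelope chain makes the trade-off visible; each per-stage value below is an illustrative mid-range assumption, not a measured figure:

```python
# Round-trip efficiency of the electricity -> hydrogen -> mechanical-power
# chain described above. Each stage value is an illustrative mid-range
# assumption, not a measured figure.
stages = {
    "alkaline electrolysis": 0.70,    # electricity -> hydrogen
    "compression and storage": 0.90,  # to ~50-100 bar
    "alkaline fuel cell": 0.55,       # hydrogen -> electricity
    "electric drivetrain": 0.90,      # electricity -> mechanical power
}

round_trip = 1.0
for stage, eff in stages.items():
    round_trip *= eff
    print(f"after {stage}: {round_trip:.0%}")
```

Under these assumptions only about a third of the input electricity reaches the wheels, versus the large majority for a direct grid connection, which is why tethered electric equipment remains preferable wherever the grid reaches and hydrogen serves as the fallback beyond it.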

Multiple major manufacturers are actively developing hydrogen-powered tractors. AGCO’s Fendt Helios prototype uses a 100 kW fuel cell and 25 kW battery to deliver approximately 135 hp, matching a conventional mid-sized tractor’s capability for a 5–8 hour working window.41 Kubota has unveiled an autonomous hydrogen fuel cell tractor prototype with three hydrogen tanks providing approximately 4 hours of operation, refuelable in 10 minutes.42 Massey Ferguson is targeting a 2026 debut for a hydrogen-powered prototype.43

For a post-catastrophe civilization, the critical question is whether hydrogen production and fuel cell technology are achievable at moderate technological levels. The basic process—electrolysis, or splitting water into hydrogen and oxygen using electricity—was first demonstrated in 1800 and requires only electrodes, a membrane (or simply a tank), and electrical current. Industrial alkaline electrolyzers have been manufactured since the 1920s. Fuel cells, which reverse the process to convert hydrogen back into electricity, are more complex. The most advanced modern designs—proton exchange membrane (PEM) cells—require platinum catalysts and specialized membranes. But simpler alkaline fuel cells, the type that powered the Apollo spacecraft, use nickel rather than platinum and are well within reach of a society with mid-twentieth-century industrial capability.

[SPECULATIVE] We estimate that a recovering society with 1950s–1960s-level industrial capability could manufacture basic alkaline electrolyzers and alkaline fuel cells. Storing hydrogen at the 350-bar pressures used in modern fuel cell vehicles requires advanced metallurgy and is a significant engineering challenge, but lower-pressure storage (50–100 bar) for stationary and slow-moving agricultural applications is considerably simpler. This assessment requires further engineering analysis to validate.
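
The volumetric penalty of lower-pressure storage can be estimated with the ideal gas law. The temperature and the diesel comparison figure below are assumptions, and the ideal-gas approximation overestimates hydrogen density somewhat above roughly 150 bar:

```python
R = 8.314        # J/(mol*K), gas constant
M_H2 = 2.016e-3  # kg/mol, molar mass of hydrogen
T = 293.0        # K, assumed ambient temperature

def h2_density_kg_m3(pressure_bar: float) -> float:
    """Ideal-gas hydrogen density; overestimates somewhat above ~150 bar."""
    return pressure_bar * 1e5 * M_H2 / (R * T)

LHV_H2 = 120e6      # J/kg, lower heating value of hydrogen
DIESEL_J_M3 = 36e9  # J/m3, approximate volumetric energy of diesel (assumption)

for p_bar in (100, 350):
    energy_j_m3 = h2_density_kg_m3(p_bar) * LHV_H2
    print(f"{p_bar} bar: {h2_density_kg_m3(p_bar):.1f} kg/m3, "
          f"{energy_j_m3 / DIESEL_J_M3:.1%} of diesel's energy per volume")
```

Even at 350 bar, compressed hydrogen carries only a small fraction of diesel's energy per unit volume, so low-pressure storage implies large tanks and suits stationary or slow-moving agricultural equipment rather than general transport.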

An interim approach uses hydrogen in internal combustion engines rather than fuel cells. The H2 Dual Power tractor, commercially available in the Netherlands, mixes hydrogen into a modified diesel engine, reducing emissions while using familiar mechanical technology.44 Pure hydrogen internal combustion engines are well-established technology, requiring only modest modifications to conventional engines. This approach sacrifices the efficiency advantages of fuel cells but is achievable at a lower technology level.

3.7 The Practical Farm Energy Pathway

Based on this analysis, the practical pathway for maintaining agricultural mechanisation without fossil fuels combines several of the technologies described above. In the earliest years, rationed fuel stocks and repurposed electric vehicles bridge the gap. As fuel is exhausted, grid-connected electric farming—tethered tractors and stationary electric equipment—takes over for flatland arable regions, while animal power supplements mechanisation elsewhere. Over a longer horizon, hydrogen produced from surplus renewable electricity extends mechanised agriculture to areas beyond grid reach, initially through converted internal combustion engines and later through purpose-built fuel cell equipment. The specifics will vary enormously by region depending on existing infrastructure, climate, and agricultural profile, and the timelines for these transitions are discussed in Part V.

Part IV: The Knowledge Accelerant—AI Inference as Recovery Infrastructure

4.1 The Knowledge Bottleneck

Parts II and III described what a recovering society needs to do—grow food without sunlight, maintain agriculture without fossil fuels, rebuild industrial capability from a diminished base. But knowing what needs to be done and knowing how to do it are very different things. Boyd and Wilson identified a critical vulnerability: “New Zealand… lacks the ability to manufacture many replacement parts for farm and food processing machinery.”45 This illustrates a broader problem: modern technological society depends on an extraordinary depth of specialized knowledge distributed across millions of experts globally. Even with hundreds of millions of survivors, the density of expertise in any given domain—metallurgy, electrical engineering, chemical processing, semiconductor fabrication—would be drastically reduced. The knowledge exists in libraries and digital archives, but translating textbook knowledge into practical capability requires expert guidance. This translation gap—between preserved knowledge and practical implementation with the specific materials and equipment actually available—may be the most important bottleneck of all. It is also, we believe, the one most amenable to a concrete solution.

4.2 A Pre-Positioned AI Inference Facility

[SPECULATIVE] This section describes a proposed facility concept. The commercial and technical feasibility of this specific design requires further analysis, but the general principle—that preserved computational systems could dramatically accelerate recovery—is supported by the Boyd and Wilson recommendation to “research actions NZ might take to increase the chance of rebooting a collapsed global civilization, such as developing local digital manufacturing, renewable energy, and other independent high-tech sectors.”

We propose the construction of a secure, robust AI inference facility in New Zealand, powered by the country’s renewable electricity grid, designed to serve dual purposes: commercial AI computation services pre-catastrophe (funding its construction and operation), and civilizational recovery guidance post-catastrophe.

Such a facility, containing large language models trained on the totality of human technical knowledge, could provide expert-level guidance across every domain simultaneously: agricultural optimization for nuclear winter conditions, industrial process adaptation for available materials, energy infrastructure planning, medical guidance, and engineering design. This level of capability would be unique among surviving populations and could make an immense difference to the recovery trajectory.

The facility would not operate in isolation. Both the knowledge it generates and the computational capability it represents can be distributed outward through a layered system of communication, transport, and pre-positioned technology—each tier suited to different levels of urgency, bandwidth, and distance.

At the local level—within surviving nations—much of the existing communications and transport infrastructure would remain functional. Domestic fibre-optic networks, cellular towers, and WiFi equipment in non-targeted countries would largely survive, particularly where renewable electricity keeps them powered. This means that within a country like New Zealand or Australia, electronic distribution of knowledge could begin immediately: technical documents, agricultural guidance, and medical protocols transmitted over surviving domestic networks to any functioning computer or phone. Local transport would also remain viable. Electric vehicles, already numbering in the millions across surviving regions, require no fuel supply chain—only electricity. Cars, vans, and light trucks could handle urgent deliveries of USB drives, printed materials, and computer equipment. Electric bicycles and, where available, drones extend this last-mile capability further.

At the regional level—between nearby surviving nations, such as New Zealand and Australia—some telecommunications infrastructure may persist for a period. Undersea cables between non-targeted countries could carry traffic as long as their terminal equipment is maintained and powered, though the longevity of this capability is uncertain. Satellite communication systems may also remain partially functional, though they are vulnerable to disruption from nuclear detonations in orbit and would degrade as ground stations fail and satellites reach end of life without replacement. HF (high-frequency) radio, which functions without satellites or internet infrastructure and has been used for trans-oceanic communication since the 1920s, provides a reliable fallback for text-based exchange. For physical transport of knowledge and computing resources—inference hardware, USB libraries, printed technical documentation—air transport would be available in the early years. Commercial and general aviation aircraft in surviving regions would remain serviceable, and stored jet fuel with stabilisers can last several years. Cannibalization of grounded aircraft for spare parts extends maintenance windows further. Air capability would decline over roughly 5–10 years as fuel stocks and maintainable airframes are exhausted, but this window covers precisely the critical early period when distributing inference capability and technical knowledge matters most. (The broader role of air transport in moving critical goods—medical supplies, specialist personnel, seed stocks—is discussed in Part VI.)

At the intercontinental level—connecting, say, New Zealand to South America or Southern Africa—sail would be the primary transport mode from the outset for heavier cargo: computing hardware, inference equipment for redistribution, printed reference libraries, and bulk supplies. Air transport, rationed carefully to extend fuel stocks as long as possible, would be reserved for the most time-critical deliveries—USB drives carrying vast technical libraries at negligible weight, critical spare parts, specialist personnel. As air capability declines over roughly 5–10 years, HF radio continues to provide text-based exchange and sail handles all physical delivery. Sail is slow by modern standards, but it is reliable, requires no fuel, and draws on well-understood technology.

A critical complement to all of these channels is the pre-positioning of knowledge and AI capability on existing devices before a catastrophe occurs. The world currently contains billions of personal computers, tablets, and smartphones. How many would survive depends heavily on the scenario—devices in targeted Northern Hemisphere regions would largely be lost, but hundreds of millions in the Southern Hemisphere and other non-targeted regions would remain functional for ten years or more, with potentially many more recoverable from less affected areas. The challenge is not hardware but ensuring these devices carry useful information. USB drives and solid-state storage devices loaded with comprehensive technical libraries—agricultural manuals, medical references, engineering databases, educational curricula—are tiny, inexpensive, and highly durable in storage (though unpowered flash memory gradually loses its charge over years to a decade or more, so stockpiles would need periodic refreshing). Pre-distributed by the thousands to communities, libraries, hospitals, and institutions worldwide, they could make a civilisation-rebuilding knowledge base available to anyone with a functioning screen. Even small language models can now run on ordinary smartphones, providing basic question-answering and knowledge retrieval capability with no network connection required. More capable AI inference systems—compact devices with large unified memory running more powerful quantised models—could serve as regional knowledge bases and AI assistants, handling queries that smaller models cannot. These could be pre-positioned in the hundreds or thousands at modest cost. Larger inference installations, while not matching the central facility’s capability, could be established in countries with existing data centre infrastructure and reliable renewable power—Australia, Brazil, and others.
If the New Zealand facility is built with capacity beyond its own peacetime needs (which makes commercial sense for serving global data sovereignty demand), surplus equipment could also be redistributed to other regions by emergency air transport or sail whenever such networks are available, with the facility itself generating detailed, customised installation guidance for whatever building and power supply the recipient has available.

The resulting knowledge distribution architecture is therefore not a single channel but a hierarchy. Pre-positioned devices and surviving local networks handle the vast majority of routine queries from day one. Regional inference installations and air-transported hardware address more complex needs. The central facility tackles the hardest problems—novel engineering challenges, comprehensive technical documentation, recovery coordination across regions—and generates the detailed guidance that flows outward through every available channel. Over the facility’s 10–20 year operational window, this layered system could transfer an enormous body of practical knowledge to surviving regions worldwide, far more effectively than any single channel could achieve alone.

New Zealand is the natural home for such a facility for reasons that go beyond catastrophe resilience. Commercially, data sovereignty is a rapidly growing concern—organisations worldwide, particularly in Europe, are increasingly uneasy about hosting sensitive computation with providers subject to the jurisdiction of larger powers. AI inference on confidential data—proprietary corporate information, strategic datasets, sensitive records—is an emerging frontier of this same concern, since it requires feeding that data directly through computational models in a specific jurisdiction. For organisations with these needs, where their AI runs matters as much as where their data is stored. New Zealand’s strong rule of law, political neutrality, transparent governance, and distance from great-power rivalries make it unusually well-suited to this role. Powered by NZ’s abundant renewable electricity, such a facility would be both commercially competitive and aligned with growing demand for low-carbon computing. There is also a strategic benefit to New Zealand itself: a facility serving corporations and institutions across competing geopolitical power centers gives all of those actors a direct interest in New Zealand’s continued stability, security, and prosperity—reinforcing the country’s position as a trusted neutral jurisdiction rather than tying it to any single bloc.

From a resilience perspective, these same qualities take on additional significance. A facility designed to survive and serve its purpose through a global catastrophe needs to be in a jurisdiction that is unlikely to be targeted, that can feed and power itself independently, and whose institutions are robust enough to maintain order and purpose under extreme stress. New Zealand meets these criteria more convincingly than perhaps any other nation. Critically, the facility’s post-catastrophe purpose would be to serve New Zealand and the world—not its investors or operators. While the initial development of such a facility would likely be driven by commercial logic and private capital, it would be appropriate for the governance structure to mature over time to include international institutional participation—and in a catastrophe scenario, the facility should pass from private hands to some form of public stewardship—whether national, international, or both—to serve the recovery of all surviving regions. Such a trajectory would make it genuinely public infrastructure: generating economic value and employment in normal times, reinforcing the independence and security that New Zealanders cherish, and positioning the country to play an extraordinary international role if the worst should happen.

4.3 Hardware Lifespan and Knowledge Persistence

Modern GPU hardware has a functional lifespan of 5–15 years with proper maintenance and cooling. With degraded maintenance capability, the facility might operate for 10–20 years post-catastrophe. During this window, its highest-value activity would be generating comprehensive technical documentation—producing a civilization-rebuilding library tailored to available resources and conditions in each surviving region, distributed both digitally through the layered network described above and in printed form for long-term preservation.

Even after the hardware fails, the knowledge it generated—in digital, printed, institutional, and educational form—would persist indefinitely. The AI’s most lasting contribution would not be real-time computation but the body of translated, adapted, practical technical knowledge it produced during its operational years.

Importantly, this documentation work need not wait for a catastrophe. The facility could include dedicated inference capacity for recovery research, generating, refining, and continuously updating a comprehensive recovery knowledge base during normal operations. Results could be published openly for review and correction by domain experts worldwide, and other facilities in other regions could mirror the library. By the time a catastrophe occurs—if it ever does—the recovery documentation would be a mature, peer-reviewed resource rather than something generated under crisis conditions. This also means the facility is producing tangible public-benefit output from day one.

4.4 From AI to Basic Computing: Bridging the Technology Gap

There is an enormous gap between a pre-positioned AI inference facility running modern GPUs and the kind of computing technology that a recovering society could manufacture for itself. Understanding this gap—and how to bridge it—is essential to the facility’s long-term value.

Modern AI inference requires not just advanced hardware but enormous quantities of it—thousands of GPUs working in concert, supported by high-bandwidth memory, networking interconnects, and cooling systems, all operating at scales that would have been unimaginable even twenty years ago. And each individual GPU represents perhaps the most complex manufacturing achievement in human history: billions of transistors fabricated at nanometer scales in facilities costing tens of billions of dollars, requiring thousands of specialized chemicals, ultra-pure materials, and precision equipment that itself took decades to develop. If this litany of dependencies sounds daunting, that is the point: no recovering society will manufacture GPUs for generations. The AI facility is, in a real sense, a one-time gift—a finite window of access to capabilities that cannot be reproduced until civilization has substantially recovered.

But computing itself has a much longer history than modern semiconductors, and a recovering society would not need to retrace every step. The first electronic computers in the 1940s used easily manufacturable vacuum tubes, but this phase can likely be skipped entirely. Transistors, invented in 1947, can be fabricated from germanium or silicon with 1950s-era equipment—and with the advantage of knowing the destination, a well-prepared society could move directly to transistor-based computing without the vacuum tube detour. Integrated circuits at the scale of the early 1970s—thousands of transistors on a single chip, sufficient for basic microprocessors—require clean rooms and photolithography (using light to etch circuit patterns onto silicon) but not the extreme ultraviolet lithography of modern fabs. Each of these steps is documented in exhaustive detail in the technical literature. Critically, aggressive stockpiling of existing computers and solid-state storage before a catastrophe could extend the window of key pre-war computing availability to 20–30 years or more, bridging much of the gap before locally manufactured transistor systems come online.

The AI facility’s most strategic use of its operational window may therefore be not just answering questions but actively producing the documentation, designs, and educational curricula needed to bootstrap a local computing industry from first principles. This means generating detailed, adapted-to-available-materials instructions for: basic semiconductor processing using locally available silicon (abundant in sand) and germanium; discrete transistor circuit design as the first achievable step; simple integrated circuit design at 1970s density; and the programming knowledge to make these systems useful. A 1970s-era minicomputer is not an AI, but it can run agricultural optimization models, engineering simulations, medical databases, and communication protocols—capabilities that would be transformative for a recovering society.

The path from there to modern computing is long. Moving from 1970s-scale integration (thousands of transistors) to even 1990s capability (millions of transistors) requires progressively more sophisticated fabrication equipment, cleaner clean rooms, and more precise photolithography. Each generation of improvement enables the tools needed for the next generation—a bootstrapping process that originally took the global semiconductor industry roughly 40 years with massive investment and a population of billions supporting extreme specialization. A recovering civilization with hundreds of millions of people and competing priorities would likely need considerably longer. But the key insight stands: a recovering society does not need to rebuild modern AI to benefit enormously from computing; it needs to get back to where computing was fifty years ago, which is a far more achievable goal.

[SPECULATIVE] We estimate that a well-prepared recovering society could extend the useful life of pre-war computing hardware to 20–30 years through aggressive stockpiling of devices kept in storage, plausibly manufacture discrete transistor computer systems within 20–40 years of the catastrophe, and achieve simple integrated circuits (equivalent to early 1970s technology) within 40–80 years. Vacuum tube computers, while technically easier to build, would likely be skipped entirely—the performance gap between vacuum tubes and transistors is enormous, the materials for transistor fabrication (silicon, germanium) are abundant, and pre-war documentation would provide detailed manufacturing instructions. Reaching modern semiconductor capability—the kind needed to manufacture GPUs and run AI systems again—would likely require at least 100–200 years and a substantially recovered global economy with deep specialisation. These timelines are highly speculative but are informed by the pace of the original development, compressed by the advantage of knowing the destination and by the bridge that stockpiled pre-war hardware provides.

This timeline underscores why the AI facility’s operational window is so valuable. It represents a 10–20 year period during which a recovering society has access to capabilities it will not be able to reproduce for a century or more. Every hour of that window spent generating practical, actionable knowledge—printed, distributed, institutionalized—pays dividends across the entire recovery arc.

Part V: The Recovery Trajectory

5.1 What Changes with Hundreds of Millions of Survivors

The preceding sections have examined the critical building blocks of recovery—food production, energy infrastructure, and the knowledge systems that make everything else possible. But recovery is not a checklist; it is the messy, simultaneous pursuit of all of these at once, constrained by the same scarce labour, degraded supply chains, and difficult choices about where to invest limited resources. What does that actually look like? How do surviving regions reconnect, rebuild industry, and begin the long climb back toward modern capability? The answers depend heavily on the scale of surviving human capital and infrastructure.

Tens of millions of people globally hold meaningful engineering or scientific expertise today—the United States alone employs roughly 7 million in science and engineering occupations. But the overwhelming majority of the world’s technical workforce is concentrated in the Northern Hemisphere—the United States, Europe, China, Japan, South Korea, Russia—precisely the regions that would suffer the heaviest casualties. The significant technical populations in the Southern Hemisphere and other less-exposed regions are in Brazil, India, Australia, South Africa, and Argentina, but these are much smaller in absolute terms. The number of surviving engineers and scientists is highly uncertain, but even so, several hundred thousand people with meaningful technical expertise would plausibly survive—a substantial resource that becomes far more valuable when surviving societies can communicate and coordinate across regions.

Existing industrial infrastructure in the Southern Hemisphere would partially survive. Brazil’s steel industry, Australia’s mining operations, South Africa’s manufacturing base—all damaged and degraded by supply chain collapse but not destroyed. Maintaining and repairing existing infrastructure is fundamentally easier than building new. The renewable energy assets described in Part III—Brazil’s hydroelectric dams, Australia’s solar installations, East Africa’s geothermal plants—are unlikely military targets and require no fuel supply chain to continue operating.

[SPECULATIVE] The recovery timeline that follows is necessarily speculative and not derived from formal models, but is informed by historical industrialization rates. One critical variable is connectivity: a multi-continental network of hundreds of millions of survivors, sharing expertise and resources, could recover far faster than an isolated population of a few million starting from a near-zero industrial base—potentially by centuries.

5.2 Trade and Reconnection Between Regions

Trade between surviving regions would rely on multiple modes of transport, each suited to different distances and timeframes. In the early years, existing aircraft and rationed fuel reserves could sustain limited but critical air links between regions, while ground transport—electric vehicles, and conventional trucks and rail using stockpiled fuel—would handle overland trade within and between neighbouring countries. But air and fuel-dependent transport would decline over roughly 5–10 years as fuel stocks and maintainable airframes are exhausted. For long-distance maritime trade, sailing vessels would be the primary mode from the outset and the only reliable option over the longer term. This is not as limiting as it might appear—sailing ships maintained global trade networks for over 400 years before steam power, and Polynesian navigators crossed the Pacific in sailing canoes centuries before European contact. A Tasman Sea crossing (approximately 2,000 km) takes 1–2 weeks under sail versus 3 days for a powered vessel. NZ’s strong maritime tradition and access to timber for shipbuilding position it well for early maritime reconnection.

But sailing vessels are dramatically smaller than modern powered cargo ships, which limits not just speed but the total volume of goods that can move between regions. A return to sail means a return to trading only what truly matters: seed stocks, medicines, critical machine parts, technical documents, and specialists themselves. Bulk commodity trade of the kind that underpins modern economies is simply not feasible under sail. This is a real constraint, and it means that surviving regions need to be substantially self-sufficient in food, energy, and basic materials—maritime trade supplements local production rather than replacing it.

Battery-powered cargo ships are not feasible—the energy density requirements for trans-oceanic voyages vastly exceed what any manufacturable battery chemistry could provide. Synthetic fuels (hydrogen, ammonia, methanol produced from renewable electricity and atmospheric CO₂) could eventually power larger ship engines, but the chemical engineering required is at minimum 1950s–1960s level. Sailing ships are the honest answer for maritime transport for decades or longer.
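
The infeasibility of battery shipping follows from simple arithmetic. Every figure below is an illustrative assumption for a modest cargo vessel on a New Zealand to South America run, not measured data:

```python
# Rough feasibility check for a battery-powered cargo ship.
distance_km = 10_000   # e.g. New Zealand to South America (assumption)
speed_km_h = 20        # about 11 knots (assumption)
propulsion_kw = 2_000  # continuous propulsion power for a small cargo ship (assumption)
pack_wh_per_kg = 250   # optimistic modern lithium-ion pack density (assumption)

voyage_hours = distance_km / speed_km_h
voyage_energy_kwh = propulsion_kw * voyage_hours
battery_tonnes = voyage_energy_kwh * 1_000 / pack_wh_per_kg / 1_000

print(f"voyage energy: {voyage_energy_kwh:,.0f} kWh")
print(f"battery mass needed: {battery_tonnes:,.0f} tonnes")
```

Under these assumptions the batteries alone would weigh thousands of tonnes, rivalling the vessel's entire cargo capacity, and that is before considering that a recovering society could not manufacture lithium cells at all.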

5.3 Rail: The Recovery Backbone

Electric rail is likely the most valuable infrastructure investment for a recovering civilisation. Rail is approximately 10 times more energy-efficient than road transport per tonne-kilometer. The technology is mature—electric railways were widespread by 1900—and requires only steel rails, copper wire, electric motors, and a power source, all achievable with moderate industrial capability.
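
The efficiency advantage has a first-principles basis in rolling resistance. A minimal sketch, using typical textbook resistance coefficients (assumptions, not values from the cited sources):

```python
G = 9.81  # m/s^2, gravitational acceleration

def rolling_kwh_per_tonne_km(crr: float) -> float:
    """Energy spent against rolling resistance, per tonne per kilometre."""
    force_n = crr * 1_000 * G        # resisting force on one tonne
    joules_per_km = force_n * 1_000  # work done over one kilometre
    return joules_per_km / 3.6e6     # convert J to kWh

rail = rolling_kwh_per_tonne_km(0.002)  # steel wheel on steel rail (typical)
road = rolling_kwh_per_tonne_km(0.010)  # truck tyre on asphalt (typical)

print(f"rail: {rail:.4f} kWh/tonne-km")
print(f"road: {road:.4f} kWh/tonne-km ({road / rail:.0f}x rail)")
```

Rolling resistance alone gives rail roughly a fivefold advantage; lower aerodynamic drag per tonne at freight-train scale and regenerative braking on electric rail plausibly widen the real-world gap toward the tenfold figure cited above.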

NZ’s existing rail network, while degrading, provides cleared and graded corridors that persist for decades. Rebuilding rail along existing routes is significantly easier than building from scratch. The NZ national grid’s 11,000 km of transmission lines46 also provide a template and, potentially, material for electrified rail reconstruction.

5.4 Steel and Construction Materials Without Fossil Fuels

Modern steel production is heavily dependent on coking coal, which serves both as fuel and as a chemical reductant—the agent that strips oxygen from iron ore. However, alternative pathways exist. Electric arc furnaces, which use electricity to melt steel (typically scrap metal), already account for approximately 25% of global production and have been in use since the early 1900s. For primary steelmaking from ore, hydrogen-based direct reduction (H-DRI) is being piloted by ventures such as HYBRIT (a joint project of SSAB, LKAB, and Vattenfall), replacing coal’s chemical role with hydrogen produced from electrolysis.

NZ specifically has iron sand deposits on the North Island’s west coast (titanomagnetite), already used for steelmaking at the NZ Steel Glenbrook operation. In the near term, NZ’s existing stock of steel—in derelict vehicles, disused machinery, shipping containers, and structures repurposed as needs change—provides ample feedstock for electric arc furnaces. Longer-term, hydrogen-DRI using NZ’s renewable electricity becomes feasible as industrial capability matures.

For construction, timber—which NZ grows abundantly—substitutes for many applications. Lime mortar with volcanic ash (NZ has both limestone and volcanic geology) replicates Roman concrete technology that has lasted 2,000 years. Portland cement production in electric kilns is achievable once industrial capacity supports it.

5.5 The Long Arc: Recovery Without Fossil Fuels

The original Industrial Revolution was, in a fundamental sense, a fossil fuel revolution. Cheap, abundant, energy-dense coal and oil enabled rapid scaling of every industrial process—from smelting to transport to chemical synthesis. That shortcut, as discussed in Part III, is no longer available. The easy deposits are exhausted, and the remaining reserves require the very industrial sophistication that a recovering society lacks. This means the recovery trajectory is not a replay of the 18th–20th centuries but something qualitatively different: a slow, electricity-first rebuilding of industrial capability.

The analogy to the Industrial Revolution is instructive in another way. The original industrialization, from Newcomen’s first steam engine in 1712 to widespread electrification in the 1920s, took roughly two centuries—and that was with easy fossil fuels, no prior catastrophe, and growing populations providing expanding labor and markets. A post-catastrophe recovery, starting from a much higher knowledge base but a much smaller population and degraded infrastructure, faces a different set of constraints. The knowledge of how to build things exists (especially if preserved through the AI systems described in Part IV and the printed documentation they generate), but the workforce, supply chains, and economies of scale needed to actually build them must be painstakingly reconstructed.

Given all of this, what might a realistic recovery timeline look like? This brings us deep into speculative territory, but this paper is, after all, a thought experiment at its core. With that in mind, let’s imagine how recovery might unfold:

Decades 1–3: Stabilization and survival. The immediate priority is food production and maintaining existing infrastructure. Societies reorganise around available energy—hydro, geothermal, wind—and learn to operate without fossil fuel inputs. Electric vehicles serve as improvised farm equipment and local transport while batteries last. Basic manufacturing resumes using electric arc furnaces for steel, electric kilns for ceramics and cement, and electrically-driven machine tools. Maritime trade under sail reconnects surviving regions. In terms of manufacturing capability, this period is roughly analogous to the 1900s–1920s, powered by renewables rather than coal—but with one critical difference: hundreds of millions of surviving personal devices, pre-positioned USB knowledge libraries, and regional AI inference systems give this society access to information and expert guidance that no comparable civilisation in history has possessed. Aggressive pre-war stockpiling of computing hardware could extend this digital advantage for 20–30 years, bridging the gap to locally manufactured replacements.
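The plausibility of that 20–30 year bridge can be illustrated with a toy attrition model. The sketch below is our own illustration, not a result from this paper; the fleet size, the 8% annual failure rate, and the minimum-viable threshold are all assumptions:

```python
# Toy model: how long does a device fleet stay above a minimum viable
# size, with and without a stockpile of spares? All numbers below are
# illustrative assumptions, not estimates from this paper.

def years_of_service(fleet, spares, annual_failure_rate, minimum):
    """Years until the working fleet first drops below `minimum`,
    replacing failed units from `spares` while any remain."""
    years = 0
    while fleet >= minimum and years < 200:
        failures = fleet * annual_failure_rate
        replaced = min(failures, spares)
        spares -= replaced
        fleet += replaced - failures
        years += 1
    return years

# Assumed: 1,000 working devices, 8%/yr failures, 500 needed to be useful.
no_stockpile = years_of_service(1000, 0, 0.08, 500)       # -> 9 years
with_stockpile = years_of_service(1000, 2000, 0.08, 500)  # -> 34 years
print(no_stockpile, with_stockpile)
```

Under these assumptions, a stockpile of two spares per working device roughly quadruples the usable life of the fleet, which is the qualitative point: spares are cheap to accumulate before a catastrophe and irreplaceable for decades after one.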

Decades 3–7: Reconstruction and industrial expansion. With food security established and basic industry functioning, societies can invest in expanding their industrial base. Hydrogen production enables portable energy for transport and remote agriculture. New electrical generation capacity is built—additional hydro, geothermal, and especially wind and solar, which require less specialized construction than large dams. The chemical industry slowly rebuilds: ammonia production for fertilizer via the Haber-Bosch process (the century-old method of synthesizing ammonia from atmospheric nitrogen and hydrogen under high pressure, which underpins modern agriculture), basic plastics, pharmaceuticals. This phase roughly parallels the 1940s–1960s in capability, though the specific technologies differ significantly.
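The electricity cost of that fertilizer pathway can be sketched from the reaction stoichiometry alone. A minimal back-of-envelope follows, in which the 50 kWh/kg electrolyser figure is our own rough assumption rather than a number from this paper:

```python
# N2 + 3H2 -> 2NH3: electricity needed per tonne of ammonia if the
# hydrogen comes from electrolysis. Electrolyser energy use is an
# assumed round number for illustration.

M_NH3 = 17.031          # g/mol, ammonia
M_H2 = 2.016            # g/mol, hydrogen
KWH_PER_KG_H2 = 50.0    # assumed electrolyser consumption

def electrolysis_kwh_per_tonne_nh3():
    mol_nh3 = 1_000_000 / M_NH3   # moles of NH3 in one tonne
    mol_h2 = 1.5 * mol_nh3        # 3 mol H2 per 2 mol NH3
    kg_h2 = mol_h2 * M_H2 / 1000  # ~178 kg of H2 per tonne NH3
    return kg_h2 * KWH_PER_KG_H2

print(round(electrolysis_kwh_per_tonne_nh3()))  # ~8,900 kWh per tonne
```

At roughly 9 MWh per tonne before synthesis-loop compression, even a modest renewable surplus translates into agriculturally meaningful fertilizer output, which is part of why ammonia can appear this early in the reconstruction sequence.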

Decades 7–15: Approaching modern capability. This is the phase where the recovery trajectory diverges most dramatically from the original industrialization. Rebuilding semiconductor fabrication, precision optics, advanced materials science, and the thousands of specialized sub-industries that underpin modern technology requires not just knowledge but enormous economies of scale and workforce specialization. A population of several hundred million, even well-organized and well-informed, simply cannot support the degree of specialization that 8 billion people enabled. The path to modern computing, telecommunications, and advanced manufacturing is measured in generations, not decades.

[SPECULATIVE] The timeline above is optimistic and assumes substantial pre-war preparation, surviving industrial infrastructure in multiple regions, and effective coordination between surviving populations. Without preparation, recovery stalls entirely. The longer a society remains in a degraded state, the more existing infrastructure deteriorates, the more institutional knowledge is lost, and the more likely a downward spiral into a permanent pre-industrial or even pre-agricultural condition becomes. The key variable is not knowledge—which can be preserved—but population size and the resulting capacity for economic specialization. A world of 500 million people simply cannot support as many specialized roles as a world of 8 billion, regardless of what those people know.

The critical implication is that recovery is not a sprint but a marathon—and the first decades matter enormously because they determine whether surviving societies stabilize at a level from which further progress is possible, or enter a spiral of infrastructure degradation from which recovery becomes progressively harder. The investments discussed in Part VI are designed to ensure that the stabilization phase succeeds.

Part VI: The Case for Pre-War Resilience Investment

A central theme of this paper is that achievable investments made before a catastrophe—substantial in absolute terms, but plausible given the stakes—could save billions of additional lives and compress recovery timelines by a century or more. The interventions are extensions of existing technologies and programs. Many carry substantial co-benefits for climate adaptation, energy independence, food security, and economic development even if nuclear war never occurs. And critically, each can be resourced through some combination of commercial profit motive, philanthropic commitment, and state action—meaning that the question is not whether these preparations are affordable, but whether we choose to prioritize them. It is also worth emphasising that these investments are not zero-sum. The survivor estimates in this paper reflect a world with minimal preparation; sufficient investment in food resilience, energy infrastructure, and knowledge preservation does not merely improve outcomes for a fixed number of survivors—it dramatically increases the number of people and nations that survive in the first place. Every region that maintains food security and functioning infrastructure through the crisis is a region that contributes to global recovery rather than drawing on it.

6.1 Food Resilience Preparations

Food resilience preparations include pre-positioned seed banks of cold-tolerant crop varieties adapted to nuclear winter conditions; seaweed farming infrastructure and training, particularly in tropical nations, enabling rapid scale-up from current production to millions of tonnes; and national rationing plans and food distribution protocols. ALLFED estimates that planned deployment of resilient food solutions could maintain caloric sufficiency for the majority of the global population even under severe nuclear winter conditions.47 As discussed in Part I, the survivor estimates in this paper reflect a world that has made little food resilience investment. Food resilience is the area of pre-war preparation where investment could save the most lives—the difference between a world that has invested seriously in resilient food systems and one that has not could be measured in billions of survivors. A comprehensive treatment of food resilience strategies is beyond the scope of this paper, and the work being done by ALLFED and allied researchers deserves a far more detailed examination than we can provide here. What we can say is that the research base exists, the interventions are technically feasible, and the primary barrier is funding and political will rather than knowledge.

6.2 Energy Infrastructure

Expanding renewable electricity generation worldwide is perhaps the most commercially straightforward resilience investment because it aligns directly with existing climate policy and market incentives. Renewable generation is already cost-competitive with or cheaper than fossil fuel alternatives in many regions, and global energy demand continues to grow as economies develop and electrify—there is essentially insatiable demand for new clean generation capacity. Every dollar spent on hydroelectric, geothermal, wind, or solar capacity serves decarbonisation and economic development goals today while building the energy foundation that post-catastrophe recovery would depend on—and the more widely distributed that capacity is, the more resilient the overall system becomes. Stockpiling critical grid maintenance components—transformers, switchgear, high-voltage cable—is less commercially exciting but arguably more important per dollar spent, since a grid that cannot be maintained is a grid that fails regardless of generation capacity. Developing tethered electric farm equipment and hydrogen production infrastructure represents a further category of dual-use investment: technologies that serve agricultural decarbonisation in peacetime and become essential survival infrastructure in a catastrophe. The hydrogen tractor prototypes discussed in Part III are being developed by major manufacturers for climate-driven commercial reasons; accelerating that work has obvious resilience co-benefits.

6.3 Knowledge Preservation

The layered knowledge distribution architecture described in Part IV—from the central AI inference facility down through regional inference systems, pre-positioned USB libraries, and the hundreds of millions of personal devices that would survive in non-targeted regions—represents an extraordinarily cost-effective resilience investment. The central facility has a clear commercial revenue model through data sovereignty demand, letting the resilience capability ride free on commercially justified infrastructure. The distributed layers span a wide cost range. At the lowest tier, millions of pre-loaded USB drives and storage devices cost almost nothing per unit. Small inference devices—compact systems running capable quantised models—could be pre-positioned in thousands of communities for a few thousand dollars each. Larger regional inference installations, capable of serving as substantive AI assistants for entire districts, require significantly more investment but could be deployed in dozens to hundreds of locations, potentially leveraging existing data centre infrastructure in multiple countries. Comprehensive printed technical libraries in multiple languages complement the digital capability with information that requires no power source to access. Training programmes in essential pre-industrial and transitional skills—blacksmithing, sailing, animal husbandry, basic electrical engineering—preserve practical knowledge that exists today in ageing populations and would otherwise be lost within a generation, while serving cultural preservation and educational purposes that justify their existence independent of catastrophe planning.
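A cost sketch for the lowest tier makes the claim concrete. Every figure here (library size, drive capacity, bulk unit price) is an assumption of ours for illustration, not a number from this paper:

```python
# Back-of-envelope cost of the USB-library tier, under assumed prices.

LIBRARY_GB = 100        # assumed size of a compressed offline technical library
DRIVE_GB = 256          # assumed capacity of each flash drive
DRIVE_COST_USD = 15.0   # assumed bulk price per drive, loaded and packaged

def tier_cost(n_drives, unit_cost=DRIVE_COST_USD):
    """Total cost of pre-positioning n_drives pre-loaded drives."""
    assert LIBRARY_GB <= DRIVE_GB, "library must fit on one drive"
    return n_drives * unit_cost

print(f"${tier_cost(1_000_000):,.0f}")  # prints $15,000,000
```

Even with generous margins on these assumptions, pre-positioning a million loaded drives lands in the low tens of millions of dollars, cheap relative to every other layer of the architecture.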

6.4 Transport and Communication Capability

Pre-positioning sailing vessels, or the shipbuilding materials and expertise to construct them, in maritime nations worldwide provides the foundation for long-distance bulk transport of goods, equipment, and agricultural inputs. HF radio communication equipment and trained operators, ideally networked through existing amateur radio communities that already maintain global communication capability independent of the internet, ensure baseline connectivity between all surviving regions. Strategic reserves of aviation fuel, maintained and stabilised for emergency use, could sustain critical air transport between surviving regions during the first years after a catastrophe. Commercial and general aviation aircraft in non-targeted countries would largely survive, and with stored fuel and cannibalisation of grounded aircraft for spare parts, meaningful air capability could persist for roughly 5–10 years—long enough to deliver medical supplies, specialist personnel, seed stocks, critical spare parts, and computing hardware during the period when rapid coordination matters most. As air capability declines, sail takes over for long-distance transport, while the electric vehicles and ground infrastructure discussed in Part III handle distribution within regions. These investments enable rapid reconnection between surviving populations, accelerating the transition from isolated survival to cooperative recovery. The cost is modest—HF radio equipment is inexpensive, aviation fuel reserves require only proper storage and rotation, and maintaining a distributed network of trained operators is largely a matter of institutional support for communities that already exist. Sailing vessel construction and maintenance similarly draws on established traditions and could be supported through maritime heritage programs that serve recreational and educational purposes in peacetime.

6.5 Resourcing the Work

A recurring theme across these interventions is that most of them are not pure insurance policies—they generate value in peacetime. This matters because it means they can be funded through three complementary channels, each with different strengths: commercial investment driven by profit, philanthropic funding driven by mission, and state action driven by strategic interest. The most effective approach will combine all three.

The commercial case is strongest for energy infrastructure and AI inference. Renewable electricity generation is already attracting substantial private capital worldwide for climate reasons; the resilience benefit comes at essentially zero marginal cost. The sovereign AI inference facility described in Part IV has a clear revenue model serving data sovereignty demand from corporations and governments worldwide. Hydrogen fuel cell development for agriculture is being pursued by John Deere, AGCO, Kubota, and others for straightforward commercial reasons. In each case, the catastrophe resilience value is a co-benefit of investments that make economic sense on their own terms. The role of policy and philanthropy here is not to fund these technologies directly but to accelerate their deployment in configurations that maximize resilience—distributed rather than concentrated infrastructure, open rather than proprietary designs, and broad geographic coverage that ensures no single region’s destruction eliminates the capability entirely.

Philanthropic funding is most appropriate for the interventions with the weakest commercial case but the highest humanitarian impact. Some of these are remarkably inexpensive: maintaining HF radio networks, preserving traditional skills through training programmes, and pre-loading USB drives and portable storage with comprehensive technical libraries cost very little relative to their value. Pre-positioning small AI inference devices—compact systems costing a few thousand dollars each—across regions worldwide could place capable knowledge bases in thousands of communities for modest total investment. Medium-scale regional installations are more substantial, ranging into tens of millions of dollars, but remain fundable relative to the stakes. At the higher end, food resilience research and deployment, construction of a sailing fleet for post-catastrophe trade, and establishing larger regional inference installations represent significant but fundable commitments. A number of prominent philanthropists and foundations have already demonstrated interest in existential risk reduction—Open Philanthropy, the Survival and Flourishing Fund, and individual donors including several of the world’s wealthiest individuals have directed significant resources toward catastrophic risk research. The interventions described in this paper complement the vital theoretical and analytical work of the existential risk community by offering something that field has often lacked: concrete, implementable projects with measurable outcomes that can proceed in parallel with ongoing research. For donors looking to move from funding research about risks to funding preparation against them, this is a natural next step.

State action is essential for the interventions that require institutional authority: national rationing and food distribution planning, strategic stockpiling of grid components, regulatory frameworks that support resilient infrastructure, and international coordination agreements. Governments are also natural partners for the larger-scale investments that exceed philanthropic budgets—food resilience research and deployment, strategic fuel reserves, and national-level inference infrastructure—particularly where these align with existing policy goals around food security, energy and technology independence, and disaster preparedness. These preparations are valuable for any nation—the same plans and stockpiles that would matter in a nuclear scenario also apply to pandemics, climate disruptions, and other systemic shocks. New Zealand’s government has a particular incentive here, given the country’s unique position as a potential recovery hub—and investments in renewable energy, food security, and technological self-sufficiency align with policy objectives that any New Zealand government would recognise as valuable regardless of catastrophe planning.

There is a broader point worth making about political will. Catastrophic risk preparation is one of the rare areas where the usual ideological divisions are largely irrelevant. Concern about civilizational resilience spans the political spectrum—from those motivated by national security and self-sufficiency to those driven by humanitarian obligation and global cooperation, from technologists who see AI and renewable energy as civilization’s best tools to traditionalists who value the preservation of practical skills and local community resilience. The specific interventions described in this paper will appeal to different constituencies for different reasons, and that is a feature, not a bug. What matters is that the work gets done. The stakes—billions of lives, the continuity of human civilization—are simply too large to allow disagreements on other matters to prevent cooperation on this one. A donor who disagrees profoundly with another donor’s politics can still fund complementary pieces of the same resilience infrastructure, and a government that distrusts another government’s motives can still participate in coordination agreements that serve both nations’ survival.

Finally, many of these investments have applications beyond terrestrial catastrophe resilience. The challenge of building a self-sufficient technological society in a harsh environment with limited population and no access to external supply chains is, in essence, the same challenge that faces any serious attempt at permanent human settlement beyond Earth. The technologies discussed in this paper—renewable energy systems designed for autonomy, food production under adverse conditions, knowledge preservation for small populations, hydrogen-based energy storage, electric rail, localized manufacturing—are precisely the capabilities that a lunar or Martian colony would require. Research and development in catastrophe resilience therefore contributes directly to the broader human project of becoming a multi-planetary species, and vice versa. Both problems demand the same core competency: the ability to sustain a complex technological civilisation without relying on a global supply chain of eight billion people.

Conclusion: Recovery Is Possible—But Not Guaranteed

A full-scale nuclear war would be the worst catastrophe in human history. But it would not end the human story. The difference between a world that recovers complex technological civilization within a century or two and one that takes a millennium—or one that never recovers at all, spiraling into permanent regression until our species joins the long list of evolutionary dead ends—may come down to preparations made today.

New Zealand occupies a unique position in this calculus. Not because it would be the sole survivor—hundreds of millions would survive elsewhere—but because its combination of geographic isolation, food security, renewable energy infrastructure, social cohesion, and potential to host preserved knowledge systems makes it the most likely candidate to maintain a stable, functioning node of technological society through the catastrophe. NZ’s role would not be to rebuild civilization alone, but to serve as a hub of stability and knowledge that helps every other surviving region recover faster.

The investments required are wide-ranging—seed banks, renewable energy expansion, tethered electric farm equipment development, pre-positioned USB knowledge libraries and stockpiled computing hardware, AI inference capability at multiple scales, HF radio communication networks, and the sailing and transport infrastructure to physically deliver knowledge and equipment where it is needed—but they are achievable, and as discussed in Part VI, each can be resourced through some combination of commercial profit motive, philanthropic commitment, and state action. None requires exotic technology. Most generate substantial peacetime value. They are, in the language of catastrophic risk management, “no-regret” preparations.

Without deliberate preparation, the most likely outcome of a full-scale nuclear war is not slow recovery—it is no recovery. A gradual slide into permanent pre-industrial existence, the loss of everything humanity has built, and the eventual extinction of a species that may have been the universe’s only attempt at consciousness. The preparations described in this paper are not optimizations. They are the difference between that outcome and a future worth having.

Endnotes


  1. Xia, L., Robock, A., Scherrer, K., et al. (2022). “Global food insecurity and famine from reduced crop, marine fishery and livestock production due to climate disruption from nuclear war soot injection.” Nature Food 3(8): 586–596. doi.org/10.1038/s43016-022-00573-0↩︎

  2. Coupe, J., Bardeen, C.G., Robock, A., & Toon, O.B. (2019). “Nuclear winter responses to global nuclear war.” J. Geophys. Res. Atmos. 124: 8522–8543. doi.org/10.1029/2019JD030509↩︎

  3. Ibid.; Turco, R.P., Toon, O.B., Ackerman, T.P., Pollack, J.B. & Sagan, C. (1983). “Nuclear winter: global consequences of multiple nuclear explosions.” Science 222: 1283–1292. doi.org/10.1126/science.222.4630.1283↩︎

  4. Mills, M.J., Toon, O.B., Lee-Taylor, J., & Robock, A. (2014). “Multidecadal global cooling and unprecedented ozone loss following a regional nuclear conflict.” Earth’s Future 2: 161–176. doi.org/10.1002/2013EF000205↩︎

  5. Shi, Z. et al. (2025). Penn State study modeling corn production under nuclear winter scenarios. Published in Environmental Research Letters. See: “Simulating the unthinkable: Models show nuclear winter food production plunge,” Penn State University news release. psu.edu↩︎

  6. Xia et al. (2022), op. cit., note 1. doi.org/10.1038/s43016-022-00573-0↩︎

  7. Ibid. The study found that redirecting livestock feed and eliminating waste had “limited impact on increasing available calories” under large soot injection scenarios. doi.org/10.1038/s43016-022-00573-0↩︎

  8. Boyd, M. & Wilson, N. (2023). “Island refuges for surviving nuclear winter and other abrupt sunlight-reducing catastrophes.” Risk Analysis 43(9): 1824–1842. doi.org/10.1111/risa.14072↩︎

  9. NZ Ministry of Business, Innovation & Employment (2025). Energy in New Zealand 2025. 85.5% renewable electricity generation in 2024; approximately 43,900 GWh total generation. mbie.govt.nz↩︎

  10. Boyd, M. & Wilson, N. (2022). “Sustained Resilience: The impact of nuclear war on New Zealand and how to mitigate catastrophe.” Adapt Research Ltd / University of Otago. adaptresearchwriting.com↩︎

  11. ALLFED (Alliance to Feed the Earth in Disasters) is based at the University of Canterbury, Christchurch, New Zealand. Director: Dr. David Denkenberger, Department of Mechanical Engineering. allfed.info↩︎

  12. Denkenberger, D. et al. (2024). “Resilient foods for preventing global famine: a review of food supply interventions for global catastrophic food shocks including nuclear winter and infrastructure collapse.” Critical Reviews in Food Science and Nutrition. doi.org/10.1080/10408398.2024.2431207↩︎

  13. Jehn, F.U., Dingal, F.J., Mill, A., et al. (2024). “Seaweed as a Resilient Food Solution After a Nuclear War.” Earth’s Future (AGU). doi.org/10.1029/2023EF003710↩︎

  14. Ibid. “The seaweed farm design in this study assumes a very low tech approach, which mainly consists of ropes that are kept in place by anchors and buoys.” doi.org/10.1029/2023EF003710↩︎

  15. Ibid. “Areas that are currently seen as resilient in a nuclear winter like New Zealand might not be able to further improve their resilience with seaweed, as their coastal waters are either too cold or nutrient poor.” doi.org/10.1029/2023EF003710↩︎

  16. Denkenberger et al. (2024), op. cit., note 12. ALLFED Low-Tech Solutions page: allfed.info/resilient-foods↩︎

  17. Shi et al. (2025), op. cit., note 5. psu.edu↩︎

  18. Solar Foods, “Solein transforms ancient microbes into the future of food.” Factory 01 began commercial production in Finland in 2024, producing up to 160 tonnes of Solein protein annually using renewable electricity. solarfoods.com/science↩︎

  19. Throup et al. (2022), cellulosic sugar from agricultural residues. See also Denkenberger et al., “Feeding Everyone No Matter What” (2014), which first proposed repurposing paper mills for cellulosic sugar production in catastrophe scenarios. ALLFED’s integrated model includes cellulosic sugar as a core resilient food source. github.com/allfed/allfed-integrated-model↩︎

  20. Iceland produces approximately 70% of its tomatoes and nearly 100% of its cucumbers in geothermally heated greenhouses. The first geothermally heated greenhouse in Iceland was built in 1924. See Icelandic Agricultural Advisory Centre and Orka Náttúrunnar, “Then and Now: Greenhouses.” on.is/en/geothermal-exhibition/geothermal-culture/then-and-now-greenhouses↩︎

  21. National Emergency Supply Agency (NESA), Finland. Finland maintains strategic grain reserves of 6–8.5 months of normal population consumption, increased from the statutory minimum of six months in 2022 in response to the changed security environment. Sweden, Norway, and Poland have similarly expanded strategic reserves. huoltovarmuuskeskus.fi↩︎

  22. Rivers et al. (2024), “Food system adaptation and maintaining trade could mitigate global famine in abrupt sunlight reduction scenarios,” Global Food Security. This paper models the combined effect of resilient foods (including cellulosic sugar, methane SCP, greenhouse crops, and seaweed) and finds that maintaining food trade combined with resilient food deployment could potentially feed the entire global population even in a severe nuclear winter scenario. sciencedirect.com/science/article/abs/pii/S2211912424000695↩︎

  23. Boyd & Wilson (2022), op. cit., note 10. adaptresearchwriting.com↩︎

  24. Watson, J. “Farm mechanisation—Machines powered by humans and animals.” Te Ara—the Encyclopedia of New Zealand. teara.govt.nz/en/farm-mechanisation/page-2↩︎

  25. Meyer, E. “Horses—Horses and farming.” Te Ara—the Encyclopedia of New Zealand. “By the mid-1950s, farmers’ transition from horse power to tractor power was almost complete.” teara.govt.nz/en/horses↩︎

  26. Rare Horse Society of New Zealand. “Clydesdale.” “New Zealand currently has 750 of the world’s 5000 Clydesdales… In 2025 only 44 registered foals are due born.” rarehorsesocietynz.org/clydesdale↩︎

  27. RNZ (2024). “A love for Clydesdales—‘they just become your mates.’” Interview with Steve, President of the Clydesdale Horse Society of NZ. rnz.co.nz↩︎

  28. Meyer (Te Ara), op. cit., note 25. “Horses ate as much as eight men or four sheep.” teara.govt.nz/en/horses↩︎

  29. MBIE (2025), op. cit., note 9. Over 5,000 MW installed hydro capacity; 85.5% renewable in 2024. mbie.govt.nz↩︎

  30. Transpower New Zealand. “Our Grid.” The national grid comprises 10,969 kilometres route-length of high-voltage transmission lines and 178 substations. transpower.co.nz↩︎

  31. Green, W., Cairns, T., & Wright, J. (1987). New Zealand After Nuclear War. Wellington: New Zealand Planning Council. Referenced in Boyd & Wilson (2022). mcguinnessinstitute.org [PDF]↩︎

  32. John Deere GridCON research project. Reported in Farmers Weekly (2019), The Western Producer (2019), and Future Farming (2019). “The first vehicle to be fully electric, permanently cable-powered and capable of fully autonomous operation in the field.” futurefarming.com↩︎

  33. OXE-E tethered electric tractor. Reported in Future Farming (September 2023). futurefarming.com↩︎

  34. John Deere GridCON, op. cit., note 32. The Western Producer: “Researchers in Germany have modelled the pairing of additional units in the field in a daisy-chain configuration.” futurefarming.com↩︎

  35. Farmers Weekly (UK), “Machinery Milestones: Electric tractor power” (June 2019). Major McDowall’s 1925 tractor and Zimmermann’s 1894 electric ploughs. fwi.co.uk↩︎

  36. Ibid. “Russia’s electric tractor programme may have been less ambitious than the publicity suggested and… had been abandoned in the early 1950s due to technical problems.” fwi.co.uk↩︎

  37. Richmond Fed, “Electrifying Rural America” (2020). “Utilities estimated that it would cost as much as $2,000 per mile—more than $30,000 in 2020 dollars—to build transmission lines out to farms.” richmondfed.org↩︎

  38. NW Council, “Rural Electrification.” “By the early 1970s nearly all farms in the United States had electricity.” nwcouncil.org↩︎

  39. ScienceDirect, “Rural Electrification” overview. “In places where rural services were nationalized or subsidized, such as in New Zealand, the countryside was rapidly brought into the grid.” sciencedirect.com↩︎

  40. ESMAP/World Bank (2000). Reducing the Cost of Grid Extension for Rural Electrification. ESM227. Cost benchmarks of $4,000–$20,000/km for MV distribution lines. worldbank.org↩︎

  41. Farm Progress (December 2025). “Concept tractor turns hydrogen into horsepower.” AGCO’s Fendt Helios H2 tractor: 100 kW fuel cell, 25 kW battery, ~135 hp. farmprogress.com↩︎

  42. Kubota hydrogen fuel cell tractor prototype. Reported in Future Farming (October 2025) and Power Systems Research (June 2024). ~60 hp, 7.8 kg hydrogen, ~4 hours operation. futurefarming.com↩︎

  43. Fuel Cells Works (March 2025). “Massey Ferguson Targets 2026 Launch for Hydrogen-Powered Tractor.” €4.4 million government funding. fuelcellsworks.com↩︎

  44. H2 Dual Power, Netherlands. Based on New Holland T5.140, hydrogen-diesel dual fuel system. h2dualpower.com↩︎

  45. Boyd & Wilson (2023), quoted in Otago University press release (February 2023). sciencedaily.com↩︎

  46. Transpower New Zealand, op. cit., note 30. transpower.co.nz↩︎

  47. ALLFED, “Low-tech solutions.” “Global seaweed production could be ramped up in less than a year to provide enough calories to feed the entire global population.” allfed.info/resilient-foods↩︎