Warning signs


Climate change presents a great and growing threat to lives and livelihoods the world over. Its impact on flood risk is notoriously difficult to quantify, and in many cases the projected indicators of change fall within the range of uncertainty inherent in modelling the risk under present-day conditions.

While we can be fairly confident that the intensity and magnitude of flooding will increase in more places than they decrease, the overwhelming driver of both historical and expected future increases in flood losses is development patterns.

The US government’s own models of future habitation suggest people will be living in “riskier” areas. This behaviour is seen globally and is a consequence of urbanisation: people are pushed further into marginal areas because the higher ground – less exposed and less vulnerable to flooding – is already occupied.

As populations in flood-prone areas continue to grow – regardless of the impact of climate change – so too will flood losses.

Worsening flood risk 

Using US government projections of development patterns, research by Fathom, published in 2018 in Environmental Research Letters, found US flood exposure could almost double by the end of the century. It also found that population growth in risky areas will not only continue but significantly accelerate. These findings suggest the already crippling problem of flood risk will simply get worse.

With this in mind, research has shown the tools with which flood risk has historically been managed in the US substantially underestimate the scale of the problem.  

Federal Emergency Management Agency (Fema) flood maps of the so-called 100-year floodplain – the area with at least a 1 percent chance of flooding in any given year or, put more simply, a greater than 26 percent chance of being flooded at least once during a 30-year mortgage – are the principal source of information on US flood risk.
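That 26 percent figure follows directly from the annual probability. A minimal back-of-the-envelope sketch, assuming a constant 1 percent annual chance of flooding and independence between years:

```python
# Chance of at least one "100-year" flood over a 30-year mortgage,
# assuming a constant 1% annual exceedance probability and independent years.
annual_chance = 0.01
mortgage_years = 30

chance_of_flooding = 1 - (1 - annual_chance) ** mortgage_years
print(f"{chance_of_flooding:.1%}")  # roughly 26%
```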

These maps are generated using a patchwork of local-scale, engineering-grade flood inundation models which cover only ~60 percent of the land area of the lower 48 states. There is no more accurate way to understand flood risk locally than with such a model, which incorporates granular, locally surveyed information on channel bathymetry, floodplain topography and other important features such as levees.

However, this method’s hunger for local data and its requirement for manual operation by a skilled practitioner render it financially (and practically) unsuitable for application to every US river. A complete view of risk is sacrificed on the altar of local point-precision.

Thus, by virtue of their sparse spatial coverage, particularly on smaller rivers (even within the ~60 percent of land area nominally covered), Fema maps underestimate the number of people at risk of flooding by two thirds.

Top-down modelling 

Fema estimates that 13 million people are potentially exposed to a 100-year flood, while a spatially complete flood model built using an alternative modelling philosophy suggests the real figure is closer to 41 million.  

This philosophy can be conceptualised as “top-down” (in contrast to Fema’s “bottom-up”), where seamless, remotely sensed information on land elevation, river location and other spatial and hydrologic data are integrated into an automated model-build routine which simulates flooding everywhere – with no gaps. 

A study by Fathom, published in 2017 in Water Resources Research, found that, where Fema maps do exist, they give similar realisations of flood risk to the “top-down” model developed by the authors. That is, a model built automatically with national-scale data can replicate locally built Fema maps within error. The caveat “where Fema maps do exist” is crucial, since two models which are locally similar nonetheless produce estimates of aggregate risk that differ by a factor of more than three (13 million versus 41 million people exposed).

More than a quarter of all historic insurance claims have been made outside of Fema flood zones and, in hurricane-prone areas such as the Texas Gulf coast, this figure can be as high as three quarters. Around 60,000 people are situated in Fema floodplains in the wider Houston area, yet hundreds of thousands were inundated there during Hurricane Harvey in 2017.  

There is growing recognition, then, that traditional approaches to understanding flood risk at large spatial scales are not fit for the task of managing it appropriately. 

Exposure growth 

The 2018 study further demonstrated that population and GDP growth alone are expected to lead to significant future increases in exposure, a change that may be exacerbated by the impact of climate change.

However, recent scientific developments can ensure such a bleak outlook does not come to pass, provided the data are used appropriately. With US flood exposure roughly triple the figure calculated using outdated technologies, the potential for private flood insurance penetration is high.

Consistent and spatially comprehensive flood mapping not only fills in the considerable gaps in current flood data; the ability to rapidly re-run models as new information becomes available also permits risk to be managed amidst data richness rather than data scarcity.

Further, the simulation of floods at multiple return periods – from frequent five-year events to rare 1,000-year events – allows a much more nuanced view of risk than the traditional, arbitrary, in-or-out, single 100-year return period event simulated by Fema.
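One generic illustration of why that matters – a sketch only, not Fathom’s specific methodology, and with invented damage figures – is the estimation of expected annual damage by integrating losses across annual exceedance probabilities rather than relying on a single in-or-out boundary:

```python
# Expected annual damage (EAD) from losses at several return periods,
# using trapezoidal integration over annual exceedance probability.
# Return periods echo those in the text; the damage values are hypothetical.
return_periods = [5, 10, 50, 100, 500, 1000]            # years
damages = [0.0, 2.0, 10.0, 18.0, 35.0, 45.0]            # $m, illustrative only

probs = [1 / rp for rp in return_periods]               # exceedance probabilities
ead = 0.0
for i in range(len(probs) - 1):
    width = probs[i] - probs[i + 1]                      # probability interval
    ead += 0.5 * (damages[i] + damages[i + 1]) * width   # trapezoid rule

print(f"Expected annual damage: ${ead:.2f}m")
```

Framed this way, frequent small floods and rare large ones both contribute to the annual expectation – something a single 100-year boundary cannot capture.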

Large-scale model structures also allow “what if?” scenarios to be explored. How might flood risk change due to global warming, under changes to land use, or with investment in flood protection?

This dense tapestry of flood modelling scenarios is made possible by fast, automated model simulations. It is not possible with laborious Fema-style models, which have cost tens of billions of dollars to produce and will cost millions of dollars simply to prevent the decay of existing low-coverage data.

“Top-down” models cost only a fraction of this, yet provide a richer array of tools for managing risk at large spatial scales.  

Quantifying flood risk 

One successful application of these large-scale model structures is in the conservation of natural floodplain lands, which has multiple benefits in terms of ecosystem services, biodiversity and recreation.

Not only can these data help answer “what is a floodplain?” across the US; they also make it possible to work out the potential damages from flooding should these natural lands ever be built upon.

Incorporating US government projections of land use throughout this century, research shows there are vast swathes of land that it would be cheaper to purchase at market value, preventing development, than to permit risky development to occur unabated and ultimately foot the inevitable bill for flood damages.

For an area twice the size of Massachusetts, for example, every $1 spent on land conservation results in $5 of avoided damage costs. This is just one example of how new data can be used to begin mitigating the expected spiralling of US flood losses.

With such comprehensive data on flood risk across the US now available, the way risk is managed in the future will be revolutionised.

The changes in flooding induced by climate change can be, and are being, quantified by cascading state-of-the-science climate model output through these “top-down” models.

Updating our current understanding of risk with these new technologies, and refining it with scenarios of future climates, will permit nuanced risk management informed by data from the forefront of scientific endeavour.

Risk managers can act now to ensure such bleak business-as-usual projections do not become reality.

Oliver Wing is a flood risk scientist at Fathom 
