Defining a disaster as an “event” has a history extending back to the 1960s, when federal funds were spent to model the social impact of a nuclear attack. To do this, Civil Defense officials commissioned studies of proxies for an attack, like earthquakes, tornadoes, and hurricanes: all disasters that affect multiple systems with limited warning. They wanted to understand the immediate post-attack consequences. Would society survive, or would people panic and society quickly descend into chaos? The focus was on the event itself, the nuclear blast. What led up to it and what its long-term effects would be were irrelevant.
Social science disaster research was born from this funding. Over the decades, it evolved into a multidisciplinary endeavor that has told us a lot about human behavior in disasters. (And it’s not what those Civil Defense officials in the 1960s expected. People don’t panic. They are resilient. They help one another.)
By the 1990s there was an emerging consensus that the historical background and long-term aftereffects of disasters were at least as important as the event itself, if not more so, because a disaster is really an interconnected chain of occurrences.