On a quiet afternoon, two medium-sized nuclear blasts level portions of Manhattan.
If this were a movie, hordes of panicked New Yorkers would pour out into the streets, running around and calling out for their loved ones. But reality doesn't usually line up with Hollywood's vision of a disaster scene, says William Kennedy, a professor in the Center for Social Complexity at George Mason University. Instead, he expects people would stay in place, follow instructions, and tend to the injured nearby.
To come up with a picture of what would really happen, Kennedy and Andrew Crooks, another researcher at the center, are working with a pair of Ph.D. candidates to study the immediate social aftermath of a nuclear blast in an American megacity.
The Center for Social Complexity was awarded a grant worth more than $450,000 last May to develop a computer model that simulates how as many as 20 million individuals would react in the first 30 days after a nuclear attack in New York City. The grant, which came from the nuclear-focused Defense Threat Reduction Agency, or DTRA, will fund a three-year project. In the simulation, individual "agents" will make decisions and move about the area based on their needs, their surroundings, and their social networks.
I spoke with Kennedy about his progress, and the challenges of simulating the aftermath of a disaster in one of the world's biggest cities. A transcript of our conversation, lightly edited for concision and clarity, follows.
Waddell: How will your computer model be able to accurately simulate people's responses?
Kennedy: First, we're doing basic research to try and identify how we expect people to respond, and how the environment and infrastructure and facilities would respond. When we get verbal descriptions that we are comfortable with, we will represent them more precisely as computer programs. We'll start with the environment, the weapon and its effects, and then move on to the people, the infrastructure, and their response.
We've done other models of similar-sized areas, modeling natural disasters and things like that. So we have some infrastructure to support us. We're using the MASON framework: It's open source—our computer-science department distributes it and maintains it—and we've used it in several projects here.
We'll be bringing in geographic information on New York City and the surrounding area, and we'll model a small nuclear weapon—or possibly multiple small nuclear weapons—going off. They'll be in the neighborhood of 5 to 10 kilotons: That's a quarter to half of Nagasaki/Hiroshima, which was in the 20-kiloton range. The Oklahoma City bomber used 5,000 pounds of TNT, so that's two and a half tons. That destroyed most of the one building where it blew up, but it affected something like 16 city blocks.
Waddell: So we're talking damage to something like a chunk of Manhattan?
Kennedy: Yes, something like that. The 10-kiloton figure isn't driven by any particular intelligence, but to put it into perspective, North Korea has done tests in the neighborhood of two kilotons—or maybe as many as five. So we're talking a relatively small, though still nuclear, weapon.
Waddell: What are the social responses you'll look at?
Kennedy: We're planning to model at the individual level. A megacity is more than 10 million, and in the region we're talking about, we'll potentially get to 20 million agents.
We've found that people seem to be reasonably well behaved and do what they've been trained to, or are asked or told to do by local authorities. Reports from 9/11 show that people walked down many tens of flights of stairs, relatively quietly, sometimes carrying each other, to escape buildings.
We're finding those kinds of reports from other disasters as well—except after Hurricane Katrina. There, we have reports that people already didn't trust the government, and then with the isolation resulting from the flooding, they were actually shooting at people trying to help.
Waddell: So is the difference between the two disasters trust and communication?
Kennedy: I suspect that's a large part of it, yes.
Waddell: Do you mainly build the verbal models using interviews and reports?
Kennedy: We're reading studies about disasters, and we're looking back to events like the Halifax munitions explosion of 100 years ago—that was in the kiloton range, and followed immediately by a blizzard—and natural disasters like earthquakes, flooding, and hurricanes.
We're going to have millions of agents, each with characteristics like where the agent lives, where it works, if it's part of a family, where the other members of the family are. That's the first network that people respond to. But they're also closely linked to the people they're working with, or the people who are a part of their new family when they become isolated: the other people going down the stairs at the same time. That community now has a common experience.
So we're going to model individuals responding to the immediate situation around them. They're trying to leave the area, find food, water, and shelter: basic Maslow-like necessities.
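The prioritization Kennedy describes—immediate survival first, then helping the injured nearby, then Maslow-like basics—could be sketched as a simple agent rule. This is an illustrative sketch only; the class, fields, and thresholds are assumptions, not taken from the GMU model (which is built on the Java-based MASON framework):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    home: tuple                                  # (x, y) location of residence
    work: tuple                                  # (x, y) location of workplace
    family: list = field(default_factory=list)   # ids of family members
    injured: bool = False
    hunger: float = 0.0                          # 0 (sated) .. 1 (starving)
    sheltered: bool = True

    def choose_action(self, nearby_injured: int) -> str:
        """Pick the next action from Maslow-like priorities."""
        if self.injured:
            return "seek_medical_aid"
        if nearby_injured > 0:
            return "help_nearby_injured"   # observed stay-and-help behavior
        if not self.sheltered:
            return "find_shelter"
        if self.hunger > 0.7:
            return "find_food_water"
        return "seek_family_contact"       # first urgency after survival

agent = Agent(home=(0, 0), work=(5, 5))
print(agent.choose_action(nearby_injured=2))  # → help_nearby_injured
```

Each tick, the simulation would call `choose_action` for every agent against its local surroundings, which is what makes the model's aggregate behavior emerge from individual decisions.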
DTRA wanted us to look at the reaction, not the recovery. They want to limit it to the first 30 days. Emergency responders will try to respond within minutes, so there will be some response. But no recovery, in the sense of infrastructure and businesses starting to reestablish normal operations.
Waddell: So the players will include individuals and rescue agents—who else is involved in the model?
Kennedy: When you broaden it more than just the collection of city blocks that are affected, you get into other infrastructure like police departments—not just fire and rescue that respond immediately, but others in the area, local governments, school systems, the utilities that provide food, water, clothing, shelter, etc. It's a significant undertaking.
Where we are is that we have done the basic literature research on how people respond to this kind of a disaster, and we are starting now to collect the geographic information system data—GIS data—on the New York area: the road systems, subway, bus routes, bridges, and things like that.
It's frustrating us a little bit that the publicly available data is not very clean. We've found lots of road segments that aren't connected. We can't just import somebody else's map of New York and the surrounding areas and have our agents fleeing the area, so we're spending some effort in the last several weeks trying to collect and clean up that data so that we can actually use it.
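The disconnected-road-segment problem Kennedy mentions is, at bottom, a graph-connectivity check. A minimal sketch, assuming road intersections are nodes and segments are edges (real GIS cleanup also involves spatial snapping, which isn't shown):

```python
from collections import defaultdict, deque

def connected_components(edges):
    """Group road-network nodes into connected components via BFS."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Two road segments that should join but don't share a node:
edges = [("A", "B"), ("B", "C"), ("D", "E")]
parts = connected_components(edges)
print(len(parts))  # 2 components → the D–E segment is disconnected
```

If fleeing agents can only travel along edges, any component beyond the first is unreachable, which is why the data has to be cleaned before the simulation can use it.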
Waddell: What are the goals that each individual agent will be balancing? Safety, hunger, family and friends, getting out of the area—how will the model treat those needs?
Kennedy: One of the aspects we'll be modeling is the individual agents' social networks. Communications with those people, and confirmation of their status, seems to be one of the first urgencies that people feel, after their immediate survival of the event.
Part of our modeling challenge is going to be figuring out if a parent would go through a contaminated area to retrieve a child at a daycare or school, putting themselves at risk in the process, because it's important to them to physically be there with their children. Or do they realize that they're isolated, that communications aren't going to be available in the near term, and they only deal with their local folks who are now their family? That's the sense we get.
For a small nuclear weapon, especially a ground burst rather than an air burst, communications will not be as affected as they might be with an airborne electromagnetic pulse weapon. So communications may be available in the not-terribly-distant future from the initial event.
Waddell: So your hypothesis is that a parent will tend to the people immediately around them, in hopes that better communication will allow them to get in touch with family later?
Kennedy: Yes. And that changes what we expect our model will show from the Hollywood version of a disaster—people running down the streets. For disasters where people are injured, like a nuclear blast, as opposed to threatened with injury, we expect that everybody won't exit the area en masse, because there will be people who need immediate help.
Waddell: And all of this intelligence is coming from research and reports from previous disasters?
Kennedy: Yes. Computational social science is not experimental. We don't terrorize people and see how they behave.
Waddell: How do you take these insights from research and build them into a model, so that your agents mimic reality?
Kennedy: Through code that implements decision trees, or needs that have to be fulfilled, for the individual agents in the models themselves. To give you an example from a previous model: We modeled herders and farmers in East Africa for the Office of Naval Research. We developed models of household units that had to make a living. They would make that living based on the terrain fertility, the water availability, and the weather in the local area. So we had a basic household unit that had these capabilities, and then its behavior was, in a sense, driven by the environment that it's in.
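An environment-driven household agent in the spirit of that East Africa model might look like the sketch below. The decision rule, yield proxy, and thresholds are invented for illustration; the actual model's logic is not public in this interview:

```python
def household_decision(fertility: float, water: float, rainfall: float) -> str:
    """Decide whether a household farms, herds, or migrates this season.

    All inputs are normalized to a 0..1 scale; the product is a crude
    proxy for expected crop yield.
    """
    expected_yield = fertility * water * rainfall
    if expected_yield > 0.5:
        return "farm"       # land supports crops
    if water > 0.4:
        return "herd"       # enough water for livestock, not for crops
    return "migrate"        # environment cannot sustain the household

print(household_decision(fertility=0.9, water=0.8, rainfall=0.9))  # farm
print(household_decision(fertility=0.2, water=0.6, rainfall=0.3))  # herd
print(household_decision(fertility=0.1, water=0.2, rainfall=0.1))  # migrate
```

The point of the design is that the agent itself stays simple: vary the environment inputs and the population's aggregate behavior shifts, with no per-agent scripting.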
Here, the environment is a lot more hostile, and we're going all the way down to the individual level rather than the household level.
Waddell: Can you also model psychological effects, like terror?
Kennedy: That certainly does affect how people behave. Some people will be frozen and unable to function as a result of terror, as well as their injuries and the environment around them. We will be modeling those effects. But we're not, per se, modeling the internal states of those individuals. We're primarily modeling their behavior.
Waddell: But to some extent, don't you need to understand essentially what a person is feeling to try and guess what they're going to do?
Kennedy: We will be modeling people very carefully. The challenge is how precisely we can do that.
I sometimes work on modeling individuals using a cognitive model that deals with memory and perception and actions at the near-millisecond level. Here, we probably don't need a research-level cognitive model of every individual at the millisecond level. We are anticipating modeling people in five-minute increments for the first several hours, and then expanding those steps, so that we talk about their actions in 15-minute intervals. That's driven partly by the number of people, and the duration of the study.
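The expanding time step Kennedy describes can be sketched as a coarsening simulation clock. The cutover point (three hours) is an assumption for illustration; the interview only specifies five-minute steps early and 15-minute steps later:

```python
def simulation_clock(total_minutes: int, fine_until: int = 180):
    """Yield simulation timestamps (in minutes), coarsening over time.

    Five-minute ticks before `fine_until`, 15-minute ticks after.
    """
    t = 0
    while t < total_minutes:
        yield t
        t += 5 if t < fine_until else 15

ticks = list(simulation_clock(240))
print(ticks[:4])   # [0, 5, 10, 15]  — five-minute steps early on
print(ticks[-2:])  # [210, 225]      — 15-minute steps later
```

Coarsening the clock this way cuts the number of decision evaluations per agent, which matters when there are 20 million of them.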
Waddell: With so many agents and such a long period of time, how much computation power will this require?
Kennedy: It's a lot! We do have significant university resources: We have a couple of clusters of computer systems, which we will probably tax significantly. We're going to start small and find out how much we need to tax them. We may be enlarging those facilities to provide the computation that we need.
But to provide some scale, we did modeling for the National Science Foundation on the effects of climate change in Canada and how people might migrate. We were modeling millions of people, moving over the course of 100 years. That would run slowly on a desktop, and to do experiments, we went to the cluster so that we could run different scenarios.
We don't need to go 100 years, but we do need to have more people. We expect it'll be taxing but within our resources.
Waddell: Do you have an estimate of how long a single simulation run might take?
Kennedy: I expect a single run may take a couple of days—and that's at full scale, with all the people.
Waddell: Will you be able to make changes to the model while a simulation is running?
Kennedy: Not in the sense that you could change agents' behaviors. But you might realize that, because of the setup you have, people are not behaving as you expect. So the modeling of their behavior doesn't make sense, and you have to go back and reconsider the model.
Waddell: It sounds like common sense plays a big role. If you see people acting irrationally, would you stop the scenario to try and figure out what's going on and fix it?
Kennedy: It depends on what you mean by irrational. Sometimes, it might be appropriate to have irrational behavior.
What you're describing about common sense is referred to as "face validity." If you have a simulation that comes up with something that's simply not credible on its face, it's very hard to convince anybody that that's reality.
Waddell: And how do you separate the actually inaccurate from the surprising but true?
Kennedy: There are a couple of methods. One is that you try to be very careful about the reality of the model as you're building it. This is sometimes called unit testing: You want small pieces to behave appropriately so that when you assemble them, the overall behavior is credible.
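In code, that kind of model unit testing means checking one small behavioral rule in isolation before composing the full simulation. The rule below (walking speed falling with stairwell crowding) is invented for illustration, not taken from the GMU model:

```python
def evacuation_speed(base_speed: float, crowd_density: float) -> float:
    """Walking speed (m/s) falls linearly as stairwell crowding rises,
    with a floor so a jammed stairwell still inches forward."""
    return max(0.2, base_speed * (1.0 - crowd_density))

# Unit tests: each small piece must behave credibly on its own.
assert evacuation_speed(1.4, 0.0) == 1.4   # empty stairwell: full speed
assert evacuation_speed(1.4, 0.5) == 0.7   # half-crowded: half speed
assert evacuation_speed(1.4, 1.0) == 0.2   # jammed: floor, never zero or negative
print("behavioral unit tests passed")
```

Only once each rule passes checks like these would it be composed with thousands of others into the full simulation, where a single absurd piece could undermine the model's face validity.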
You can also, where available, run scenarios that are intended to replicate historical events so that you can compare how your model behaves with the actual, in a sense, natural experiment. I expect we'll be able to do that. We have a surprising amount of data on how people responded in Nagasaki and Hiroshima. The U.S. occupied Japan immediately afterwards, and photographed, interviewed, and tracked people over a period of time. A lot of that data is available.
Waddell: What will your final product look like?
Kennedy: It's interesting how DTRA describes what they want as a result: They told us that they are funding basic research. They're expecting published papers and academic advancement of students. We are not expected to deliver them a model, or a system they can go play with.
Waddell: What's the timeline going to look like?
Kennedy: The funding is for three years, with a possibility of two additional years. We hope to have something running in the three- to six-month time frame that we can use for codifying our practical theories about how people behave. Our basic plan is to have something running and then try to set up experiments, run those, and do the validation and verification, so that we are comfortable starting to report results in a year or so.
Waddell: How much of the work will be handled by the existing MASON system, and how much will need to be built from scratch?
Kennedy: I would expect that most of what we're dealing with in this project is doable within the current MASON. It will support very large numbers of agents over very large areas, and their interactions, reasonably responsively. The code is very fast—it is an industrial simulation system.
We are exploring whether we should model individuals taking up a square meter of space when they move; we are wrestling with whether we need to model doors for each building or let people leave from anywhere around the block.
One of the interesting challenges we're facing is that we don't have a lot of data about the height and number of floors of buildings. We have population density, and from that we can extrapolate how many floors are in the buildings and how many people are on each floor, so that we can deal with evacuations.
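The extrapolation Kennedy describes—floors from population when height data is missing—amounts to simple arithmetic. All figures below (floor area per person, block footprint) are invented for illustration:

```python
import math

def estimate_floors(block_population: int, footprint_m2: float,
                    m2_per_person: float = 30.0) -> int:
    """Estimate how many floors a block's buildings need to house its
    population, given the ground footprint and assumed area per person."""
    total_floor_area = block_population * m2_per_person
    return max(1, math.ceil(total_floor_area / footprint_m2))

# A dense block: 2,000 residents on a 3,000 m² building footprint
print(estimate_floors(2000, 3000.0))  # → 20 floors
```

A floor count like this is what lets the model stage an evacuation: people on higher floors take longer to reach the street, as in the 9/11 stairwell accounts mentioned earlier.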
Waddell: How does this simulation compare with others you've done on scale?
Kennedy: There's some work at the center that involves modeling the U.S. economy at full scale, which is over 100 million agents. But those are what you might call lighter agents: They are simpler so that we can model them at that scale. They're individual people, but all they do is their business. They don't do anything else.
This is on the larger side of heavy-agent modeling, on the 20 million agent scale. We've been in the 5 to 10 million agent scale before.
We can make things easier by modeling different parts of the system separately. When we modeled climate change, for example, the climatologists did their simulation of the environment and provided us with that data, so we didn't have to spend computer time on those calculations. We could process that sequentially, day after day, for several years. So it's breaking down the simulation into its parts, which allows us to pre-process them so that they're easier to deal with in the overall social simulation.