
AI and Emergency Management with Shalini Misra

Shalini Misra joined Virginia Tech’s “Curious Conversations” to talk about how artificial intelligence (AI) might be used in the field of emergency management.

She shared some of the different ways AI is currently being used and the concerns she's heard from emergency managers. Misra also talked about the steps she believes will be necessary for the technology to reach its full potential in this field.

(music)

Travis

During the past two years, artificial intelligence has seen increasing use by the general public. And while I know that artificial intelligence can help you maybe write a good email or create an image of a frog on a surfboard that one of my parents will likely believe is real, I'm curious how this technology is being used in some of our more critical fields, specifically fields related to emergency management.

And perhaps an even better question: how do emergency managers feel about incorporating these new technologies into their workflow? Thankfully, Virginia Tech's Shalini Misra has recently done some work at this very intersection and was kind enough to let me ask all kinds of questions about it.

Shalini is an associate professor of urban affairs and planning in the School of Public and International Affairs. She's also an administrative fellow in the Institute for Society, Culture, and Environment, and her research interests include the social, psychological, and health implications of the internet and digital communication technologies, as well as public interest technology, its design and deployment, and the governance of digital technologies. Shalini and I talked a little about how artificial intelligence is currently being used in the field of emergency management. She explained some of the concerns that she has heard from emergency managers, and we talked about what she sees as the challenges that will need to be overcome to reach our full potential when it comes to incorporating this emerging technology in this space. I'm Travis Williams, and this is Virginia Tech's Curious Conversations.

(music)

Travis

When we talk about artificial intelligence, I know it's this big umbrella. What types of artificial intelligence are we specifically talking about related to emergency management?

Shalini

Yeah. So we're talking about generative AI, large language models, predictive algorithms, and visualization and computer vision techniques, for the most part data analytics that have AI embedded into them. So these are the kinds of AI that we're really talking about when we think about emergency management, and also AI in the public sector more broadly.

Travis

What types of emergencies are you most interested in exploring how AI can help with?

Shalini

A wide range of emergencies, from floods, fires, terrorist attacks, and nuclear disasters to riots and school shootings. So one of the big transformations that's happening in the field of emergency management is that the range of duties, the things that emergency managers have to address and deal with, has been widening over the years. Traditionally, emergency managers for the most part dealt with floods, because that's the most common natural disaster.

It still is the most common natural disaster, along with hurricanes and earthquakes, for example. But now we have disease pandemics, homelessness, climate migration crises, and the cascading and layering of these crises on top of each other. One could say that we are in a crisis all the time, given our climate conditions and our need to adapt to the precarious weather conditions that we are experiencing.

Travis

Yeah. And what are some, I guess, just really basic ways that artificial intelligence, any of those types that you mentioned in the range of artificial intelligence, could be of benefit to emergency managers?

Shalini

I think the first thing to really mention over here is that artificial intelligence use in the emergency management sector is really nascent. And it varies a lot within the US, county to county. So some counties and states have laws prohibiting the use of AI within emergency management and within public sector agencies. Others have more open policies and allow the use of some types of artificial intelligence for certain tasks and purposes. For example, Washington, D.C., has a handbook of AI values and principles, and any kind of AI use needs to be assessed and justified according to those principles. Now, what are those principles? These principles are ecological and financial sustainability, equity, cybersecurity and privacy, and democratic accountability. Now, as you can imagine, it's really difficult to assess whether AI tools meet these values and principles. It's still not clear how that can be done.

At the same time, emergency managers are thinking of, considering, and testing tools, specifically generative AI tools, for a wide variety of emergency management tasks. So, for example, for predictive modeling of floods and fires, for scenario simulations of disasters and how decisions could be made in those disasters, for streamlining the hazard mitigation planning process, for training and coaching, for automated chatbots, and for public messaging, especially preparedness public messaging. So these are the kinds of examples where emergency managers are really considering the use of AI. It's not really widespread.

Travis

One of the reasons that I wanted to talk to you was that artificial intelligence is very complex. The nuts and bolts of AI assurance are very complex as well, and human beings are also very complex. And I know one of the areas that you've been studying is basically how emergency managers feel about using artificial intelligence. And so I'm curious, what did your most recent study reveal about the relationship that they have with artificial intelligence?

Shalini

We surveyed US-based emergency managers on their attitudes and orientations toward artificial intelligence. We presented them with hypothetical emergency management scenarios and asked them whether they would be willing to rely on AI or not to make a decision in those scenarios. And they were realistic scenarios that emergency managers commonly have to make decisions about in their work. What we found was that emergency managers are less exuberant about, less positive about, artificial intelligence in general in comparison to a general sample of US-based adults. They're also less likely to rely on AI in any of those scenarios that we presented to them compared to a US-based general population.

And we dug a little bit deeper into this. So one thing that we concluded was that, okay, emergency managers are less exuberant about artificial intelligence and algorithmic systems than the general population. But why, right? So we dug a little deeper into that question, and we asked them what the challenges really are in terms of integrating AI into their work. And surprisingly, we hear a lot about people's concerns about AI replacing human workers, right? There's a lot of academic literature on it, especially in the field of economics, and there are a lot of news reports and press releases on this. However, in contrast to those, our US-based emergency managers were not so much concerned about the replacement of their jobs with artificial intelligence. They were really concerned about the implementation of these AI tools within their work. So their concerns were about the nuances of how this can really be integrated into their work.

And the second thing they were really concerned about is how AI might diminish or decrease their own human skills and competencies. So in their responses to us, they were very concerned about improving their own skills and competencies and their team members' skills and competencies, rather than furthering the competencies of AI, right? The skills of AI. So they emphasized the humanistic aspects of their work: their ability to collaborate with others, the knowledge that they have about their communities and the cities that they work in, the empathy that they share with the people who they serve and the colleagues who they work with. They know that they have biases. It's not only that AI has all the biases; they know that they have biases, and they know that, you know, technologies can exacerbate some of those biases. So they're concerned about that. So their concerns were really about, you know, what impact this would have on their own skills and competencies, their own decision-making processes, their own thinking, and how this would actually be implemented in their organization.

So in terms of the implementation within their organization, the set of concerns that they have is: what was this AI trained on? You know, what kind of data and processes and approach were used to train this AI tool? They are suspicious, they're skeptical, about off-the-shelf AI tools created by corporations that don't keep the public interest in mind. Public sector professionals are concerned about the common good. They claim to serve the public interest. Principles like diversity, equity, justice, democratic accountability, legitimacy, and transparency are key in their profession.
And it's not really clear that AI tools that come from the private sector really center these values. So they are curious about what kind of data this was trained on. Was it tested within their organizational setting, with actual managers? Who is accountable if there are mistakes and errors in the outputs of AI, right? Is there a routine and a system to make decisions and collaborate with AI? They still don't have that within emergency management.

In fact, another big concern was that, despite the narrative that AI is going to reduce your work, it's going to reduce your burden, you can kind of offload all of these mundane, routine tasks to AI and really focus on the creative work of emergency management, they're not buying that. They legitimately think that AI has the potential to increase administrative and citizen burden. So, for example, if they implement AI within their organizations, there is a need to validate and cross-check that AI output, right? So that increases their burden as well. There is an increase in citizen burden too. They are concerned about whether, if they label things as AI-generated, it is going to be credible to the public. Is the public going to trust messages from the emergency management agency that are AI-generated? So these are really legitimate concerns that they have about the implementation of AI within emergency management.

Travis

Yeah, all of those sound extremely legit. And I guess, from the first part of that, it sounded like they want to make sure that they can trust the AI, but it also sounded almost like: if we start using artificial intelligence, I don't want it to make me lazy, and I don't want it to make me not keep up my own skill set.

Shalini

Yeah. Isn't it amazing how thoughtful the emergency managers are?

Travis

Yeah. It reminds me of the backup alarms on my car, a newer car that I got. When I got it, I drove it around for about a month, and every time I would back up, if somebody was pushing a shopping cart behind me, it would beep. After about a month of that, I got in a different car that was a little bit older, and I realized I wasn't checking behind myself as much because I was relying on the beeps.

Shalini

Yes, Travis. That's exactly the kind of habituation and over-reliance; in academia, we call this unhelpful over-reliance on digital technologies. And we see that in many domains, even without AI. For example, there are reams of studies now on the impact that global positioning systems, GPS, have on our spatial cognition. The more we rely on Google Maps or any kind of GPS device to show us the way, the less we are able to navigate these environments on our own. And we see this example everywhere. We see this in spelling, for example. There have been studies on the capacity to spell among children who use digital technologies. Because the technology anticipates and predicts what you want to write and spells things correctly for you, it diminishes our ability. And it's not only kids; it's us as well.

Travis

I don't want to talk too much about my reliance on spellcheck, because this is not a confessional type of podcast. But I am curious, with all of those challenges and many very reasonable concerns, how do we go about overcoming some of them so that we can get the best of both worlds: the best of the people doing that work, but also the best of the technology?

Shalini

I think it starts with really collaborating with emergency managers and with community members to question what an appropriate use of AI is for what particular task. Certainly in emergency management and in other public sector agencies, there are things that are overloading, that are laborious, that can be offloaded onto technologies, right? But often we don't start the technology design process from that stage. Technologies are designed in corporations by experts, you know, computer scientists and computer engineers, who often work in silos and do wonderful work, right? But they are often not tuned in to the particular organizational context and nuances of the work that this technology is going to be eventually adopted into. So it really starts with changing the design process, right? How we design these tools. It should not be just based on a set of disciplinary experts sitting together in their own cocoon and designing something fantastic and creative, right? That they think is fantastic and creative. It has to start with bringing in the knowledge and expertise of the people who are actually going to be the so-called end users of this technology, right?

And the number one thing I would focus on over here, what the research suggests, is to develop a list of AI-appropriate and AI-inappropriate tasks. And it sounds relatively simple, you know, developing a list. How difficult could that be? But it is more complex than it sounds, because then you really get at the values, right? What should be offloaded onto technologies? What are the consequences of offloading these aspects of your thinking onto these technologies? What are the consequences and risks going to be? How are we going to set up an organizational routine if we start collaborating with AI? So once you start thinking about that list, you start thinking about the values that drive the design of that technology, right? So that's step number one.

And then the other thing is, when you think of implementing or integrating AI into any organization, and it could be emergency management, it could be any other public sector organization, or even the private sector, education, or academia, it should not be a one-off implementation. There should be a lot of decision points and deliberation built into that implementation. So I would experiment with the technology, really see how it is working out for people, and build that conversation and deliberation into the process so that you are learning at the same time, rather than overly relying on these technologies because they're on the market. And, you know, emergency managers, just like any other public sector managers, are often told: your work is going to be better with AI. But they often don't know who to trust, how it's going to be better, and where to adopt it. So it starts with having a collaborative way of designing these technologies together. And that's another thing that sounds like a no-brainer. We should be collaborating with others. It's easier said than done over here, because there's not a lot of trust between these communities. Now, you might say that emergency managers don't always speak for the public. That's why it's very important to get the community members who are going to bear the brunt of the decisions that emergency management agencies make. So, a study of credibility, right? Is the community going to trust an AI message if it is labeled as AI-generated?

Travis

I was going to ask you, how do we go about getting that trust with community members? Because with emergency management, I would assume a large part of it is getting people to do things, getting citizens to do things, to act a certain way. How do you think we go about building that trust with people?

Shalini

Yeah, we know so little about how people really perceive AI. We do know that AI adoption rates are really high among the US population. We have started adopting AI much faster than we did the internet and smartphones. And that is saying something. So if the public is really adopting AI at such a fast, speedy level, despite the concerns surrounding AI, we need a better understanding of what people are using these AI tools for. What are they getting out of it? What are the early consequences of doing this? It needs a lot of experimentation on messaging and credibility, right?

I don't want to say that, you know, emergency managers struggle with their own credibility with the US public, right? We are seeing, as we talk, a disaster unfolding in Florida over here. And we know that there has been so much misinformation, disinformation, fakery, AI-generated lies surrounding this particular emergency, right? Hurricane Milton and Hurricane Helene, right? And emergency managers in our other research have really told us that the nature of their jobs has transformed because of these technologies. Just two decades ago, they would only be communicating with public officials and with the public. It was a one-way kind of information dissemination. And now it's multi-way, and they are often not in control of it. And it's too late. It's too late to jump into a conversation when the disinformation has spread. We know that facts do not change people's minds.

Travis

Well, with so much of a changing landscape and so many challenges related to this, I'm curious: as you've studied this field and talked with and surveyed emergency managers, what in this space gives you hope?

Shalini

I think there is a lot of potential, you know, from our initial surveys of emergency managers over here and from our prior work on other types of digital technologies. We've also studied how emergency managers use smartphones and other types of advanced digital technologies in the past. Number one, their lack of exuberance about AI in general compared to the general public, I think, is a cause for hope, because it's a healthy skepticism about the nature of these technologies and their potential short-term as well as long-term implications. So that's number one. It's wonderful to be able to see this from experts who are in charge of such critical decisions during disasters and emergencies, which are rampant and often happening in our society all over the world.

They want to maintain their ability to make rational and deliberate decisions. So that's a cause for hope. They also know about the limitations of these technologies: that, seemingly, generative AI looks wonderfully smart, and it can create art and stories and poetry and make you laugh and be your friend. But when it comes to public values and the common good, it's not necessarily the case that these tools are really designed to serve the common good. And the fact that emergency managers and other public sector managers are thinking about the public good and the public interest, that these values really need to be centered, is another real cause for hope over here. I also think about the fields of human-computer interaction and disaster governance, for example, and my field of social and environmental psychology. There's a lot of potential for these fields to integrate their research insights to create tools and technologies that are not only used, but that serve the common good.

(music)

Travis

And thanks to Shalini for sharing her insights at the intersection of emergency management and artificial intelligence. If you or someone you know would make for a great curious conversation, email me at traviskw at vt.edu. I'm Travis Williams and this has been Virginia Tech's Curious Conversations.

 

About Misra

Shalini Misra is an associate professor of urban affairs and planning in the School of Public and International Affairs, which is located in the Virginia Tech Research Center — Arlington. She is also an administrative fellow in the Institute for Society, Culture, and Environment. Misra’s research interests include the social, psychological, and health implications of the Internet and digital communication technologies, as well as public interest technology, its design and deployment, and the governance of digital technologies.