Three things that flood analytics providers should do to track their impact

A flood response operation in Southeast Asia, circa 2016, Source: Cloud to Street

By Sri Ramesh

Frequent catastrophic flooding affects millions of people every year. The World Bank estimates that in 2020, 1.47 billion people around the world were exposed to at least moderate flood risk. Those tasked with responding to these escalating crises, namely first responders with the United Nations (UN), the International Federation of Red Cross and Red Crescent Societies (IFRC), and government disaster management agencies, must invest in good data to coordinate disaster responses and, ultimately, save more lives. “Good data” in flood response means timely, accurate estimates of key metrics such as the number of flood-affected civilians, the number of flooded roads, and the amount of cropland destroyed by flooding. Such information is in high demand among first responders charged with organizing flood response logistics and saving lives in an emergency.

Take, for example, a flood analytics firm (a data provider) that has recently been tasked with providing weekly flood damage estimates to the Federal Emergency Management Agency (FEMA), the U.S. government’s premier disaster management agency. The analytics firm provides FEMA with timely, accurate estimates of flood-affected population, roads, and cropland in the form of a weekly dashboard. However, one key question remains: How can data providers effectively track their impact?

This, to me, is the challenge of the black box, and it is an important one to solve, both for the government agencies that invest in data products and the data providers that develop them. By ‘black box’ I mean the phenomenon data providers face when they hand off a data product, such as a flood analytics dashboard, map, or web application, to end-users like FEMA’s first responders and lose sight of what happens next.

Government agencies today rely heavily on third-party data collection and analytics, and need to understand their return on investment. A failure to do so leaves government agencies at risk of investing in data for its own sake, an easy trap to fall into given the deluge of data products in the world today and the growing fascination with data visualizations. Meanwhile, data providers more often than not simply hand off these products to end-users without checking whether, and how, end-users actually used them.

Over the last five years, I have provided data to government agencies both as a federal analytics consultant and as part of the flood analytics startup Cloud to Street. I have also spent time in South Asia and Sub-Saharan Africa conducting randomized and quasi-experimental impact evaluations. These experiences have taught me how data providers can dismantle the black box by developing robust methods to track their impact after their data is delivered. Specifically, here are three things flood analytics providers should do to track their impact:

1. Understand that end-users often triangulate different data sources to arrive at a given decision. From providing flood damage information to first responders in African disaster management agencies, I learned that end-users of data products rarely consume the data at face value. Instead, they triangulate different data sources to test their assumptions about a given situation. For example, if a flood analytics firm reports that 100 people were affected by a flood in a given district of the Congo, a Congolese first responder is unlikely to accept this figure at face value. They are more likely to compare it with other sources for the same information, in this case field surveys and media reports, to confirm whether it is true. The end-user verifies across all sources of data before deploying aid to flood victims. For data providers, this triangulation is an important insight when designing an impact tracking protocol for their clients. Data providers tend to incorrectly assume a one-to-one relationship between their data and a given decision when, in reality, the relationship is many-to-one (many data sources, one decision).
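This many-to-one relationship can be made concrete with a small sketch. The example below is purely hypothetical: the sources, figures, and agreement tolerance are invented for illustration, not drawn from any real deployment. It mimics a responder who treats a satellite estimate as corroborated only when independent sources roughly agree with it.

```python
# Hypothetical sketch of many-to-one triangulation: several data
# sources feed a single "deploy aid" decision. All source names and
# numbers are illustrative, not real flood data.

def corroborated(estimates, rel_tolerance=0.5):
    """Return True if the spread across sources stays within a
    relative tolerance of the smallest estimate, i.e. the
    independent sources roughly agree."""
    values = list(estimates.values())
    lo, hi = min(values), max(values)
    return lo > 0 and (hi - lo) / lo <= rel_tolerance

# Affected-population estimates for one district, by source.
district_estimates = {
    "satellite": 100,      # the analytics firm's estimate
    "field_survey": 120,
    "media_reports": 90,
}

if corroborated(district_estimates):
    decision = "deploy aid"
else:
    decision = "collect more data"

print(decision)  # the three sources agree within tolerance: deploy aid
```

The point is not the specific rule, but that the provider's number is one input among several; an impact tracking protocol should ask how the estimate fared in that comparison, not assume it drove the decision alone.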

2. At the start of a given project, work with end-users to identify two to three key decisions they hope to optimize with your data product. In my experience, tracking impact as a data provider is a matter of tracking how an end-user’s decisions are made, transformed, or discarded throughout the project lifecycle. This kind of forward thinking pays big returns. While working with the Government of Ghana’s central disaster management agency, I learned that the agency wanted to reduce its average time to respond to a flood emergency. Knowing this beforehand, our team was able to design a flood analytics dashboard catered to that decision, including features that enabled end-users to receive topline flood damage estimates quickly. Our team was also able to follow up with the first responders, checking in weekly to determine whether the dashboard was helping them reduce their response time. This approach frames the data provision task in the end-user’s very specific context, which markedly increases the value of the data in their eyes. It also increases the likelihood that they will take up the data and use it to drive decision-making.
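Once the key decision is known, tracking it can be as simple as logging each emergency and computing the metric the agency cares about. The sketch below is hypothetical (the event log, field names, and timestamps are invented): it computes the average time from flood alert to first response, the kind of number a provider could revisit at each weekly check-in.

```python
from datetime import datetime
from statistics import mean

# Hypothetical emergency-response log; all timestamps are invented.
events = [
    {"alert": "2020-09-01T06:00", "responded": "2020-09-03T06:00"},  # 48 h
    {"alert": "2020-09-08T06:00", "responded": "2020-09-09T18:00"},  # 36 h
    {"alert": "2020-09-15T06:00", "responded": "2020-09-16T06:00"},  # 24 h
]

def response_hours(event):
    """Hours elapsed between the flood alert and the first response."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = (datetime.strptime(event["responded"], fmt)
             - datetime.strptime(event["alert"], fmt))
    return delta.total_seconds() / 3600

avg = mean(response_hours(e) for e in events)
print(f"average response time: {avg:.0f} hours")  # prints 36 hours
```

A falling average over successive weeks is evidence, though not proof, that the dashboard is helping end-users act faster; paired with the check-in conversations in point 3, it anchors the impact story in a number both sides agreed on up front.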

3. Finally, invest in strong, consistent lines of communication with end-users throughout a given project. Once a data-related project has started, success in tracking impact is contingent on whether data providers maintain consistent lines of communication with end-users. In my experience, weekly or bi-weekly conversations with end-users in government disaster management agencies proved fruitful. The conversations centered on whether end-users were using the flood damage data as intended (in our case, to optimize response logistics), how they were doing so, and whether the data enabled them to achieve the desired results (in our case, to reduce response time).

All in all, tracking impact as a data provider is ultimately the art of piecing together the story of what happens in the end-user’s world after the data is delivered. Success in this storytelling requires open and consistent lines of communication with everyone who consumes the data, makes decisions with it, and acts on it. This approach increases the chances that the insights derived from data products materialize in policy-oriented decisions and quantifiable impact on the ground.


Sri Ramesh is a 2021 Berkeley Master of Development Practice (MDP) Candidate, focusing on information science for public policy. Sri has nearly 3 years of experience providing data to governments and in quantitative impact evaluation. She has worked as a federal financial services consultant in Washington, DC, where she developed business intelligence tools for US federal agencies, and as a monitoring and evaluation advisor at a flood analytics startup, where she worked closely with West African governments to track the impact of satellite-driven flood information on end-users. She has also supported randomized impact evaluations as a J-PAL Africa Research Associate, and with the United Nations Development Programme (UNDP) in Sri Lanka as a Fulbright Fellow.

The views expressed in this article do not necessarily represent those of the Berkeley Public Policy Journal, the Goldman School of Public Policy, or UC Berkeley.