Computing River Floods Using Massive Terrain Data

Wednesday Sep 28   03:30 PM to 03:50 PM (20 minutes)
flood modeling, massive data, river floods, I/O-efficiency, algorithm design
River floods have repeatedly caused major catastrophes throughout history. To reduce the damage of such floods, it is important to predict their extent before they happen. For this reason, specialists use algorithms that model river floods on digital terrain datasets. Nowadays, massive terrain datasets are widely available, and since flood modeling matters to a wide range of professions, it is crucial to process such datasets quickly even on standard computers. Yet these datasets can be several times larger than the main memory of a standard computer, and existing flood-modeling algorithms cannot handle this situation efficiently: they must sacrifice output quality for time performance, or vice versa. In this paper, we present a novel algorithm that, unlike any previous approach, provides both high-quality river flood modeling and efficient handling of massive terrain data. In addition, we redesigned an existing popular flood-modeling method (approved by the European Union and used by authorities in Denmark) so that it can efficiently process huge terrain datasets. Given a raster terrain G and a subset of its cells representing a river network, both algorithms estimate, for each cell in G, the height by which the river must rise for that cell to be flooded. By design, both algorithms can process terrain datasets that are much larger than a computer's main memory. For an input raster of N cells that is so large it can only be stored on the hard disk, each algorithm produces its output with only O(sort(N)) transfers of data blocks between the disk and the main memory, where sort(N) denotes the minimum number of data transfers needed to sort N elements stored on disk. We implemented both algorithms and compared their output with data acquired from a real flood event.
We show that our new algorithm models the real event quite accurately, and more accurately than the existing popular method. We also evaluated the practical efficiency of the algorithms through experiments on massive datasets. Each algorithm processed a 268 GB dataset on a computer with only 22 GB of working main memory (roughly twelve times smaller than the dataset itself) in at most 31 hours.
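To illustrate the quantity both algorithms compute, here is a small in-memory sketch in Python. It assumes a single reference river surface level and 4-connected cells, which are simplifications of our own choosing; the abstract's actual contribution is the I/O-efficient, disk-based computation, which this sketch does not reproduce. The rise needed to flood a cell is the smallest value h such that some path from a river cell reaches it with every cell on the path at elevation at most river_level + h; a priority-flood sweep (Dijkstra-style, with max along the path instead of a sum) computes this for all cells.

```python
import heapq

def flood_rise(elev, river_cells, river_level):
    """For each cell of the elevation grid `elev`, return the minimum
    rise of the river surface above `river_level` needed to flood it.
    Simplified in-memory priority-flood sketch, not the paper's
    I/O-efficient algorithm."""
    rows, cols = len(elev), len(elev[0])
    INF = float("inf")
    rise = [[INF] * cols for _ in range(rows)]
    heap = []
    for (i, j) in river_cells:          # river cells are flooded at rise 0
        rise[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))
    while heap:
        r, i, j = heapq.heappop(heap)
        if r > rise[i][j]:              # stale heap entry
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                # water reaches the neighbour once the surface is at
                # least as high as the neighbour's elevation
                nr = max(r, elev[ni][nj] - river_level)
                if nr < rise[ni][nj]:
                    rise[ni][nj] = nr
                    heapq.heappush(heap, (nr, ni, nj))
    return rise
```

On a grid that fits in memory this runs in O(N log N) time; the point of the paper's algorithms is to obtain the same output with O(sort(N)) block transfers when the raster lives on disk.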
