
Methods for visual quality assessment of a digital terrain model

Tomaz Podobnikar

Abstract

A Digital Terrain Model (DTM) is a continuous representation of a ground surface landform that is commonly used to produce topographic maps. DTMs are created by integrating data obtained from a wide range of techniques including remote sensing and land surveying. Quality assessment of data is a critical parameter for DTM production and it relies heavily on statistical methods. In contrast, visual methods are generally neglected despite their potential for improving DTM quality. In this paper, several enhanced visual techniques for quality assessment are described and illustrated with areas and datasets selected from Slovenia and the planet Mars. Four classes of visual methods are defined: visualisations according to spatial analytical operations based on one or multiple datasets; visualisations according to spatial statistical analysis; non-spatial visualisations; and other visualisation techniques/other algorithms. The four classes generate different outputs: the first two produce thematic maps, while the third is used for non-spatial visualisation. The fourth class gathers other possible visualisations and algorithms. It is suggested that applying visual methods in addition to the more objective statistical methods would result in a more efficient improvement of DTM quality.


Editor's notes

Reviewed by two anonymous referees.

Received: 8 June 2008 – Revised: 16 January 2009 – Accepted: 26 January 2009 – Published: 29 January 2009.


Introduction

A digital terrain model (DTM) is a continuous surface that, besides the values of height as a grid (known as a digital elevation model—DEM), also consists of other elements that describe the topographic surface, such as slope or skeleton (Podobnikar, 2005). Different techniques for the generation of DTMs have been developed since their inception more than fifty years ago (Miller and Laflamme, 1958; Doyle, 1978). The first decades focused mainly on the models' reliability. The common techniques for quality assessment were based on the statistical comparison of small, higher-quality reference areas with the created DTM in order to find outliers. Until the end of the 1990s, high quality terrain data were acquired mainly photogrammetrically, using aerial photographs with manual stereo measurements or matching techniques, or by vectorisation of contour lines from topographic maps and their height attribution.

The quality of DTMs has increased significantly over the last decade due to three factors:
The first was the introduction and development of new methods for data acquisition, especially from satellites and airplanes. At small scales (coarser spatial resolution), radar interferometric techniques (IfSAR) have been applied to generate global DTMs1 (Burrough and McDonnell, 1998; Maune, 2001). For larger scales and more local usage, airborne laser scanning (ALS) techniques have been applied2 (e.g. Kraus and Pfeifer, 1998).
The second factor is the increasing availability of additional data sources that are useful for DTM quality assessment or enhancement. In addition to aerial photographs and contour lines, different point datasets with height attributes can be applied, such as fundamental geodetic network points, boundary points of the land cadastre, databases of buildings, spot elevations, and related datasets such as highway construction or hydrological network measurements. Even datasets without height attributes, such as lines of a hydrological network, roads, railways, and standing water polygons, can be used (Podobnikar, 2005). These additional data sources can provide valuable input for integrated DTM production, as exemplified in Slovenia (Podobnikar, 2005) and in Europe (EuroGeographics, 2008).
Thirdly, applications using DTMs are now part of our everyday lives (e.g., Google Earth3, Microsoft Virtual Earth4, NASA World Wind5, Radrouten Planer6…). This trend can in turn affect the quality of the DTMs used, insofar as it significantly influences their usability.

The higher the resolution, the more difficult the evaluation of input data quality and the assessment of the resulting DTM. Experience indicates that the effort is proportional to the square of the inverse of the horizontal resolution; high resolution DTMs are thus more prone to errors. Visual methods can be very important for the evaluation of spatial data and can balance some weaknesses of statistical methods. They are still underused for at least three reasons. First, being qualitative, visual approaches are generally neglected in favour of statistical ones, which are considered more objective. Second, computers have only recently acquired sufficient graphical capabilities, whereas statistical methods have a longer tradition of use. Finally, visualisation of spatial data has traditionally been part of cartography. The main emphasis of this paper is to focus attention on visual methods as a powerful tool for quality assessment.

Towards data quality assessment

The quality of spatial analysis depends on data quality, on (data) model relevance, and on the way they interact (Burrough and McDonnell, 1998). The model (or nominal ground) is a conceptualisation and representation (abstraction) of the real world, i.e., a selected representation of space, time, or attributes (Aalders, 1996). The datasets—in our case the DTMs—are realised through the type of spatial object to which the variables refer and through the level of measurement of these variables. The model relevance is a semantic quality of the representation by which a complex reality is captured. Data quality refers to the performance of the dataset given the specification of the data model (Haining, 2003).

Model quality—a DTM definition

The DTM dataset is an approximation of reality, based on a nominal ground. A semantically reliable and high quality data model (as a basis for DTM generation) should therefore be carefully defined. DTMs might vary depending on their purpose, the quality of data sources or interpolation algorithms, the experience of the operators, etc.


A basic distinction can be made between digital elevation models (DEMs) and digital terrain models (DTMs) (Burrough and McDonnell, 1998; Podobnikar, 2005; Sutter et al., 2007)7. The DEM is one of the most used 'raster datasets' (a grid or a matrix) in geographical information systems (GIS). An elevation value (height) is attributed to each square cell of the grid. The set of cell heights can then be interpreted in two ways. In the first approach, each cell represents a discrete area; the entire cell area is assumed to have the same value, and changes occur only at the edges of the cells. In the second approach, the area between the cell centres is assumed to take intermediate values. This second approach is closer to the DTM definition. The DTM is considered a continuous, usually smooth surface which, in addition to height values (as in DEMs), also contains other elements that describe a topographic surface: slope, aspect, curvature, gradient, skeleton (pits, thalwegs, saddles, ridges, peaks), and others. In this study, we focus on the DTM, but the methods and results are largely applicable to the DEM as well.
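
To make the two grid interpretations concrete, here is a minimal sketch (ours, not from the paper) that resamples a toy DEM under both readings; the array values and the use of scipy are assumptions for illustration.

```python
# Minimal sketch (not from the paper): the two readings of a DEM grid.
# order=0 treats each cell as a constant block (changes only at cell edges);
# order=1 interpolates bilinearly between cell centres, which is closer to
# the continuous-surface DTM reading.
import numpy as np
from scipy.ndimage import zoom

dem = np.array([[100.0, 120.0],
                [110.0, 150.0]])      # hypothetical cell heights in metres

blocky = zoom(dem, 4, order=0)        # discrete-cell interpretation
smooth = zoom(dem, 4, order=1)        # bilinear interpretation

print(blocky[3, 3], smooth[3, 3])     # same location, two different heights
```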

Data quality

Quality assessment methods can be divided into a priori and a posteriori ones. Before generating the DTM, one can estimate the quality that can be expected from the available capacities and the quality that is required by the relevant standards. These two factors enable regular production and usability of the DTM. The a priori assessments are based mostly on analyses of the datasets and methods used for DTM production, while the a posteriori methods are based on the final DTM, as described in this paper.

One of the goals of DTM quality assessment is to fulfil the requirements of spatial data standards. The ISO (International Organization for Standardization) distinguishes five elements of data quality: completeness; logical consistency; and three types of accuracy (positional, temporal, and thematic). This paper is concerned with accuracy, defined as the difference between the value of a variable as it appears in a dataset and the value of the variable in the data model (or "reality"). More specifically, we are referring to positional accuracy. Depending on the nature of the data, we can distinguish between absolute and relative accuracy. The position (horizontal or vertical) of objects (e.g. ridges or sink holes as part of the DTM) can be assigned to absolute accuracy, and the irregularity of the shapes of objects, i.e. morphology relative to a general position, to relative accuracy. The term precision is considered a component of accuracy, related to the scale, the resolution, and also the generalisation of datasets (Podobnikar, 2008).

The term error is used for a lack of quality, i.e. little or no accuracy. In addition to mistakes—in its widest meaning—it also refers to the statistical concept of variation (Burrough and McDonnell, 1998). The variation corresponds to random errors; incorrect spatial variation can thus be considered a systematic or gross error. According to these definitions, a level of accuracy (or error) can be described with a root mean square error (RMSE) and precision with a standard deviation or a standard error (σ).
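
To make the distinction concrete, the conventional formulation (our notation, not the paper's: d_i = z_i − z_i^ref are the height differences at n checkpoints and d̄ their mean) reads:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} d_i^{2}}, \qquad
M = \bar{d}, \qquad
\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\bigl(d_i - \bar{d}\bigr)^{2}}, \qquad
\mathrm{RMSE}^{2} \approx M^{2} + \sigma^{2}
```

The last relation shows how the RMSE combines the systematic component (the mean error M) with the precision σ.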

Basic standards for the DTM quality assessment


Most data quality standards for DTMs encompass several quality requirements, but methods for quality control are seldom included, and visual quality control methods even less often. A certain level of standardisation is provided by USGS (1998). The National Geospatial-Intelligence Agency (NGA)8 developed a "Digital Terrain Elevation Data" (DTED) standard for uniform matrix DTMs. It provides basic quantitative data for applications that require terrain elevation, slope, and/or surface roughness information9. The quality metadata are roughly described with an absolute horizontal (circular)/vertical (linear) error.

EuroGeographics10 is currently developing a pan-European grid called EuroDEM11. Since this DTM is produced from various national DTMs, an important part of the project consists in standardising/harmonising the various coordinate systems, resolutions, and accuracies.

The proposed procedure for quality assessment of spatial datasets, especially of a DTM, comprises the following steps: (1) preparing the datasets; (2) processing with statistical or visual methods; (3) obtaining results as numbers, thematic maps, graphs, etc.; (4) analysis (comparison with expected results); and (5) obtaining metadata or corrected datasets (see Figure 1).

Procedure for quality assessment

Figure 1: The five-step procedure for quality assessment of a DTM


Preparation of the dataset

The procedure for quality assessment is based primarily on one (single) or multiple spatial datasets. In the case illustrated in Figure 1, one dataset is a tested DTM, while multiple datasets denote a DTM plus (independent) reference datasets. The approach with one dataset uses a DTM alone, without any reference data. This case is the most subjective and requires a high level of knowledge of the generation processes. The operator also needs to be experienced enough to recognise deviations from expected outputs and to predict the most useful kind of analysis. The approach with multiple datasets uses a DTM and additional reference datasets. The reference datasets can be DTMs as regionally continuous data, lines, or points. The basic criterion for selecting appropriate reference data is that their quality should be at least as high as that expected of the tested DTM. The reference data should also be representative (of sufficient quantity), and therefore distributed with a certain degree of regularity and significance over the whole area. These methods are not convenient for areas where the availability of reference datasets is very low (e.g., currently, Mars).

Processing with statistical and visual methods

Processing with both statistical and visual methods is the primary focus of this research. The methods addressed differ according to whether they use one or multiple datasets and according to their expected outputs. The single dataset approach may allow more techniques for processing, and these techniques may be used one after another. We classified them into two complexes: techniques using numerical processing and techniques using visualisation (Figure 1). Techniques in the first complex feed statistical and visual methods, while those in the second complex feed visual methods only. With respect to visual methods, multiple techniques from complex 1 may be followed by single techniques of complex 2, and vice versa. Furthermore, some techniques of complex 1 generate input for statistical methods but not for visual ones, some are useful for visual methods only, and others for both statistical and visual methods. Statistical methods are denoted by /S/ and visual methods by /V/. We propose the following classification of the methods:

  • Statistical assessment

    • on one spatial dataset /S1/

    • on multiple datasets /Sn/

  • Visual assessment (the classification partly refers to Berry’s (1987) classification of spatial analysis)

    • visualisations according to spatial analytical operations /V1/

      • on one dataset /V11/

      • on multiple datasets /V1n/

    • Visualisations according to spatial statistical analysis /V2/ (/V21/, /V2n/)

    • Non-spatial visualisations /V3/ (/V31/, /V3n/)

    • Other visualisation techniques/other algorithms /V4/ (/V41/, /V4n/)

Results of the processing

The statistical assessment methods yield numbers as results, while the visual assessment methods yield thematic maps, various non-spatial visualisations, and other visualisation outputs.

Analysis of the results

The next step is the comparison of the results with what can be expected from the quality of the data model. For statistical methods this is straightforward (e.g. comparing the calculated RMSE with the allowed RMSE). The analysis of the results of the visual methods is more complex and less objective: the results are compared with "thresholds" and already "established" models. The visual methods require experience obtained through training. Fortunately, some visual methods are generated fairly effortlessly and are easily understandable by a wide audience (as in Figure 2).

Final evaluation

As a final result, the datasets (DTMs) are evaluated by statistical or visual methods within the reports. Parameters of quality control are assessed and presented as extended standard metadata. An additional advantage is the opportunity for correction of the datasets—DTMs (Podobnikar, 2005).

Statistical methods for data quality assessment

The statistical methods for quality control are also known as geometrical (when a topographic description of particular DTM objects is applied), stochastic (non-deterministic), or even mathematical (using mathematical methods). The most common approaches are analytical and empirical. The analytical approaches are primarily used when reference data are not available (Martinoni and Bernhard, 1998).

Methods based on one dataset /S1/

The following parameters for quality assessment can be considered (descriptive statistics): the arithmetical mean of heights, slopes, etc.; the standard deviation σ; covariance functions for heights, slopes, and volumes (Östman, 1987); the range (minimum/maximum); Koppe's formula adapted with other coefficients (Ackermann, 1978; Kraus, 1994); and autocorrelation analysis (Lee and Marion, 1994). The local methods entail description with variograms and correlograms (Wood, 1996; López, 2000) and measurement of the fractal dimensions of terrain (Wood, 1996) and of terrain curvature.

To analyse the estimated uncertainty of height data, Monte Carlo methods can be applied (Goodchild, 1995; Fisher, 1996; Podobnikar, 2005). The robust estimation method is based on the statistical elimination of data that are not autocorrelated well enough with respect to a certain threshold (Kraus and Pfeifer, 1998). Additionally, error assessment for the surroundings of a selected point on a surface may employ the "perfect inspector" hypothesis (López, 2000). A complex analytical method of spectral terrain analysis has been developed by Tempfli (1980; 1999), Frederiksen and Jacobi (1980), Russell et al. (1995), and Russell and Ochis (1995). A sensitivity analysis method was developed by Martinoni and Bernhard (1998). Accuracy can also be estimated by considering the density of the original datasets and the local terrain curvature (Kraus et al., 2004).

Another series of assessments includes various topological controls using vector contour lines, developed to correct data in the following ways: nodes between two lines should have identical attributes; crossed lines should be eliminated; different heights of points and lines with identical coordinates should be unified; and contours with only one point (node) should be eliminated (Podobnikar, 2005; Figure 6). Other methods can be used to eliminate gross errors, such as detecting slopes that are too steeply inclined and checking height differences between neighbouring contours (Larson, 1996).
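
Two of these controls are easy to express in code. The sketch below is our illustration, not the paper's implementation; it uses the shapely library (an assumption, any vector geometry library would do) and hypothetical contour data to flag single-point contours and crossing contours of different heights.

```python
# Hedged sketch of two of the topological controls listed above.
from itertools import combinations
from shapely.geometry import LineString

# Hypothetical vectorised contours: height attribute + raw coordinate list.
raw = [
    (100.0, [(0, 0), (5, 1), (9, 0)]),
    (110.0, [(0, 2), (5, 0.5), (9, 2)]),  # dips across its neighbour
    (120.0, [(3, 3)]),                    # degenerate: a single point
]

# Control 1: contours that collapse to a single point carry no shape
# information and should be flagged (or eliminated).
lines = []
for h, coords in raw:
    if len(coords) < 2:
        print(f"degenerate contour at {h} m")
    else:
        lines.append((h, LineString(coords)))

# Control 2: contour lines of different heights must never cross.
for (h1, g1), (h2, g2) in combinations(lines, 2):
    if h1 != h2 and g1.crosses(g2):
        print(f"crossing contours: {h1} m and {h2} m")
```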

Methods based on multiple datasets /Sn/

Possible methods using a DTM and additional reference DTM(s) include computing the mean error (M) (an indicator of systematic error), the root mean square error (RMSE) (an indicator of random error after the systematic component has been eliminated), the range (minimum/maximum), and others. Furthermore, the following tests are proposed: statistical covariance, regression, histograms, volume differences, and others.
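
A minimal sketch of these measures, under the usual definitions and with synthetic, hypothetical grids standing in for the tested and reference DTMs:

```python
# Minimal sketch (ours): comparing a tested DTM grid against a
# co-registered reference DTM of the same resolution.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(200, 400, size=(500, 500))           # hypothetical
tested = reference + rng.normal(0.5, 2.0, size=(500, 500))   # bias + noise

d = tested - reference
mean_error = d.mean()              # M: indicator of systematic error
rmse = np.sqrt((d ** 2).mean())    # RMSE: overall accuracy
sigma = d.std(ddof=1)              # precision once the bias is removed
print(f"M={mean_error:.2f} m, RMSE={rmse:.2f} m, sigma={sigma:.2f} m, "
      f"range=({d.min():.2f}, {d.max():.2f}) m")
```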

The methods for comparing the DTM with reference lines and points are similar to the methods described for continuous reference DTMs. The main difference is that the quality of lines and points is expected to be much higher than that of continuous reference data. Unfortunately, there is a high possibility that reference data will not be available precisely in areas where the quality of the DTM is already low. Another difficulty is that it is generally not possible to compute derivative surfaces, e.g. slope, from reference lines and points.

Visual methods for data quality assessment

The visual (or graphical, a term often applied in relation to geomorphological and semantic analysis) methods require a higher level of adaptation to particular problems than the more objective statistical ones. They are based on particular spatial analyses or modelling. As with cognitive mapping (Held and Rekosh, 1963), the use of visual methods depends on the expertise and experience of the operator. Rules of thumb are more commonly applied with visual methods than with statistical methods. Visual methods actually offer the first assessments of the spatial data—DTMs. In the past they were carried out on a sheet of paper; today they are applied primarily and interactively on digital monitors (Burrough and McDonnell, 1998) and other equipment for digital data visualisation (e.g. Drecki, 2002).

Visualisations according to spatial analytical operations based on one dataset /V11/

This category of methods utilises the visual appearance of the dataset and is associated with thematic cartography and our ability to graphically express the studied problem. These methods can be roughly split into those that produce an impression of plasticity (embossing) and those that use geometric constructions. For example, analytical shading, as a plastic-oriented method (i.e. producing a three-dimensional impression), is based on a visually effective presentation of the landform. In contrast, geometric methods such as producing contour lines are better suited to a higher accuracy presentation of the landform. The methods of /V11/ may have some similarities with the methods of /V1n/: similar techniques may be used when comparing the DTM with its derivatives (reference datasets in /V1n/), but for this category only one dataset is used.

Figure 2. Shaded DTM


A shaded DTM with the original resolution of 100 m (A), and condensed to a resolution of 20 m using a spline interpolation algorithm (B). The red circle marks a gross error that is more easily recognised in the right picture. The visualisation is based on /V11/.

Visual controls of the basic derivatives of a DTM include visualisation of slope, aspect (sensitive to small errors, especially on flat terrain), curvature (sensitive to high frequency changes of the surface; Wood and Fisher, 1993), terrain roughness, the dimension (characteristics) of the surface in a fractal sense (Li, 1998; Cheng et al., 1999), and visualisation of condensed grid cells (Figure 2) or cost surfaces. These methods use different colour cast schemes, analytical shading with different parameters, or a dichromatic colour scheme (applying bipolar differentiation) with a linear or non-linear cast (Wood, 1996; Rieger, 1992; Figure 3). The bipolar differentiation technique (or modulo approach, relative height-coding, "continuous" contour lines) can be described as a combination of contour lines (consecutive lines in the same colour of the dichromatic colour set) and repeated height-coding. Bipolar differentiation is similar to contours, but with different casts between them: a transition from light to dark or through a series of hues, which enables the portrayal of even small details within the contour intervals. Depending on the chosen height interval, tiny oscillations (possible errors) within "contour line" intervals can be clearly assessed, independently of any particular azimuth, unlike with analytical shading.
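
Both derivatives are straightforward to compute. The sketch below is our illustration, not the paper's implementation: a standard Lambertian hillshade and the modulo height-coding, with all parameters (azimuth, altitude, interval, the synthetic DTM) chosen arbitrarily.

```python
# Hedged sketch: analytical shading and the modulo approach behind
# bipolar differentiation, applied to a synthetic DTM with one spike.
import numpy as np

def hillshade(z, cellsize=100.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded relief in [0, 1] from a DTM grid `z` (rows run north-south)."""
    dzdy, dzdx = np.gradient(z, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)      # one common aspect convention
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

def bipolar(z, interval=20.0):
    """Relative height-coding: position of each cell within its 'contour'
    interval, mapped to [0, 1]; small oscillations show up as ripples."""
    return np.mod(z, interval) / interval

# Hypothetical DTM: a smooth bump plus a one-cell spike (gross error).
y, x = np.mgrid[0:200, 0:200]
z = 300 + 80 * np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / 2000.0)
z[50, 50] += 15            # the kind of artefact these visualisations expose
shaded, coded = hillshade(z), bipolar(z)
```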

Figure 3. Example of dichromatic colour visualisation


Visualisation based on /V11/ with the bipolar differentiation method, using a linear cast and a height interval of 20 m.

Other methods are based on the detection of structures whose existence is implausible (e.g. edges in the zone where neighbouring datasets are joined) by applying high-pass filters; on identifying characteristic points, lines, and areas (peaks, pits, etc., or contour lines; Li, 1998); and on searching for their false patterns (Figure 4).
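
As a minimal illustration of the high-pass idea (ours; the seam offset and the threshold are arbitrary assumptions), a Laplacian filter responds strongly along a join between two dataset sheets:

```python
# Minimal sketch: a Laplacian (high-pass) filter flags a mosaic seam.
import numpy as np
from scipy.ndimage import laplace

z = np.tile(np.linspace(200, 300, 200), (200, 1))  # hypothetical smooth DTM
z[:, 100:] += 5.0                                  # seam between two sheets

response = np.abs(laplace(z))
seam_cols = np.where(response.max(axis=0) > 1.0)[0]
print(seam_cols)  # columns around 100 stand out as a suspicious structure
```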

Figure 4. Utilisation of false pattern to detect structures

Identification of the ridges and thalwegs based on /V11/. A: crossed contour lines (in circle) caused a false combination of ridge/thalweg (green and red areas are associated). B: incorrect attributes were assessed with a sensitive interpolation that presents analytical shading and ridges (red dots)/thalwegs (green dots) that are in unlikely positions.

Further quality control methods include visualisation of DTMs that were previously generalised. Additional generalisation techniques make a multi-scale presentation possible. A combination of the proposed quality control methods at various scales can improve the reading and understanding of landform features and therefore the detection of possible errors (Figure 5).

Figure 5. Morphological detection on Mars


Detecting morphologically artificial (impossible) features on Mars (Candor Chasma) and labelling them as possible gross errors by applying different visualisation methods based on /V11/. A: analytical shading; B: bipolar differentiation with an interval of 100 m; C: curvatures visualisation; D: curvatures visualisation using a generalised DTM.

Visualisations according to spatial analytical operations based on multiple datasets /V1n/

The proposed methods are intended for checking the consistency of the datasets when reference data are used for the analyses. The reference data might be a better quality DTM, an orthophoto, contour lines from maps, etc. For visualisation purposes, the datasets can first be reclassified, overlaid in different ways (e.g. transparently, using operations), or even placed alongside each other.

This paper proposes and selects the following methods of spatial analytical operations with multiple dataset visualisations: (1) the difference between overlaid DTMs; (2) a combination of different types of derivatives of the DTMs (hypsometry, analytical shading, contour lines from the maps, contour lines from a DTM, etc.); and (3) contour lines from the maps overlaid on the following DTM derivatives: hypsometry, analytical shading, aspect, slope, curvature, or contour lines interpolated from the DTM (Ackermann, 1978; Hutchinson and Gallant, 1998; Carrara et al., 1997). The hydrological network can be assessed in a way similar to contour lines.

The next methods use: (4) contour lines vectorised from the maps, overlaid with characteristic points and lines derived from those contour lines (Figure 6)—the contours may be hierarchically coloured by applying a colour alternation method; (5) the hydrological network generated from the DTM (Hutchinson and Gallant, 1998; Wood, 1996) overlaid on the pits and on the hydrography acquired from the maps; (6) the contour lines from maps overlaid on the DTM generated from them (Carrara et al., 1997) or on DTMs generated by other means; (7) overlaying the automatically generated characteristic points, lines, and contour lines; (8) overlaying the DTM with datasets that are basically not connected with DTM generation—satellite images, maps, orthophotos (Wiggenhagen, 2000); (9) overlays considering Bayes' theorem (Skidmore, 1997), where preliminary and actual knowledge is considered (Eastman, 1997); and (10) a perspective view applying the previously described methods for better recognition of specific problems.

Figure 6. Contour lines obtained with Visual Methods


Visual methods based on /V11/ and /V1n/ (and on the statistical methods based on one dataset /S1/, not presented here) for the detection of gross errors in contour lines. A: contour lines from the original map (grey) and generated from a DTM (red). B: contour lines from the original map and an analytically shaded DTM generated from them. In both examples, a gross error resulting from the attributes (i.e. the height of a contour line) is easily perceived with either method.

Visualisations according to spatial statistical analysis /V2/

This set of methods is based on generating a selected statistical test of the dataset (DTM) and presenting the results in a way similar to that described for both classes of /V1/ methods. Firstly, we propose a group of methods based on Monte Carlo simulations: (1) visibility (Figure 7), slope and aspect, or optimal path simulation applied through an appropriate error model of the DTM (Fisher, 1996; Podobnikar, 2005; Burrough and McDonnell, 1998; Heuvelink, 1998; Nackaerts et al., 1999; Felicísimo, 1994; Canters, 1994; Ehlschlaeger and Shortridge, 1996; Ehlschlaeger et al., 1997); and (2) simulation of the positional error of the hydrological network, watersheds, contour lines, characteristic features, and other vectors which have a significant influence on quality in certain circumstances (Burrough and McDonnell, 1998; Hutchinson and Dowling, 1991; Wood, 1996; Veregin, 1997; Lee, 1996; Openshaw, 1992; Podobnikar, 2005).
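
The recipe behind method (1) can be sketched compactly. The example below is our illustration, not the authors' implementation: it propagates a spatially autocorrelated error model (σ = 1.5 m, an arbitrary assumption) through the aspect derivative on nearly flat terrain; the viewshed case of Figure 7 follows the same loop with a line-of-sight computation per realisation.

```python
# Hedged sketch: Monte Carlo propagation of a DTM error model.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
y, x = np.mgrid[0:150, 0:150]
dtm = 0.02 * x + 0.01 * y + 250.0   # hypothetical, nearly flat terrain

runs = []
for _ in range(100):
    # Error model: spatially correlated noise scaled to sigma = 1.5 m.
    noise = gaussian_filter(rng.normal(0, 1, dtm.shape), sigma=5)
    noise *= 1.5 / noise.std()
    dzdy, dzdx = np.gradient(dtm + noise)
    runs.append(np.degrees(np.arctan2(-dzdx, dzdy)))  # aspect per run

# Per-cell circular spread of aspect over all realisations: large values
# on flat terrain reveal where the DTM cannot support aspect analyses.
stack = np.radians(np.stack(runs))
spread = 1 - np.hypot(np.sin(stack).mean(axis=0), np.cos(stack).mean(axis=0))
```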

Figure 7. Monte Carlo simulation


Monte Carlo simulation from a selected viewpoint (Krim) based on /V21/ and /V2n/ (comparing two different datasets). Two different error simulation models on two different DTMs were used; the DTM in A is of higher quality, especially on the plain. The Monte Carlo simulations applied specific error models (continuously varying error distribution surfaces) to the evaluated quality of two DTMs: an interferometric radar (IfSAR) DTM with a resolution of 25 m (A) and an integrated DTM of 20 m (B). The probability viewshed was converted to a fuzzy viewshed, i.e. to fuzzy borders, with a semantic import model (Burrough and McDonnell, 1998; Podobnikar, 2008). Red indicates shadows, with a lower possibility of visibility. Hill shadows of the tested DTMs are transparently overlaid.

The next methods entail: (3) the construction of fractal surfaces (Wood, 1996), similar to Monte Carlo approaches, where varying the fractality allows controlled modification of the surface; (4) visualisation of the precision and uncertainty of the contour lines, calculated with analytical methods (Tempfli, 1980; Kraus, 1994); and (5) visualisation of the differences between reference points and the terrain surface, presented as deviation plots that describe and portray the quality of the DTM surfaces.

Non-spatial visualisations /V3/

This class of visualisation methods is based on algorithms that are similar to, or completely different from, those of the /V1/ and /V2/ classes. The outputs are histograms, graphs, diagrams, matrices, etc. Histograms, among the best known visual (graphical) presentations of certain statistical tests, can be applied to the DTM's heights (Li, 1998) or to derived aspects, curvatures, etc. (Hutchinson and Gallant, 1998). The histograms are then visually assessed: the DTM is expected to be of high quality if the transition between the columns is smooth enough and exhibits no repetitive pattern. Another possibility is a histogram of relative heights (a so-called relative histogram). If the DTM is interpolated from contour lines, the values of the DTM will tend to accumulate around the contour interval values. A more uniform (homogeneous) histogram signifies a higher quality of the interpolated surface (Carrara et al., 1997; Figure 8).
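
The relative histogram itself is nearly a one-liner. Below, a hedged sketch with synthetic data (our assumption): a hypothetical unbiased DTM against one that mimics interpolation from 10 m contours.

```python
# Minimal sketch of the relative histogram: heights folded into a repeating
# 10 m interval. A DTM interpolated from 10 m contours piles up near 0.
import numpy as np

rng = np.random.default_rng(1)
smooth = rng.uniform(0, 500, 100_000)            # hypothetical unbiased DTM
from_contours = np.round(smooth / 10) * 10 + rng.normal(0, 1.5, smooth.size)

for name, z in [("unbiased", smooth), ("from contours", from_contours)]:
    counts, _ = np.histogram(np.mod(z, 10.0), bins=10, range=(0, 10))
    print(name, counts)  # uniform counts signal a higher-quality surface
```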

The next proposed visualisation is the calculation of a co-occurrence matrix, generally used for texture analyses of grey-scale images. Using the DTM, the height values are assigned to the abscissa and the mean values of their near surroundings to the ordinate. The autocorrelation of the surface can be inspected visually, as it is higher when the values lie closer to the principal diagonal (Wood and Fisher, 1993). Low autocorrelation signifies a very rough surface or a gross error.
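
A sketch of this construction (ours; the neighbourhood size and bin count are arbitrary assumptions):

```python
# Hedged sketch of the co-occurrence idea: heights on the abscissa, mean of
# the local neighbourhood on the ordinate, counted into a 2D histogram.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
z = np.cumsum(rng.normal(0, 1, (300, 300)), axis=1) + 300  # hypothetical DTM
z[150, 150] += 40                                          # one gross error

local_mean = uniform_filter(z, size=3)
matrix, _, _ = np.histogram2d(z.ravel(), local_mean.ravel(), bins=64)
# A well-autocorrelated surface keeps its mass near the main diagonal;
# the spike contributes off-diagonal counts that stand out visually.
```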

Figure 8. Relative histogram for DTMs


Relative histogram for DTMs produced on a repetitive height interval of 10 m (0 to 9 m) based on /V31/ and /V3n/ (comparing two different datasets). On the left is a relative histogram for a DTM produced from contour lines (with a 10 m interval) and on the right for a photogrammetrically generated DTM.

Other visualisation techniques/other algorithms /V4/

There are many other possibilities for visually assessing a DTM's quality; several examples are presented below. The first is a path simulation between selected points using different DTMs (Figure 9). This visualisation is actually based on the spatial analytical operations described in /V1/ but requires some additional information besides the DTM (in this case, the starting and ending points). A very effective method is presenting terrain profiles (Figure 10) or terrain silhouettes from selected viewpoints. Another method borrows motion picture techniques: attribute errors on the contour lines can be assessed while the contour lines are presented sequentially according to their attributes, or hierarchically from main to auxiliary ones. Another possibility is to label the contour lines according to their height (Hutchinson and Gallant, 1998).
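
Profile extraction is simple to sketch. The example below (our illustration, with a hypothetical surface and endpoints) samples a DTM bilinearly along a straight line; plotting such profiles for two DTMs of the same area, as in Figure 10, makes roughness differences and gross errors immediately visible.

```python
# Minimal sketch of profile extraction between two points, with bilinear
# sampling via map_coordinates; endpoints and the DTM are hypothetical.
import numpy as np
from scipy.ndimage import map_coordinates

y, x = np.mgrid[0:200, 0:200]
dtm = 300 + 50 * np.sin(x / 20.0) * np.cos(y / 25.0)  # hypothetical surface

(r0, c0), (r1, c1) = (10, 10), (180, 170)             # profile endpoints
n = 400
rows, cols = np.linspace(r0, r1, n), np.linspace(c0, c1, n)
profile = map_coordinates(dtm, [rows, cols], order=1)
```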

Figure 9. Optimal path simulation


Optimal path simulation using the same algorithm applied to three DTMs of different quality, based on /V4n/. The black path is simulated on the highest quality DTM, while the blue one is simulated on a lower quality dataset. Similar results using DTMs produced from different sources signify (but do not prove) a higher quality.

Figure 10. Production of profile using DTMs


Profiles over the same area on DTMs of different precision, based on /V4n/. The DTM in A appears very rough: it contains many gross errors, and its overall quality is much lower than that of the DTM in B. These visualisations reflect the methods of DTM production.

Conclusions

Several methods have been developed, described, and analysed to assess DTM quality. This paper presents both statistical and visual methods, used on one (DTM) or multiple (DTM + reference) datasets. In particular, visual methods are presented in four classes: visualisations according to spatial analytical operations based on one dataset /V11/ or multiple datasets /V1n/; visualisations according to spatial statistical analysis /V2/; non-spatial visualisations /V3/; and other visualisation techniques/other algorithms /V4/. The first two classes result in thematic maps, while the third produces non-spatial visualisations.

The visual methods (especially analytical shading) provide a first impression of DTM quality. Although the methods for visual quality assessment of a DTM or other spatial datasets are less objective, they complement statistical methods, in combination with each other and with other assessments, and allow an understanding of even complex problems which may negatively influence DTM quality and which otherwise would not be easily discovered. One can say that statistical methods are well accepted for quality assessment but provide incomplete results, and vice versa. Examples are the quantification of fuzzy viewsheds for further processing (see Figure 7) and the quantification/visualisation of histograms (see Figure 8). The same examples also show a potential problem: the quality assessment is largely driven by the specific application. Additionally, several error types (e.g. random, systematic, and gross) can be assessed using the same visualisation method (see the figures for examples).

The results of the tests allow quality to be described and improved in a sophisticated way, considering the higher level of description and the integrity of the processes. Consequently, the usability of carefully checked and possibly corrected data can increase significantly. The proposed and applied methods considerably exceed the available standards for quality control used for national or international DTM production (e.g. ISO/TC 211). The standards change frequently, and they are often based on the lowest common denominator, especially for the subjective visual assessments. However, extensive experience combined with the complex knowledge thus acquired may be the most important factor in understanding the entire process of data acquisition, processing, etc. Furthermore, these checks provide an ideal opportunity to improve and extend the information content of standard metadata.

In the future, more complex studies that include comprehensive simulation methods (Podobnikar, 2008) will be needed for visual quality assessment (ontologically, epistemologically, and pragmatically) to integrate outcomes of the technical, natural, and social sciences and to reach a higher level of simplicity, as the ultimate sophistication (after Leonardo da Vinci).

Acknowledgements

Various techniques for quality assessment by visualisation have been carried out on different DTMs. Some of them were kindly provided by the Mapping Authority of the Republic of Slovenia during my doctoral research; others are DTMs of Mars available through the research project series TMIS (plus, plus.II, morph) funded by the Austrian Research Promotion Agency in the frame of the ASAP programme. I am very grateful to Prof. Josef Jansa, who performed a systematic review of my ideas.


Bibliography

Aalders H.J. (1996). Quality metrics for GIS, Kraak, M.J., Molenaar, M. (eds.): Advances in GIS Research II. Proceedings 7th International Symposium on Spatial Data Handling, Delft, 5B.1–5B.10.

Ackermann F. (1978). Experimental Investigation into the Accuracy of Contouring from DTM, Photogrammetric Engineering & Remote Sensing, 44, 1537–1548.

Berry J.K. (1987). Fundamental operations in computer-assisted map analysis, IJGIS, 1/2, 119–136.

Burrough P. & R. McDonnell (1998). Principles of Geographical Information Systems, Oxford.

Canters F. (1994). Simulating error in triangulated irregular network models, EGIS/MARI '94, Amsterdam, 169–178.

Carrara A., G. Bitelli & R. Carla (1997). Comparison of techniques for generating digital terrain models from contour lines, IJGIS, 11/5, 451–473.

Cheng Y.C., P.J. Lee & T.Y. Lee (1999). Self-similarity dimensions of the Taiwan Island landscape, Computers and Geosciences, Elsevier Science, 25/9, 1043–1050.

Doyle F.J. (1978). Digital Terrain Models: An Overview, Photogrammetric Engineering and Remote Sensing, 44, 1481–1487.

Drecki I. (2002). Visualisation of Uncertainty in Geographic Data. Shi, W., Fisher. P.F., Goodchild, M.F. (eds.): Spatial Data Quality, Taylor & Francis, New York, 140–159.

Eastman J.R. (1997). IDRISI for Windows (software documentation, version 2.0), Clark University, Graduate School of Geography, Worcester.

Ehlschlaeger C.R. & A.M. Shortridge (1996). Modelling elevation uncertainty in geographical analyses, Kraak, M.J., Molenaar, M. (eds.), Advances in GIS Research II: Proceedings 7th International Symposium on Spatial Data Handling, Delft, 9B.15–9B.25.

Ehlschlaeger C.R., A.M. Shortridge & M.F. Goodchild (1997). Visualizing Spatial Data Uncertainty Using Animation. Computers and Geosciences 23/4, 387–395.

Felicísimo A.M. (1994). Parametric statistical method for error detection in digital elevation models, ISPRS Journal of Photogrammetry and Remote Sensing, 49/4, 29–33.

Fisher P.F. (1996). Animation of Reliability in Computer-generated Dot Maps and Elevation Models, Cartography and Geographic Information Systems, 23/4, Journal of American Congress on Surveying and Mapping.

Frederiksen P. & O. Jacobi (1980). Terrain Spectra, Technical University of Denmark.

Goodchild M.F. (1995). Attribute accuracy, Guptill, S.C., Morrison, J.L. (eds.): Elements of spatial data quality, 59–80.

Haining R. (2003). Spatial Data Analysis: Theory and Practice, Cambridge University Press.

Held R. & J. Rekosh (1963). Motor-sensory feedback and geometry of visual space, Science, 141, New York.

Heuvelink G.B. (1998). Error Propagation in Environmental Modelling with GIS, Taylor & Francis, London, Bristol.

Hutchinson M.F. & T.I. Dowling (1991). A continental hydrological assessment of a new grid-based digital elevation model of Australia, Hydrological Processes, John Wiley & Sons, 5, 45–58.

Hutchinson M.F. & J.C. Gallant (1998). Representation of terrain, Longley, P.A., Goodchild, M.F., Maguire, D.J., Rhind, D.W. (eds.): Geographical information systems: Principles and Technical Issues, John Wiley & Sons, New York, 105–124.

Kraus K. et al. (2004). Quality Measures for Digital Terrain Models, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXV-B2.

Kraus K. (1994). Visualization of the Quality of Surfaces and Their Derivatives, Photogrammetric Engineering & Remote Sensing, 60/4, 457–462.

Kraus K. & N. Pfeifer (1998). Determination of terrain models in wooded areas with airborne laser scanner data, ISPRS Journal of Photogrammetry and Remote Sensing, 53/4, 193–203.

Larson K.S. (1996). Error Detection and Correction of Hypsography Layers, Proceedings, ESRI User Conference, 20–24 May, Redlands.

Lee J. & L.K. Marion (1994). Analysis of Spatial Autocorrelation of USGS 1:250,000 Digital Elevation Models, GIS/LIS, 504–513.

Lee J. (1996). Digital Elevation models: Issues of Data Accuracy and Applications, ESRI User Conference, 20–24 May, Redlands.

Li M. (1998). Error and Uncertainty Management of Spatial Databases in GIS, Seminar, Key Centre for Social Applications of GIS, School of Geoinformatics, Planning and Building, University of South Australia.

López C. (2000). On the improving of elevation accuracy of Digital Elevation Models: a comparison of some error detection procedures, Transactions in GIS, 1/1.

Martinoni D. & L. Bernhard (1998). A Conceptual Framework for Reliable Digital Terrain Modelling, Proceedings 8th Symposium on Spatial Data Handling, Vancouver, 737–750.

Maune D.F. (ed.) (2001). Digital Elevation Model Technologies and Applications: The DEM Users Manual, ASPRS.

Miller C.L. & R.A. Laflamme (1958). The Digital Terrain Model: Theory and Application, Photogrammetric Engineering, 24, 433–442.

Nackaerts K., G. Govers & J. van Orshoven (1999). Accuracy assessment of probabilistic visibilities, IJGIS, 13/7, 709–721.

Openshaw S. (1992). Learning to live with errors in spatial databases (Chapter 23), Goodchild, M., Gopal, S. (eds.): Accuracy of spatial databases, 263–276.

Östman A. (1987). Quality control of photogrammetrically sampled digital elevation models, Photogrammetric Record, 12/69, 333–341.

Podobnikar T. (2005). Production of integrated digital terrain model from multiple datasets of different quality, IJGIS, 19/1, 69–89.

Podobnikar T. (2008). Simulation and representation of the positional errors of boundary and interior regions in maps (Chapter 7), Moore, A., Drecki, I. (eds.). Geospatial vision: new dimensions in cartography (Lecture notes in geoinformation and cartography), 141–169.

Rieger W. (1992). Hydrologische Anwendungen des digitalen Geländemodelles, Institut für Photogrammetrie und Fernerkundung, Heft 39, PhD Thesis, Vienna.

Russell E., M. Kumler & H. Ochis (1995). Identifying and removing systematic errors in USGS DEMs, GIS in the Rockies Conference Proceedings (poster), Denver, 25–27 September.

Russell E. & H. Ochis (1995). Mitigation Methods for Systematic Errors in USGS DEMs.

Skidmore A.K. (1997). GIS Applications and use of digital terrain modelling, Joint European Conference and Exhibition on Geographical Information, Vienna, 1, 442–463.

Sutter F., S. Räber & B. Jenny (2007). Terrain Models
http://www.terrainmodels.com

Tempfli K. (1980). Spectral analysis of terrain relief for the accuracy estimation of digital terrain models, XIVth ISP Congress, Commission II, Hamburg.

Tempfli K. (1999). DTM Accuracy Assessment, American Society for Photogrammetry and Remote Sensing (ASPRS).

USGS (United States Geological Survey) (1998). Digital Elevation Model Standards
http://rmmcweb.cr.usgs.gov/public/nmpstds/demstds.html
http://rockyweb.cr.usgs.gov/nmpstds/demstds.html.

Veregin H. (1997). The effects of Vertical Error in Digital Elevation Models on the Determination of Flow-path Direction, Cartography and Geographic Information Systems, 24/2, 67–79.

Wiggenhagen M. (2000). Development of real-time visualization tools for the quality control of digital terrain models and orthoimages, International Archives of Photogrammetry and Remote Sensing, Amsterdam, XXXIII, part B2, 987–993.

Wood J.D. (1996). The Geomorphological Characterisation of Digital Elevation Models, PhD Thesis, Department of Geography, University of Leicester
http://www.soi.city.ac.uk/~jwo/phd/.

Wood J.D. & P.F. Fisher (1993). Assessing interpolation accuracy in elevation models, IEEE Computer Graphics and Applications, 13/2, 48–56.


Notes

1  e.g. NASA’s SRTM (Shuttle Radar Topography Mission) with a horizontal (planimetric) resolution of 3” and an ongoing project at DLR (German Aerospace Center) named TanDEM-X for a DTM with a resolution of 12 m.

2  airborne LIDAR for local DTMs with resolution of around 1 m

3  http://earth.google.com

4  http://www.microsoft.com/VIRTUALEARTH

5  http://worldwind.arc.nasa.gov.

6  http://www.radroutenplaner.nrw.de

7  A related data model is the digital surface model (DSM). The term refers, on the one hand, to a general expression for any mathematically defined surface, and on the other hand, to a basic product of radar interferometry, ALS, photogrammetrical terrain modelling, etc. In contrast to a DTM, a DSM includes all kinds of buildings (including houses, chimneys, road bridges, and viaducts), vegetation cover, as well as natural terrain features (e.g. temporal snow cover or 3D surface of caves). Additionally, a normalised digital surface model is defined as: nDSM = DSM – DTM.

8 http://www.nga.mil

9  This standard determines a grid size and accuracy according to different levels, from 0 to 5 (from 1000, 90, 30, 10, 3, to 1 m). Additionally, a “High-Resolution Terrain Information” (HRTI) standard with levels from 3 to 4 (from 12 to 1 m) has been proposed—but not yet fully accepted.

10 http://www.ec-gis.org/inspire

11  with a resolution of 60 m (2”) and an absolute vertical accuracy of 8 to 10 m. The first version was released in April 2008.



References

Electronic reference

Tomaz Podobnikar, “Methods for visual quality assessment of a digital terrain model”, S.A.P.I.EN.S [Online], 2.2 | 2009, online since 29 January 2009. URL: http://journals.openedition.org/sapiens/738


About the author

Tomaz Podobnikar

Institute of Photogrammetry and Remote Sensing, Vienna University of Technology, Vienna, Austria

Scientific Research Centre of the Slovenian Academy of Sciences and Arts, Ljubljana, Slovenia


Copyright

CC-BY-4.0

The text only may be used under licence CC BY 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
