Source: http://graphics.cs.uiuc.edu/~garland/papers/simp.pdf

Survey of Polygonal Surface Simplification Algorithms

Paul S. Heckbert and Michael Garland 1 May 1997

School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213

Multiresolution Surface Modeling Course SIGGRAPH ’97

This is a draft of a Carnegie Mellon University technical report, to appear. See http://www.cs.cmu.edu/~ph for the final version.

Send comments or corrections to the authors at: {ph,garland}@cs.cmu.edu

Abstract

This paper surveys methods for simplifying and approximating polygonal surfaces. A polygonal surface is a piecewise-linear surface in 3-D defined by a set of polygons, typically a set of triangles. Methods from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically. The surface types range from height fields (bivariate functions), to manifolds, to non-manifold self-intersecting surfaces. Piecewise-linear curve simplification is also briefly surveyed.

Keywords: multiresolution modeling, surface approximation, piecewise-linear surface, triangulated irregular network, mesh coarsening, decimation, non-manifold, cartographic generalization, curve simplification, level of detail, greedy insertion.

1 Introduction

The simplification of surfaces has become increasingly important as it has become possible in recent years to create models of greater and greater detail. Detailed surface models are generated in a number of disciplines. For example, in computer vision, range data is captured using scanners; in scientific visualization, isosurfaces are extracted from volume data with the “marching cubes” algorithm; in remote sensing, terrain data is acquired from satellite photographs; and in computer graphics and computer-aided geometric design, polygonal models are generated by subdivision of curved parametric surfaces. Each of these techniques can easily generate surface models consisting of millions of polygons.

Simplification is useful in order to make storage, transmission, computation, and display more efficient. A compact approximation of a shape can reduce disk and memory requirements and can speed network transmission. It can also accelerate a number of computations involving shape information, such as finite element analysis, collision detection, visibility testing, shape recognition, and display. Reducing the number of polygons in a model can make the difference between slow display and real-time display.

A variety of methods for simplifying curves and surfaces have been explored over the years. Work on this topic is spread among a number of fields, making literature search quite challenging. These fields include: cartography, geographic information systems (GIS), virtual reality, computer vision, computer graphics, scientific visualization, computer-aided geometric design, finite element methods, approximation theory, and computational geometry.

Some prior surveys of related methods exist, notably a bibliography on approximation [45], a survey of spatial data structures for curves and surfaces [106], and surveys of triangulation methods with both theoretical [6] and scientific visualization [89] orientations. None of these, however, covers surface simplification in depth.

The present paper attempts to survey all previous work on surface simplification and place the algorithms in a taxonomy. In this taxonomy, we intermix algorithms from various fields, classifying algorithms not according to the application for which they were designed, but according to the technical problem they solve. By doing so, we find great similarities between algorithms from disparate fields. For example, we find common ground between methods for representing terrains developed in cartography, methods for approximating bivariate functions developed in computational geometry and approximation theory, and methods for approximating range data developed in computer vision. This is not too surprising, since these are fundamentally the same technical problem. By calling attention to these similarities, and to the past duplication of work, we hope to facilitate cross-fertilization between disciplines.

Our emphasis is on methods that take polygonal surfaces as input and produce polygonal surfaces as output, although we touch on curved parametric surface and volume techniques. Our polygons will typically be planar triangles. Although surface simplification is our primary interest, we also discuss curve simplification, because many surface methods are simple generalizations of curve methods.

1.1 Characterizing Algorithms

Methods for simplifying curves and surfaces vary in their generality and approach – among surface methods, some are limited to height fields, for example, while others are applicable to general surfaces in 3-D. To systematize our taxonomy, we will classify methods according to the problems that they solve and the algorithms they employ. Below is a list of the primary characteristics with which we will do so:

Problem Characteristics

Topology and Geometry of Input: For curves, the input can be a set of points, a function y(x), a planar curve, or a space curve. For surfaces, the input can be a set of points, samples of a height field z(x, y) in a regular grid or at scattered points, a manifold, a manifold with boundary, or a set of surfaces with arbitrary topology (e.g. a set of intersecting polygons).

Other Attributes of Input: Color, texture, and surface normals might be provided in addition to geometry.

Domain of Output Vertices: Vertices of the output can be restricted to be a subset of the input points, or they can come from the continuous domain.

Structure of Output Triangulation: Meshes can be regular grids, they can come from a hierarchical subdivision such as a quadtree, or they can be a general subdivision such as a Delaunay or data-dependent triangulation.

Approximating Elements: The approximating curve or surface elements can be piecewise-linear (polygonal), quadratic, cubic, high degree polynomial, or some other basis function.

Error Metric: The error of the approximation is typically measured and minimized with respect to the L2 or L∞ error. Distances can be measured in various ways, e.g., to the closest point on a given polygon, or to the closest point on the entire surface. (A common formulation of these two metrics is sketched after this list.)

Constraints on Solution: One might request the most accurate approximation possible using a given number of elements (e.g. line segments or triangles), or one might request the solution using the minimum number of elements that satisfies a given error tolerance. Some algorithms give neither type of guarantee, but give the user only indirect control over the speed/quality tradeoff. Other possible constraints include limits on the time or memory available.
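As a concrete reference point, the two error metrics mentioned above can be written as follows for a target surface (or curve) f approximated by g over a common parameter domain D; the notation here is ours, not drawn from the papers surveyed:

\[
E_2(f, g) \;=\; \left( \int_D \bigl\| f(u) - g(u) \bigr\|^2 \, du \right)^{1/2},
\qquad
E_\infty(f, g) \;=\; \sup_{u \in D} \bigl\| f(u) - g(u) \bigr\|.
\]

In practice both are usually estimated at a finite set of sample points, with the distance inside the norm taken to the closest point on a particular face or on the whole approximating surface, as noted above.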

Algorithm Characteristics

Speed/Quality Tradeoff: Algorithms that are optimal (minimal error and size) are typically slow, while algorithms that generate lower quality or less compact approximations can generally be faster.

Refinement/Decimation: Many algorithms can be characterized as using either refinement, a coarse-to-fine approach starting with a minimal approximation and building up more and more accurate ones, or decimation, a fine-to-coarse approach starting with an exact fit and discarding details to create less and less accurate approximations.

1.2 Background on Application Areas

The motivations for surface simplification differ from field to field. Terminology differs as well.

Cartography. In cartography, simplification is one method among many for the “generalization” of geographic information [86]. In that field, curve simplification is called “line generalization”. It is used to simplify the representations of rivers, roads, coastlines, and other features when a map with large scale is produced. It is needed for several reasons: to remove unnecessary detail for aesthetic reasons, to save memory/disk space, and to reduce plotting/display time. The principal surface type simplified in cartography is, of course, the terrain. Map production was formerly a slow, off-line activity, but it is currently becoming more interactive, necessitating the development of better simplification algorithms.

The ideal error measures for cartographic simplification include considerations of geometric error, viewer interest, and data semantics. Treatment of the latter issues is beyond the scope of this study. The algorithms summarized here typically employ a geometric error measure based on Euclidean distance. The problem is thus to retain features larger than some size threshold, typically determined by the limits of the viewer’s perception, the resolution of the display device, or the available time or memory.

Computer Vision. Range data acquired by stereo or structured light techniques (e.g. lasers) can easily produce millions of data points. It is desirable to simplify the surface models created from this data in order to remove redundancy, save space, and speed display and recognition tasks. The acquired data is often noisy, so tolerance of and smoothing of noise are important considerations here.

Computer Graphics. In computer graphics and the closely related fields of virtual reality, computer-aided geometric design, and scientific visualization, compact storage and fast display of shape information are vital. For interactive applications such as military flight simulators, video games, and computer-aided design, real-time performance is a very important goal. For such applications, the geometry can be simplified to multiple levels of detail, and display can switch or blend between the appropriate levels of detail as a function of the screen size of each object [13, 52]. This technique is called multiresolution modeling. Redisplaying a static scene from a moving viewpoint is often called a walkthrough. For off-line, more realistic simulations such as special effects in entertainment, real time is not vital, but reasonable speed and storage are nevertheless important.

When 3-D shape models are transmitted, compression is very important. This applies whether the channel has very low bandwidth (e.g. a modem) or higher bandwidth (e.g. the Internet backbone). The rapid growth of the World Wide Web is spurring some of the current work in surface simpli cation.

Finite Element Analysis. Engineers use the finite element method for structural analysis of bridges, to simulate the airflow around airplanes, and to simulate electromagnetic fields, among other applications. A preprocess to simulation is a “mesh generation” step. In 2-D mesh generation, the domain, bounded by curves, is subdivided into triangles or quadrilaterals. In 3-D mesh generation, the domain is given by boundary surfaces. Surface meshes of triangles or quadrilaterals are first constructed, and then the volume is subdivided into tetrahedra or hexahedra. The criteria for a good mesh include both geometric fidelity and considerations of the physical phenomena being simulated (stress, flow, etc). To speed simulation, it is desirable to make the mesh as coarse as possible while still resolving the physical features of interest. In 3-D simulations, surface details such as bolt heads might be eliminated, for example, before meshing the volume. This community calls simplification “mesh coarsening”.

Approximation Theory and Computational Geometry. What is called a terrain in cartography or a height field in computer graphics is called a bivariate function or a function of two variables in more theoretical fields. The goal in approximation theory is often to characterize the error in the limit as the mesh becomes infinitely fine. In computational geometry the goal is typically to find algorithms to generate approximations with optimal or near-optimal compactness, error, or speed, or to prove bounds on these. Implementation of algorithms and low-level practical optimizations receive less attention.

2 Curve Simplification

Curve simplification has been used in cartography, computer vision, computer graphics, and a number of other fields.

A basic curve simplification problem is to take a polygonized curve with n vertices (a chain of line segments or “polyline”) as input and produce an approximating polygonized curve with m vertices as output. A closely related problem is to take a curve with n vertices and approximate it within a specified error tolerance.

Douglas-Peucker Algorithm. The most widely used high-quality curve simplification algorithm is probably the heuristic method commonly called the Douglas-Peucker algorithm. It was independently invented by many people [99], [31], [30, p. 338], [5, p. 92], [125], [91, p. 176], [3]. At each step, the Douglas-Peucker algorithm attempts to approximate a sequence of points by a line segment from the first point to the last point. The point farthest from this line segment is found, and if its distance is below a threshold, the approximation is accepted; otherwise the algorithm is recursively applied to the two subsequences before and after the chosen point. This algorithm, though not optimal, has generally been found to produce the highest subjective- and objective-quality approximations when compared with many other heuristic algorithms [85, 130]. Its best case time cost is Θ(n), its worst case cost is O(mn), and its expected time cost is about Θ(n log m). The worst case behavior can be improved, with some sacrifice in the best case behavior, using a Θ(n log n) algorithm employing convex hulls [54].
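The recursive structure is simple enough to sketch. The following Python fragment (function and variable names are ours, and the distance is measured to the infinite line through the two endpoints, a common choice) is a minimal illustration of the tolerance-driven form described above, not any particular published implementation:

    import math

    def perpendicular_distance(p, a, b):
        """Distance from point p to the line through points a and b (all 2-D tuples)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:                      # degenerate chord: fall back to point distance
            return math.hypot(px - ax, py - ay)
        return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

    def douglas_peucker(points, tolerance):
        """Approximate a polyline by a subset of its vertices, within `tolerance`."""
        if len(points) < 3:
            return list(points)
        a, b = points[0], points[-1]
        # find the interior vertex farthest from the chord a-b
        index, dmax = 0, 0.0
        for i in range(1, len(points) - 1):
            d = perpendicular_distance(points[i], a, b)
            if d > dmax:
                index, dmax = i, d
        if dmax <= tolerance:                        # chord is acceptable: keep only the endpoints
            return [a, b]
        # otherwise split at the farthest vertex and recurse on both halves
        left = douglas_peucker(points[:index + 1], tolerance)
        right = douglas_peucker(points[index:], tolerance)
        return left[:-1] + right                     # drop the duplicated split vertex

Swapping in a point-to-segment distance, or a different error measure altogether, changes only perpendicular_distance.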

A variant of the Douglas-Peucker algorithm, described by Ballard and Brown [4, p. 234], splits on each iteration at the point of highest error along the whole curve, instead of splitting recursively. This yields higher quality approximations for slightly more time. If this subdivision tree is saved, it is possible to dynamically build an approximation for any larger error tolerance very quickly [18].
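In Python, this best-first behavior amounts to keeping a priority queue of spans keyed by their worst deviation; the sketch below (our naming, reusing perpendicular_distance from the previous fragment, and stopping at a vertex budget m rather than an error tolerance) illustrates the idea:

    import heapq

    def greedy_refine(points, m):
        """Repeatedly split the span with the globally largest deviation
        until m vertices have been selected."""
        n = len(points)
        if n <= 2 or m >= n:
            return list(points)

        def worst(i, j):
            # interior vertex of span (i, j) farthest from its chord
            best_k, best_d = -1, -1.0
            for k in range(i + 1, j):
                d = perpendicular_distance(points[k], points[i], points[j])
                if d > best_d:
                    best_k, best_d = k, d
            return best_k, best_d

        selected = {0, n - 1}
        heap = []
        k, d = worst(0, n - 1)
        heapq.heappush(heap, (-d, 0, n - 1, k))      # max-heap via negated deviation
        while len(selected) < m and heap:
            _, i, j, k = heapq.heappop(heap)
            selected.add(k)
            for a, b in ((i, k), (k, j)):            # re-examine the two new spans
                kk, dd = worst(a, b)
                if kk >= 0:
                    heapq.heappush(heap, (-dd, a, b, kk))
        return [points[i] for i in sorted(selected)]

Recording each split as a node in a tree gives the reusable hierarchy mentioned above: since splits occur in decreasing order of error, the approximation for any coarser tolerance is, roughly, a prefix of the splits.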

One potential pitfall is that simplification can cause a simple polygon to become self-intersecting; this could be a problem in cartographic applications.

Faster or Higher Quality Algorithms. There are faster algorithms than Douglas-Peucker, but all of these are generally believed to have inferior quality [84]. One such algorithm is the trivial method of regular subsampling (known as the “nth-point algorithm” in cartography), which simply keeps every kth point of the input, for some k, discarding the rest. This algorithm is very fast, but will sometimes yield very poor quality approximations.

Least squares techniques are commonly used for curve fitting in pattern recognition and computer vision, but they do not appear to be widely used for that purpose in cartography.

Polygonal Boundary Reduction. While the Douglas-Peucker algorithm and its variants are refinement algorithms, curves can also be simplified using decimation methods. Boxer et al. [8] describe two such algorithms for simplifying 2-D polygons. The first, due to Leu and Chen [75], is a simple decimation algorithm. It considers boundary arcs of 2 and 3 edges. For each arc, it computes the maximum distance between the arc and the chord connecting its endpoints. It then selects an independent set of arcs whose deviation is less than some threshold, and replaces them by their chords. The second algorithm is an improvement of this basic algorithm which guarantees that the approximate curve is always within some bounded distance from the original. They state that the running time of the simple algorithm is Θ(n), while the bounded-error algorithm requires O(n + r^2) time, where r is the number of vertices removed.
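A much-simplified sketch of the basic chord-replacement idea, again in Python and reusing perpendicular_distance from the Douglas-Peucker fragment: it makes one greedy left-to-right pass over arcs of a single length, ignores wrap-around at the end of the vertex list, and is only meant to illustrate the structure, not to reproduce Leu and Chen’s algorithm (which handles arcs of both two and three edges and selects the independent set more carefully):

    def decimate_once(points, tolerance, arc_len=2):
        """One greedy pass of chord-replacement decimation over an open polyline."""
        n = len(points)
        keep = [True] * n
        i = 0
        while i + arc_len < n:
            a, b = points[i], points[i + arc_len]
            # deviation of the arc's interior vertices from the chord a-b
            deviation = max(perpendicular_distance(points[k], a, b)
                            for k in range(i + 1, i + arc_len))
            if deviation < tolerance:
                for k in range(i + 1, i + arc_len):   # replace the arc by its chord
                    keep[k] = False
                i += arc_len                          # keep accepted arcs disjoint
            else:
                i += 1
        return [p for p, kept in zip(points, keep) if kept]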

Optimal Approximations. Algorithms for optimal curve simplification are much less common than heuristic methods, probably because they are slower and/or more complicated to implement. In a common form of optimal curve simplification, one searches for the approximation of a given size with minimum error, according to some definition of “error”. Typically the output vertices are restricted to be a subset of the input vertices. A naive, exhaustive algorithm would have exponential cost, since the number of subsets is exponential, but using dynamic programming and/or geometric properties, the cost can be reduced to polynomial. The L2-optimal approximation to a function y(x) can be found in O(mn^2) time, worst case, using dynamic programming. Remarkably, a slight variation in the error metric permits a much faster algorithm: the L∞-optimal approximation to a function can be found in O(n) time [63], using visibility techniques (see also [123, 91]). When the problem is generalized from functions to planar curves, the complexity of the best L∞-optimal algorithms we know of jumps to O(n^2 log n) [63]. These methods use shortest-path graph algorithms or convex hulls. For space curves (curves in 3-D), there are O(n^3 log m) L∞-optimal algorithms [62].
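To make the dynamic-programming idea concrete, here is a minimal Python sketch (our own naming; the error is the discrete L2 error, i.e. the sum of squared vertical deviations at the input samples, and the samples are assumed to have strictly increasing x). As written it recomputes each segment error from scratch and therefore runs in O(mn^3); the O(mn^2) bound quoted above follows from precomputing or incrementally updating those errors:

    def segment_error(points, i, j):
        """Sum of squared vertical deviations of samples i+1..j-1 from the chord (i, j)."""
        (xi, yi), (xj, yj) = points[i], points[j]
        err = 0.0
        for k in range(i + 1, j):
            xk, yk = points[k]
            y_chord = yi + (yj - yi) * (xk - xi) / (xj - xi)   # chord height at xk
            err += (yk - y_chord) ** 2
        return err

    def optimal_polyline(points, m):
        """Best m-vertex approximation (2 <= m <= n) with vertices drawn from the input."""
        n = len(points)
        INF = float("inf")
        # cost[v][j]: least error of an approximation with v vertices ending at sample j
        cost = [[INF] * n for _ in range(m + 1)]
        back = [[-1] * n for _ in range(m + 1)]
        cost[1][0] = 0.0
        for v in range(2, m + 1):
            for j in range(1, n):
                for i in range(j):
                    if cost[v - 1][i] == INF:
                        continue
                    c = cost[v - 1][i] + segment_error(points, i, j)
                    if c < cost[v][j]:
                        cost[v][j], back[v][j] = c, i
        # walk the back-pointers from the last sample to recover the chosen vertices
        chain, j = [], n - 1
        for v in range(m, 0, -1):
            chain.append(j)
            j = back[v][j]
        return [points[k] for k in reversed(chain)]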

Asymptotic Approximation. In related work, McClure and de Boor analyzed the error when approximating a highly continuous function y(x) using piecewise polynomials with variable knots [82, 21]. We discuss only the special case of piecewise-linear approximations. They analyzed the asymptotic behavior of the Lp error of approximation in the limit as m, the number of vertices (knots) of the approximation, goes to infinity. They showed that the asymptotic Lp error with regular subsampling is proportional to m^-2, for any p. The Lp-optimal approximation has the same asymptotic behavior, though with a smaller constant. McClure showed that the spacing of vertices in the optimal approximation is closely related to the function’s second derivative. Specifically, he proved that as m → ∞, the density of vertices at each point in the optimal L2 approximation becomes proportional to |y''(x)|^(2/5). For optimal L∞ approximations, the density is proportional to |y''(x)|^(1/2). Also, as m → ∞, all intervals have equal error in an Lp-optimal approximation.
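Restating these asymptotic results in symbols (our notation: E_p(m) is the Lp error of the m-vertex approximation and d(x) the limiting vertex density):

\[
E_p(m) \;\propto\; m^{-2} \quad (m \to \infty), \qquad
d_{L_2}(x) \;\propto\; |y''(x)|^{2/5}, \qquad
d_{L_\infty}(x) \;\propto\; |y''(x)|^{1/2}.
\]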

The density property and the balanced error property described above can be used as the basis for curve simplification algorithms [82]. Although adherence to neither property guarantees optimality for real simplification problems with finite m, iterative balanced-error methods have been shown to generate good approximations in practice [91, p. 181]. Another caveat is that many curves in nature do not have continuous derivatives, but instead have some fractal characteristics [80]. Nevertheless, these theoretical results suggest the importance of the second derivative, and hence curvature, in the simplification of curves and surfaces.

Summary of Curve Simplification. The Douglas-Peucker algorithm is probably the most commonly used curve simplification algorithm. Most implementations have O(mn) cost, worst case, but typical cost of Θ(n log m). An optimization with worst case cost of O(n log n) is available, however. Optimal simplification typically has quadratic or cubic cost, making it impractical for large inputs.