Chris Johnson
Top Scientific Visualization Research Problems
Scientific visualization as currently understood and practiced is still a relatively new discipline. As a result, we visualization researchers are not necessarily accustomed to undertaking the sorts of self-examinations that other scientists routinely undergo in relation to their work. Yet if we are to create a disciplinary culture focused on matters of real scientific importance and committed to real progress, it is essential that we ask ourselves hard questions on an ongoing basis. What are the most important research issues facing us? What underlying assumptions need to be challenged and perhaps abandoned? What practices need to be reviewed? In this article, I attempt to start a discussion of these issues by proposing a list of top research problems and issues in scientific visualization.
"Study the science of art and the art of science." (Leonardo da Vinci)

Scientists and mathematicians have a long tradition of creating lists of important unsolved problems, both to focus the field's attention and to provide a forum for discussion. Perhaps the most famous list of unsolved problems is David Hilbert's list of 23 problems that he proposed in 1900 (http://mathworld.wolfram.com/HilbertsProblems/html). More recently, Stephen Smale proposed a list of 18 outstanding problems in mathematics.1 These lists include important problems whose pursuit has been crucial to the development of the field. Such lists continue to be created in many areas of science and mathematics and help to motivate future research (see http://www.geocities.com/ednitou for a Web site with links to many such lists).
Because computer science is such a new discipline and computer graphics even newer, it wasn't until 1966 that Ivan Sutherland created the first list of unsolved problems in computer graphics.2 Jim Blinn and Martin Newell created a second list in 1977 (see http://www.siggraph.org/publications/newsletter/v33n1/columns/conf.html). Additional lists were created by Paul Heckbert in 1987 and by Jim Blinn in 1998.3,4 (Interestingly, the majority of these list makers, including myself, were either professors at the University of Utah (Sutherland, Newell) or graduates of it (Newell, Blinn).)
The field of scientific visualization is newer still, “launched” only in 1987 by the National Science Foundation report, Visualization in Scientific Computing.5
(Here I mean the discipline employing computational means, not simply visualizing science, which is as old as science itself.) In 1994, Larry Rosenblum edited a special issue of IEEE Computer Graphics and Applications on research issues in scientific visualization, focusing on recent advances and emerging topics in visualization (vol. 14, no. 2, Mar./Apr. 1994). However, not until 1999 did Bill Hibbard create his list of top 10 visualization problems.6 Hibbard organized his list within the broad categories of

- visual quality,
- integration,
- information,
- interactions, and
- abstractions.
Top scientific visualization research problems
I have been assembling my own list of the most important issues facing researchers in scientific visualization, a list that represents my personal view of the field. In the last year or so, I have been presenting my ideas and updating my list based partly on the feedback I've obtained (see Figure 1). My strongest concern in creating and presenting this list is not to impose my own ideas on the field but rather to start a discussion about important research issues within scientific visualization. I recently participated in an IEEE Visualization 2004 panel proposal called "Can We Determine the Top Unresolved Problems of Visualization?" and note that the views among the authors varied widely.7 This is as it should be. The most important thing is that researchers formulate positions and that their positions be disseminated and discussed. To be clear, the items on my list are not all "problems"; some are possible "directions" (such as items 8 and 10) and some pertain to "best practices" (such as items 1 and 2). However, I think they are all important to think about and deserve inclusion. I note that my list is not ranked; however, I am sure that readers will easily identify their most important issues, and will tell me if I neglected to include their favorites in my final list.
Here, then, is my list of the top problems and issues in visualization research.
1. Think about the science
Too often, creators of visualization technology do not spend enough (or indeed any) time endeavoring to understand the underlying science they are trying to represent, just as application scientists sometimes create crude visualizations without understanding the algorithms and science of visualization. To establish mutually beneficial peer relationships with application scientists and to create effective visual representations, visualization scientists need to spend more time understanding the underlying science, engineering, and medical applications. The benefits of working directly with application scientists are enormous, and yet all too often visualization scientists "hide" from end users. There is no substitute for working side by side with end users to create effective visualizations.
2. Quantify effectiveness
In 1993, during his keynote address at the IEEE Visualization 93 Conference, Fred Brooks said that "scientific visualization is not yet a discipline, although it is emerging as one. Too often we still have an ad hoc technique and rules of thumb." The majority of papers in visualization involve new techniques for characterizing scalar, vector, or tensor fields. However, the new techniques are rarely compared with previous techniques, and their effectiveness is seldom quantified by user studies. Fortunately, the importance of user studies in visualization research is now being recognized (see the excellent article by Robert Kosara and his colleagues for this magazine: "Thoughts on User Studies: Why, How, and When"9). Unfortunately, it is still too rarely the case that the effectiveness of new methods is quantified within the scientific visualization literature. If we wish visualization to become a scientific enquiry, visualization scientists need to understand and use the scientific method, employing its steps:
- Observation and description of a phenomenon or group of phenomena.
- Formulation of a hypothesis to explain the phenomena.
- Use of the hypothesis to predict the existence of other phenomena or to predict quantitatively the results of new observations.
- Evaluation of the proposed methods and quantification of the effectiveness of their techniques.
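As a loose illustration of what quantifying effectiveness can look like in practice, the sketch below compares task-completion times for two hypothetical visualization techniques with a standard two-sample t-test. All numbers and technique names are invented; a real user study would also report accuracy, effect sizes, and study design, not just a p-value.

```python
# A minimal sketch of quantifying effectiveness with a small user study.
# The completion times below are invented for illustration only.
import numpy as np
from scipy import stats

# Task-completion times (seconds) for two hypothetical techniques.
times_technique_a = np.array([41.2, 37.8, 45.1, 39.5, 42.0, 44.3, 38.9, 40.7])
times_technique_b = np.array([33.4, 35.1, 30.9, 36.2, 32.8, 34.5, 31.7, 35.9])

t_stat, p_value = stats.ttest_ind(times_technique_a, times_technique_b)

print(f"mean (technique A): {times_technique_a.mean():.1f} s")
print(f"mean (technique B): {times_technique_b.mean():.1f} s")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```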
3. Represent error and uncertainty
When was the last time you saw an isosurface with "error bars," or streamlines with "standard deviations," or volume visualizations with representations of confidence intervals? With few exceptions, visualization research has ignored the visual representation of errors and uncertainty for 3D visualizations. However, if you look at highly peer-reviewed science and engineering journals, you will see that the majority of 2D graphs represent error or uncertainty within the experimental or simulated data. Why the difference? Clearly, if it is important to represent error and uncertainty in 2D graphs, it is equally important to represent them in 2D and 3D visualizations. It is also often important to quantify error and uncertainty within new computer graphics techniques (see my previous Visualization Viewpoints article in the Sept./Oct. 2003 issue of IEEE Computer Graphics and Applications for further discussion of this subject10).
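As a rough sketch of carrying the spirit of 2D error bars over to field data, the example below colors a synthetic 2D scalar field and modulates its opacity by a made-up per-sample uncertainty, so that less certain regions literally fade from view. Both the field and the uncertainty are fabricated for illustration; the same idea extends to isosurfaces and volume rendering.

```python
# A minimal sketch: fade out regions of a 2D scalar field where the
# (synthetic) uncertainty is high, an analogue of error bars for field data.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
field = np.sin(x) * np.cos(y)                      # synthetic scalar field
uncertainty = 0.5 * np.exp(-((x - 1)**2 + y**2))   # synthetic uncertainty

# Build an RGBA image: color encodes the field, opacity encodes confidence.
norm = (field - field.min()) / (field.max() - field.min())
rgba = plt.cm.viridis(norm)
rgba[..., 3] = 1.0 - uncertainty / uncertainty.max()

plt.imshow(rgba, origin="lower", extent=(-3, 3, -3, 3))
plt.title("Scalar field with uncertainty mapped to opacity")
plt.show()
```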
4. Perceptual issues
Research on the human visual system is vast, yet visualization researchers rarely study or apply what is known about the visual system when designing visualization techniques. The computer graphics and information visualization communities may be ahead in this regard, but there is still much to be gained by all groups in studying the biophysics and psychophysics of the visual system.11
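One small, concrete way to act on perceptual research is colormap choice: rainbow maps such as "jet" introduce false perceptual boundaries, whereas maps such as "viridis" were designed to be perceptually uniform. The minimal sketch below simply renders the same synthetic data under both, as a side-by-side comparison.

```python
# A minimal sketch: the same synthetic data under a rainbow colormap ("jet")
# and a perceptually uniform one ("viridis").
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 1, 300), np.linspace(0, 1, 300))
data = x * np.sin(6 * np.pi * y) + y        # smooth synthetic ramp

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, cmap in zip(axes, ["jet", "viridis"]):
    im = ax.imshow(data, cmap=cmap, origin="lower")
    ax.set_title(cmap)
    fig.colorbar(im, ax=ax, shrink=0.8)
plt.show()
```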
6. Human–computer interaction
Effective human–computer interaction was on Sutherland's 1966 list.2 HCI continues to be one of the top research and development goals for both visualization and computer graphics. In such a short article I cannot begin to address the importance of effective interaction, much less the details of how to achieve it, especially given that HCI is a field unto itself. A starting place might be Ben Shneiderman's visual-information-seeking mantra: "Overview first, zoom and filter, then details-on-demand."12 Two recent papers by Andries van Dam and his colleagues discuss the overall progress in interaction and provide comments on research challenges.13,14
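Shneiderman's mantra can also be read as an interface contract rather than a prescription for particular widgets. The toy sketch below, with invented class and method names over a flat point dataset, is one way to phrase "overview, zoom and filter, details-on-demand" as an API.

```python
# A toy reading of "overview first, zoom and filter, then details-on-demand"
# as an API. Names and the data layout are invented for illustration.
import numpy as np

class PointBrowser:
    def __init__(self, points, values):
        self.points = points      # (N, 2) positions
        self.values = values      # (N,) scalar attribute

    def overview(self):
        """Summary statistics over the whole dataset."""
        return {"count": len(self.values),
                "min": float(self.values.min()),
                "max": float(self.values.max())}

    def zoom_and_filter(self, xmin, xmax, ymin, ymax, threshold):
        """Indices of points inside a region whose value exceeds a threshold."""
        x, y = self.points[:, 0], self.points[:, 1]
        mask = (x >= xmin) & (x <= xmax) & (y >= ymin) & (y <= ymax)
        return np.nonzero(mask & (self.values > threshold))[0]

    def details(self, index):
        """Full record for a single selected point."""
        return {"position": self.points[index].tolist(),
                "value": float(self.values[index])}

browser = PointBrowser(np.random.rand(1000, 2), np.random.rand(1000))
print(browser.overview())
hits = browser.zoom_and_filter(0.2, 0.4, 0.2, 0.4, threshold=0.5)
print(browser.details(hits[0]) if len(hits) else "no points selected")
```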
7. Global/local visualization (details within context)
Currently, most graphical techniques emphasize either a global or a local perspective when visualizing vector or scalar field data, yet ideally one wishes for simultaneous access to both perspectives. The global perspective is required for navigation and for developing an overall gestalt, while a local perspective is required for detailed information extraction. Most visualization methods display either global variations, as is the case with line integral convolution and other dense vector field visualization methods, or local variations, as occurs in the use of streamlines.

When one uses a global operation, such as drawing a vector at every cell, it is impossible to navigate due to the visual occlusion of the many vectors. However, local methods such as the vector rake, which avoid this occlusion, provide only a limited picture of the overall field.
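To make the local side of this distinction concrete, here is a minimal streamline tracer over a synthetic 2D vector field using fixed-step RK4 integration; a dense method such as line integral convolution would instead produce a texture covering the entire domain. The field, seed, and step parameters are invented for illustration.

```python
# A minimal sketch of a "local" technique: trace a streamline through a
# synthetic 2D vector field with fixed-step RK4 integration.
import numpy as np

def velocity(p):
    """Synthetic circulating vector field at point p = (x, y)."""
    x, y = p
    return np.array([-y, x])

def trace_streamline(seed, h=0.01, steps=2000):
    """Integrate a streamline from a seed point; returns an array of positions."""
    points = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        p = points[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        points.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(points)

line = trace_streamline(seed=(1.0, 0.0))
print(line[:3], "...", line[-1])   # should stay near the unit circle
```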
8. Integrated problem-solving environments (PSEs)
Visualization is now most often seen as a postprocessing step in the scientific computing pipeline (geometric modeling → simulation → visualization). However, scientists now require more from visualization than a set of results and a tidy showcase in which to display them. The 1987 National Science Foundation Visualization in Scientific Computing workshop report poses the problem in these terms:
Scientists not only want to analyze data that results from super-computations; they also want to interpret what is happening to the data during super-computations. Researchers want to steer calculations in close-to-real-time; they want to be able to change parameters, resolution or representation, and see the effects. They want to drive the scientific discovery process; they want to interact with their data.
The most common mode of visualization today at national supercomputer centers is batch. Batch processing defines a sequential process: compute, generate images and plots, and then record on paper, videotape or film.

Interactive visual computing is a process whereby scientists communicate with data by manipulating its visual representation during processing. The more sophisticated process of navigation allows scientists to steer, or dynamically modify, computations while they are occurring. These processes are invaluable tools for scientific discovery.5
Although these thoughts were reported more than 15 years ago, they express a very simple and still current idea: scientists want more interaction (see item 6 on the list) between modeling, simulation, and visualization than is currently made possible by most scientific computing codes. The scientific investigation process relies heavily on answers to a range of "what if?" questions. Integrated PSEs that tightly couple interactive visualization techniques with geometric modeling and simulation techniques allow these questions to be answered more efficiently and effectively and thus help to guide the investigation as it occurs. Such PSEs can also provide computational steering and more interactive modes of investigation. Integration also requires that we develop improved tools and techniques for managing visualizations. Similarly, while VTK, the Visualization Toolkit, is a great first step, integration requires further research in visualization software architecture.
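The computational-steering idea from the NSF report can be caricatured in a few lines: a simulation loop that periodically publishes its current state for visualization and picks up parameter changes between steps. Everything below (the 1D heat-diffusion toy model, the parameter name) is a stand-in for the coupled modeling, simulation, and rendering components of a full PSE.

```python
# A toy sketch of computational steering: a simulation loop that exposes its
# state for visualization every few steps and accepts parameter updates
# between steps. The 1D heat-diffusion "simulation" is a stand-in.
import numpy as np

def simulate(steering, n=100, steps=500, publish_every=50):
    u = np.zeros(n)
    u[n // 2] = 1.0                          # initial heat spike
    for step in range(steps):
        alpha = steering.get("alpha", 0.1)   # steerable diffusion coefficient
        u[1:-1] += alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
        if step % publish_every == 0:
            # In a PSE this would hand the field to an interactive renderer.
            yield step, u.copy()

steering = {"alpha": 0.1}
for step, field in simulate(steering):
    print(f"step {step:4d}  max = {field.max():.3f}")
    if step == 200:
        steering["alpha"] = 0.2              # a "what if?" change, applied mid-run
```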
9. Multifield visualization
Computational field problems, such as computational fluid dynamics (CFD) and electromagnetic field simulation, typically produce multiple interrelated scalar, vector, and tensor fields; effective techniques for visualizing several such fields simultaneously are still lacking.
10. Integrating scientific and information visualization
The amount of information available to scientists from large-scale simulations, experiments, and data collection is unprecedented. In many instances, the abundance and variety of information can be overwhelming. The traditional method for analyzing and understanding the output from large-scale simulations and experiments has been scientific visualization. However, an increasing amount of scientific information collected today has high dimensionality and is not well suited to treatment by traditional scientific visualization methods. To handle high-dimensional information, so-called information visualization techniques have emerged. There is now a growing community of information visualization scientists. Curiously, the information visualization and scientific visualization communities have evolved separately and, for the most part, do not interact (see the May/June 2003 Visualization Viewpoints article).15 As such, a significant gap has developed in analyzing large-scale scientific data that has both scientific and information characteristics. The time has come to break down the artificial barriers that currently exist between the information and scientific visualization communities and to work together to solve important problems. A simple example where scientific and information visualization techniques could have an immediate positive benefit for application scientists is in analyzing, understanding, and representing error and uncertainty in complex 3D simulations (see item 3 and my earlier Visualization Viewpoints article10).
11. Feature detection
Analysis of complex, large-scale, multidimensional data is recognized as an important component in many areas, including computational fluid dynamics, computational combustion, and computational mechanics. Modern high-performance computers have speeds measured in teraflops and produce simulation data sets whose sizes are measured in gigabytes to terabytes and even petabytes. With such large-scale data, locating and representing the features of interest becomes a significant challenge in itself.
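As a small illustration of automated feature detection, the sketch below flags vortex-like regions of a synthetic 2D velocity field by thresholding vorticity magnitude. Real pipelines use more robust criteria (Q-criterion, lambda-2) and must scale to out-of-core data, but the derive-then-threshold shape of the computation is similar. All fields and thresholds here are invented.

```python
# A minimal sketch of feature detection: flag high-vorticity cells in a
# synthetic 2D velocity field.
import numpy as np

n = 256
x, y = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
# Synthetic field: a decaying vortex centered at the origin.
u = -y * np.exp(-(x**2 + y**2))
v =  x * np.exp(-(x**2 + y**2))

dx = 4.0 / (n - 1)
dvdx = np.gradient(v, dx, axis=1)
dudy = np.gradient(u, dx, axis=0)
vorticity = dvdx - dudy

feature_mask = np.abs(vorticity) > 0.5 * np.abs(vorticity).max()
print(f"{feature_mask.sum()} of {feature_mask.size} cells flagged as vortex core")
```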
12. Time-dependent visualization
Currently, most interactive visualization techniques involve static data. The predominant method for visualizing time-dependent data is first to select a viewing angle, then to render the time steps offline and play the visualization back as a video. While this often works adequately for presentational purposes, the lack of interactive engagement and exploration undermines the effectiveness and relevance of investigative visualization. While there are a few recent examples of interactive time-dependent visualization techniques, there is room for considerable improvement in this area.16
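One low-tech step beyond the render-then-play-a-video workflow is to keep the time steps in memory (or stream them) and let the user scrub through time while the view stays live. A minimal matplotlib sketch over synthetic time steps:

```python
# A minimal sketch of interactive time scrubbing over precomputed time steps,
# instead of baking one camera path into a video. Data here is synthetic.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 200), np.linspace(0, 2 * np.pi, 200))
steps = [np.sin(x + 0.2 * t) * np.cos(y - 0.1 * t) for t in range(50)]

fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.2)
image = ax.imshow(steps[0], origin="lower", cmap="viridis", vmin=-1, vmax=1)

slider_ax = fig.add_axes([0.2, 0.05, 0.6, 0.03])
slider = Slider(slider_ax, "time step", 0, len(steps) - 1, valinit=0, valstep=1)

def update(val):
    image.set_data(steps[int(slider.val)])
    fig.canvas.draw_idle()

slider.on_changed(update)
plt.show()
```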
13. Scalable, distributed, and grid-based visualization
The available graphics throughput of PC graphics cards continues to grow. At the same time, other powerful graphics facilities are becoming available as part of grid-based computing systems. Missing are effective ways to tap into the available graphics capabilities of these distributed graphics systems to create scalable visualizations. It is clear that we need innovation on all fronts: hardware for plugging in multiple PC graphics cards, software to efficiently coordinate distributed visualization resources, and scalable algorithms to effectively take advantage of such distributed visualization resources.
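One of the basic building blocks of distributed visualization is image compositing: each node renders its portion of the data, and the partial images are merged using their depth buffers (sort-last compositing). The sketch below composites two synthetic partial renderings in numpy; in practice this runs across many nodes with algorithms such as binary-swap.

```python
# A minimal sketch of sort-last compositing: merge two partial renderings
# using their depth buffers. A real system does this across many nodes.
import numpy as np

def depth_composite(color_a, depth_a, color_b, depth_b):
    """Keep, per pixel, the fragment closer to the camera (smaller depth)."""
    closer_a = depth_a <= depth_b
    color = np.where(closer_a[..., None], color_a, color_b)
    depth = np.where(closer_a, depth_a, depth_b)
    return color, depth

# Synthetic partial renderings from two "nodes".
h, w = 64, 64
color_a = np.random.rand(h, w, 3); depth_a = np.random.rand(h, w)
color_b = np.random.rand(h, w, 3); depth_b = np.random.rand(h, w)

color, depth = depth_composite(color_a, depth_a, color_b, depth_b)
print(color.shape, depth.shape)
```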
14. Visual abstractions
Hibbard is entirely correct to emphasize the importance and necessity of effective visual abstractions. As he says, we need to
… define effective abstractions for the visualization and user interaction process. Examples include the relational and field data models for information being visualized, the dataflow model of the visualization process, mathematical models of human visual perception, mathematical models of interaction, models of users and their tasks, and more general models of computing and distributed computing. Effective abstractions have generality where it is needed but also make limiting assumptions that permit efficient and usable implementations. In addition to its practical consequences, this is a foundational problem for visualization.6
While this item could easily be absorbed within other items on my list, in particular items 10 and 15, it is so important that it deserves its own bullet.
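The "dataflow model of the visualization process" that Hibbard mentions is easy to caricature in a few lines: data flows through a chain of modules (source, filter, mapper), and each module knows only its upstream neighbor. This is, loosely, the abstraction behind dataflow visualization systems such as VTK; the class and method names below are invented for illustration.

```python
# A toy dataflow abstraction: source -> filter -> mapper, with the sink
# pulling data through the chain. Names are invented for illustration.
import numpy as np

class Source:
    def output(self):
        return np.random.rand(1000)              # pretend this is a data file

class ThresholdFilter:
    def __init__(self, upstream, threshold):
        self.upstream, self.threshold = upstream, threshold
    def output(self):
        data = self.upstream.output()
        return data[data > self.threshold]

class HistogramMapper:
    def __init__(self, upstream, bins=10):
        self.upstream, self.bins = upstream, bins
    def output(self):
        counts, edges = np.histogram(self.upstream.output(), bins=self.bins)
        return counts

pipeline = HistogramMapper(ThresholdFilter(Source(), threshold=0.5), bins=5)
print(pipeline.output())     # executing the sink pulls data through the chain
```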
15. Theory of visualization

Conclusion
Researchers in scientific visualization will determine the futures not only of their own field, but of the many scientific fields to which they contribute. We can best take advantage of our position by ensuring that our discipline is as rigorous and productive as any other science even as we vigorously pursue technological innovation. In this way, we may motivate visualization researchers to think either about new problems or about persistent problems in a new way.
Acknowledgments
I thank David Laidlaw, Helwig Hauser, Andy van Dam, Chuck Hansen, Ross Whitaker, Claudio Silva, David Weinstein, and Steven Parker, as well as the reviewers. Special thanks to David Banks for his input.
Readers may contact Chris Johnson at the Scientific Computing and Imaging Inst., Univ. of Utah, 50 S. Central Campus Dr., Rm. 3490, Salt Lake City, UT 84112, crj@sci.utah.edu.
Readers may contact Theresa-Marie Rhyne by email at tmrhyne@ncsu.edu.