Temporal Focusing and Temporal Brushing: Assessing their Impact in Geographic Visualization.

 

Mark Harrower (1)

GeoVISTA Center, The Pennsylvania State University, 302 Walker Building, University Park, PA 16802-5011, USA

 

Amy L. Griffin (2)

GeoVISTA Center, The Pennsylvania State University, 302 Walker Building, University Park, PA 16802-5011, USA

 

Alan MacEachren (3)

GeoVISTA Center, The Pennsylvania State University, 302 Walker Building, University Park, PA 16802-5011, USA

 

 

Abstract

We report here on the second stage of a project directed toward developing and assessing a set of space-time visualization tools (temporal focusing and temporal brushing). These tools were implemented in the EarthSystemsVisualizer (ESV), a geographic visualization system designed to allow students to explore and conceptualize the spatial and temporal aspects of multivariate, continuously changing phenomena (specifically weather and climate), and to develop hypotheses about those processes. Our intention was not simply to determine if these tools "worked," but rather to determine if they influenced problem solving and learning. Focusing and brushing had little impact on students' ability to answer a series of skill-testing questions. However, students who used temporal brushing and focusing were better able to formulate hypotheses about the relationships between climate variables than students without access to these tools. Performance suffered for students who were confused by the focusing and brushing tools: students without the tools performed better than those who were confused by the tools, but not as well as those who had the tools and understood how to use them. Temporal focusing caused more confusion than temporal brushing. One conclusion we have drawn from this research is that the level of the visualization system has to be well matched to the level of the user: students who already possessed an advanced understanding of climatology benefited less than students with an intermediate or novice-level understanding.

 

 

Introduction

The ways in which we explore data and understand relationships within and among data sets have changed dramatically in the last decade. As a result, new visualization methods, and the graphical user interfaces needed to implement them, have been developed. Conceptual methods adapted from exploratory data analysis, such as sequencing [Slocum, 1988; Monmonier, 1992; Peterson, 1995], brushing [Monmonier, 1989; Edsall and Peuquet, 1996], and focusing [Cook et al., 1996; MacEachren et al., 1997], have led to the development of specific tools such as linked geographic displays [MacDougall, 1992], coupled statistical-geographic representations [Monmonier, 1989; Becker et al., 1988], and flexible on-screen classification techniques [MacEachren et al., 1997]. In turn, these research developments have helped to redefine how we use computers in geographic analysis. Although there have been many advances in methods for geographic visualization in the past few years, questions about how these methods influence problem solving and how they facilitate knowledge construction remain unanswered. Answering them demands that new tools for geographic visualization be tested in real-world situations with real users so that strengths, weaknesses, and opportunities for improvement can be identified.

This paper reports on stage two of an ongoing research effort within the GeoVISTA Center at Penn State directed toward developing a set of space-time visualization tools designed to facilitate earth science learning. Our research has two primary goals. The first is to integrate two exploratory data analysis methods (brushing and focusing) with map animation to produce a manipulable dynamic representation that facilitates a conceptualization of time as both linear and cyclic. The second goal is to explore the use of these tools in a geovisualization system that allows users to conceptualize the spatial and temporal aspects of multivariate continuously changing phenomena (specifically global climate data) and to develop hypotheses about those processes. To meet these goals, we have built the EarthSystemsVisualizer (ESV). The ESV facilitates examination of three aspects of the global weather system (land temperature, ocean temperature, and cloud cover) as they relate to one another in both time and space.

Stage one of this project involved building and assessing a prototype of the EarthSystemsVisualizer (Figure 1); assessment was carried out using a focus-group methodology. In the second stage, reported here, we assess the impact of temporal brushing and temporal focusing tools on students' ability to develop an understanding of earth-climate processes. Student participants' use of two versions of the ESV was compared: one that contained both temporal focusing and temporal brushing tools, and a second that did not. We report on student performance in a series of typical earth science learning tasks to which each version of the ESV was applied. A key objective of this study is to determine whether the difference in ESV tools prompted different knowledge schemata, stimulated different approaches to problem solving, and, ultimately, led to the generation of different hypotheses about the relationships between climate variables over both space and time. Our intention was not simply to determine if these tools "worked," but rather to determine if they influenced problem solving and learning, and if so, how.

Figure 1. EarthSystemsVisualizer

Before discussing the experimental results, we provide an overview of the EarthSystemsVisualizer, its purposes, development, and implementation. A discussion of the techniques of brushing and focusing and their implementation in the ESV is included here. This introduction to the ESV is followed by a report on the rationale, methodology, and results of the formal user testing of the ESV.

 

The EarthSystemsVisualizer

System Design

The ESV is an interactive, exploratory visualization system designed as an educational tool for novice-level users, specifically high school and introductory-level university students. In this way, the ESV differs from many visualization systems, which are designed for expert researchers who often already possess high-level knowledge about both the subject matter (e.g., climatology) and strategies for interacting with the data (e.g., task-directed learning). Nevertheless, we believe that advanced exploratory data analysis concepts such as focusing and brushing (developed to support visualization by experts) are appropriate for all levels of expertise (though their specific implementations may differ), and that these tools can assist in problem solving and learning even at an introductory level.

Data in the ESV are stored as layers the user can turn on and off to create visual overlays (see Figure 2). Because each layer is semi-transparent, multiple layers can be simultaneously visualized. Traditionally, raster-based data layers would have to be viewed separately, or, when commensurate, as an average or composite. The ability to visually superimpose semi-transparent data layers facilitates learning about relationships among phenomena, both spatially and temporally. For example, the ESV can be used by students to understand the spatial association between clouds and air temperature at the surface. Moreover, because the ESV supports spatio-temporal data, students could look for a possible lag period in the relationship between clouds and air temperature.
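The ESV itself was not implemented in Python; the following is only a minimal sketch of the semi-transparent overlay idea, using standard alpha ("over") compositing with randomly generated stand-ins for the data layers:

```python
import numpy as np

def composite_layers(base_map, layers, opacities):
    """Blend semi-transparent raster layers, bottom to top, over a
    base map using the standard "over" operator. Turning a layer
    off simply means giving it an opacity of 0."""
    out = base_map.astype(float).copy()
    for rgb, alpha in zip(layers, opacities):
        out = alpha * rgb + (1.0 - alpha) * out
    return out

# stand-in rasters: one cell per degree of latitude/longitude
h, w = 180, 360
base_map   = np.zeros((h, w, 3))       # dark base map
land_temp  = np.random.rand(h, w, 3)   # hypothetical temperature layer
cloud_frac = np.random.rand(h, w, 3)   # hypothetical cloud-cover layer

frame = composite_layers(base_map, [land_temp, cloud_frac], [0.5, 0.4])
```

Because each layer is blended rather than opaque, students can see, for example, cloud cover and land temperature in the same frame rather than switching between separate views.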

Figure 2. Data layer controls

Figure 3. Linear temporal legend

Figure 4. Cyclic temporal legend

Two kinds of temporal legends are incorporated into the ESV. A linear temporal legend (see Figure 3) denotes the day of the year, while a cyclic temporal legend (see Figure 4) denotes the hour of the day. Legends in a dynamic learning environment serve a dual role, as a key to the "sign-vehicles" embedded in the display (i.e., to the symbols used to represent phenomena) and as a control on parameters of those sign-vehicles (as what is often called an "interactor"). By providing two temporal legends, we are representing time as both linear and cyclic. Dual legends are warranted because earth science learning objectives related to climate will include attention to both long-term trends (e.g. global warming) and recurring patterns (e.g. diurnal cycles of temperature or seasonal variation in precipitation). Developing an understanding of the complex spatial and temporal relationships between earth climate phenomena requires that students be able to conceptualize both the linear and cyclic nature of these phenomena.
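As a sketch of this dual representation (not code from the ESV), a single animation timestamp can be projected onto both legends: a linear position along the year and a cyclic position on a 24-hour clock face:

```python
import math
from datetime import datetime

def legend_positions(t: datetime):
    """Project one animation timestamp onto both temporal legends:
    a linear day-of-year position and a cyclic hour-of-day angle."""
    day_of_year = t.timetuple().tm_yday        # 1..366, linear legend
    hour = t.hour + t.minute / 60.0
    angle = 2.0 * math.pi * hour / 24.0        # radians, cyclic legend
    return day_of_year, angle

print(legend_positions(datetime(1999, 7, 1, 15, 0)))  # -> (182, ~3.93)
```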

Temporal Focusing and Brushing

Apart from standard animation interface tools, such as direction and speed controls, users of the ESV have access to temporal focusing and temporal brushing tools. The former is used to adjust the start and end dates of an animation segment (Figure 5), while the latter is used to select which times of day are included in the animation (Figure 6). As implemented here, temporal focusing provides temporal delimiters (adjustable start and end points) that can be moved along the linear temporal legend to focus the animated sequence on a smaller time window. These temporal delimiters borrow the graphical metaphor of indent tabs on a typewriter or word processor. As we discovered in our earlier focus-group testing, most participants understood these temporal "pull tabs" and used them successfully.
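In effect, dragging the two delimiters restricts the animated sequence to the frames inside the chosen window. A minimal sketch of this filtering step (the frame structure here is hypothetical, not the ESV's internal representation):

```python
def temporal_focus(frames, start_day, end_day):
    """Temporal focusing: keep only the frames whose day-of-year lies
    between the two draggable delimiters (inclusive)."""
    return [f for f in frames if start_day <= f["day"] <= end_day]

# a year of daily frames, focused down to a two-week window
frames = [{"day": d} for d in range(1, 366)]
focused = temporal_focus(frames, start_day=152, end_day=165)
```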

 

Figure 5. Temporal focusing controls

Figure 6. Temporal brushing controls

Brushing is a powerful concept that has been discussed and implemented in other contexts [see Monmonier, 1989; MacEachren et al., 1997]. Its extension into the temporal domain allows users to search for geographic patterns that may appear only at certain times, such as in the morning hours. Brushing can also be used to explore and understand the spatio-temporal behavior of geographic phenomena that may be manifest only at certain times or over certain time intervals. For example, if one is interested in linear changes in daily maximum temperatures over a one-week period, the ability to suppress the dominant (and potentially overwhelming) diurnal temperature cycle is very useful; more subtle patterns or longer-term trends might then visually emerge "from the noise." As visualization environments become increasingly data-rich, the ability to filter data using brushing grows increasingly useful.
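Where focusing narrows the period shown, temporal brushing selects times of day across that period. A sketch of the diurnal-suppression example above, again with a hypothetical frame structure:

```python
def temporal_brush(frames, hours):
    """Temporal brushing: keep only frames whose time of day falls in
    the brushed set of hours."""
    return [f for f in frames if f["hour"] in hours]

# one frame every 3 hours over a one-week period
frames = [{"day": d, "hour": h} for d in range(1, 8) for h in range(0, 24, 3)]

# suppress the diurnal cycle by viewing only mid-afternoon frames,
# roughly when daily maximum temperatures occur
afternoon = temporal_brush(frames, hours={15})
```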

 

User Testing

Pre-Testing the ESV

Before proceeding with formal user testing, we conducted two focus-group sessions. Focus groups have proven to be an effective research tool in cartographic design [Monmonier and Gluck, 1994], especially when used in conjunction with more traditional survey design [Pickle et al., 1995]. To reduce the risk of discovering serious flaws in the visualization system only after expensive user testing had been conducted, we used focus groups to "debug" the ESV.

Focus-group testing usually consists of a small group of participants (fewer than 10) who, with the guidance of a trained facilitator, are led in an informal yet structured discussion about the "product" under consideration. Because these sessions are intentionally flexible, they often generate unanticipated, yet highly useful, qualitative feedback for the system designer. Our initial focus group was run with cartographic experts, so defined because of their experience in designing and using interactive and animated cartographic systems. The second session was run with our target population: introductory-level undergraduate students with little or no experience in either cartography or climatology.

Surprisingly, both focus groups generated similar insights. Some of the concerns participants identified with the initial ESV included: difficulty with the design and legibility of the cyclic temporal legend, poor behavior of the VCR-style navigation buttons, and the need for better visual and audio feedback from the interface. As a result, parts of the ESV interface were re-designed, others were eliminated, and some new functions (such as frame-by-frame advance) were added.

 

Methodology

Mersey [1990] notes that good experiment design in map testing should strive to replicate a realistic experience, and she critiques cartographic research that tests highly artificial map tasks. Testing, she argues, must reflect the fact that using a map is a holistic process; only when the map "is viewed in its totality" do "the spatial arrangement of the symbols acquire significance" [Mersey, 1990, p. 33]. Testing sub-components of maps in isolation prevents the reader from using (as they normally would) the graphical context of that sub-component and its relation to the entire document. A well-designed map (or visualization system) is truly more than the sum of its parts. Therefore, robust experiments employ real data drawn on real base maps and test both a variety of users and a variety of map tasks.

Good experiment design takes into account the characteristics of the group being tested. For example, a questionnaire that imposes tasks that challenge or exceed the abilities of the respondents will produce poor results [Sirken et al., 1995]. In general, the difficulty of test questions and the ability of participants to answer them should be matched. When selecting participants, it is important to recognize that the more motivated the map user is, the more transparent the graphic design becomes. This is an issue when cartographic research tests highly motivated individuals, such as professional colleagues or paid participants.

The use of a small pilot test is strongly recommended as a means of assessing and refining the survey instrument [Mersey, 1990]. Because the focus groups were designed to gather student opinions on the design of the ESV, rather than to test the utility and quality of the survey questions, it was necessary for us to pilot test the final version of the survey questions on our target audience before proceeding to paid user testing.

 

The Questionnaire

Two versions of the ESV were created and tested. The first was an "enhanced" version, which contained tools for temporal brushing and focusing. The second contained all of the interface tools except temporal brushing and focusing. Two corresponding questionnaires were prepared and administered to the two groups in our experiment (those with the "enhanced" ESV and those without). The surveys were identical, except for the addition of questions specifically related to brushing and focusing that were given to the group with these tools. Participants who had access to the tools were asked three additional multiple-choice questions dealing with the controls, as well as two additional open-ended text questions that prompted them to describe reasons for using brushing and focusing. These open-ended questions were used to determine whether students understood the purpose of the tools.

The survey instrument was designed to test students'

  1. understanding of the interface tools (specifically temporal brushing and temporal focusing);
  2. understanding of the climate data;
  3. knowledge about the climate phenomena (both before and after using the ESV);
  4. ability to generate hypotheses about observed relationships between climate variables; and
  5. overall impressions of the system.

A variety of question formats was used, including multiple-choice questions, short written answers, and semantic-differential word pairs. Although space does not permit us to present the complete questionnaire, a copy can be found online at http://www.geog.psu.edu/~harrower/questionnaire.htm. Each type of question is briefly discussed below.

Three multiple-choice questions were designed to elucidate whether students with access to temporal brushing and focusing understood the basic functions of the interface tools. Six multiple-choice questions were designed to test whether students understood and could interpret basic information and patterns from the map. We wanted to know whether students were able to use the ESV to answer questions about particular places, specific points in time, and the attributes/processes represented in the maps. Each question was constructed to hold two of these constant while testing knowledge of the third.

A series of open-ended text questions was constructed for two reasons: to assess each student's level of knowledge of climate variable relationships before using the tool, and to gain insight into how the ESV does (or does not) promote the ability to generate hypotheses explaining those relationships. An open-ended text question posed before students used the ESV permitted a characterization of the knowledge students brought to the survey. The same question was asked at the end of the survey, along with an additional open-ended question about climate relationships. We did not inform respondents before the experiment that they would later be asked the same question, as an a priori understanding of the experiment design has been shown to artificially inflate test results [MacEachren, 1982]. Asking the same question twice allowed us to assess whether the ESV helped students to improve the quality and sophistication of their initial hypotheses.

Semantic differentials are bipolar word pairs that have been used to measure people's subjective responses to the entire map [Harrower et al., 1997; Gilmartin, 1978; Petchenik, 1974]. We created 21 word pairs to help characterize subjects' overall reactions to three aspects of the ESV: the controls, the map (visualization), and the data. The positive-negative polarity of the word pairs was randomly assigned to avoid leading students to an answer.
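Because the polarity was randomized on the printed form, responses on reversed items must be re-coded before analysis so that the same end of the scale is always the positive pole. A minimal sketch of this standard re-coding step (our analysis was not necessarily scripted this way):

```python
def recode(rating, reversed_item):
    """Re-code a 1-7 semantic-differential rating so that 7 always
    corresponds to the positive pole, regardless of the (randomized)
    order in which the word pair was printed."""
    return 8 - rating if reversed_item else rating

print(recode(2, reversed_item=True))   # -> 6
print(recode(2, reversed_item=False))  # -> 2
```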

In an effort to minimize and standardize the amount of interaction between participants and us, no verbal instructions were given to participants on how to use the ESV. Instead, participants learned about the system from a series of on-screen instructions. Participants were encouraged to "play" with the system as much as they wanted to before starting the test questions. They were told that there was no time limit for the test.

 

Analysis and Results

Thirty-four undergraduate students participated in our study. Gender participation was equal (17 male/17 female) and the average age of participants was 20.6 years. Testing took between 25 and 45 minutes, depending in part on the version of the survey that was used. All participants received $10 for taking part in the experiment.

Degree of Confusion in using the Tools

Temporal focusing created more confusion than temporal brushing. Based on the written answers of the 17 participants who used temporal focusing, we determined that six did not seem to understand why focusing might be useful. These confused participants incorrectly thought it was simply another kind of stop button: either they did not notice the appearance of the temporal pull-tabs, or they did not know how to use them. In comparison, only one participant could not describe a potential, reasonable use for brushing. As a second measure of student understanding of the EDA tools, students were asked to choose the most appropriate tool for completing a specific task. For example, the correct answer to the question "To look at fewer days…" was "Activate temporal focusing." Five participants had a partial understanding of focusing (they answered some questions correctly), while the remainder (12) generated perfect scores on these questions. Based on student responses to the multiple-choice questions, we determined that temporal brushing was better understood than temporal focusing.

Sophistication of Hypotheses

One of the challenges we faced was how to judge the quality and sophistication of written responses consistently. Three basic strategies were developed. First, we noted whether participants made reference to space, time, and process in their answers: answers that referred to all three were judged to be more sophisticated than answers that, for example, focused solely on the spatial association between clouds and air temperature. Second, we considered whether students made reference to specific places in formulating a hypothesis, since synthesizing visual information into a coherent theory is a high-level cognitive task; a good example contained the line "…as can be seen, during the day over the Sahara clouds disappear…". Lastly, the participant's level of confidence in answering the questions was also considered: for example, answers starting with "I'm not sure, but…" or finishing with a question mark were judged to be less confident. According to these guidelines, we rated hypotheses as advanced (8 students), intermediate (11 students), or novice (15 students). Examples are included in Table 1, and an illustrative sketch of the first criterion follows the table.

 

Table 1. Examples of student hypotheses, in answer to the question "How are clouds and air temperature related?"

Novice: "More clouds when cold than warm?"

Intermediate: "Temperature determines the existence of clouds. Higher temperatures will ‘burn off’ clouds."

Advanced: "Clouds are formed when the temperature and the dew point are equal and the water vapor is released in the atmosphere, thus forming clouds. The temperature and moisture content of the air have to at a certain conditions for clouds to form" [sic]

Impact Upon Learning

Students who demonstrated an advanced understanding of climate phenomena before the test session showed little or no improvement after using the ESV in either form. Students with a novice- or intermediate-level understanding did, however, benefit from the ESV. Many of them generated more complete theories, were able to draw on specific examples, or answered with greater confidence after the session. Students classified as novice or intermediate who used the enhanced ESV (with access to temporal brushing and focusing) showed greater improvement than those without. Of the six students in the enhanced group who initially wrote a novice-level answer, all improved to at least an intermediate level, and two progressed from an answer of "I'm not sure" to an advanced-level hypothesis after using the enhanced ESV. By comparison, only half of the novice-level participants using the standard ESV showed an improvement after the session. It appears that temporal brushing and focusing facilitate knowledge construction, although students who were confused by these tools showed less improvement in their hypotheses than students who demonstrated an understanding of them.

Accuracy

We hypothesized that students who had access to brushing and focusing would perform better on a series of skill-testing questions. However, no statistically significant difference was observed between the groups using the standard and enhanced ESV: the average number of correct answers was 4.23/6.00 (70.5%) for the enhanced group versus 4.06/6.00 (67.7%) for the standard group. There was, however, a clear bimodal distribution in the responses from the enhanced group. Students who were confused by focusing and brushing did poorly on the skill-testing questions (60% correct); by comparison, students who had the tools and understood how to use them generated some of the highest scores in the test (75% correct). Interestingly, four of the five students who produced perfect scores used the enhanced ESV. This lends support to our belief that when students are presented with tools that they do not understand, their performance will decrease to a level below that of students with no tools at all.

At the end of the sessions, students were asked to rate their ability to use the ESV on a scale from one to four. We found no relationship between self-reported level of confidence and how well participants did on any of the components of the test.

Using the level of sophistication of written answers (discussed previously) as a grouping variable (novice, intermediate, advanced), the intermediate group scored the highest of any group on the skill-testing questions (80% correct). In contrast, the advanced-level participants (64% correct) and novice-level participants (63% correct) generally did poorly on the skill-testing questions. This means that although some students started the test with an advanced understanding of climate, they did not necessarily do well on questions specific to using the ESV. Many of the advanced students answered some questions quickly, perhaps too quickly, possibly in part because they did not feel the ESV was helpful. Results from the semantic word pairs, discussed below, support this hypothesis.

Time Differences

Students with access to focusing and brushing took longer to formulate written answers than those without the tools. Using a Wilcoxon rank-sum test, time differences were statistically significant both for individual questions (group means: 200 seconds versus 156 seconds, p = 0.01) and for total time to take the test (829 seconds versus 695 seconds, p = 0.01). These results can be interpreted to mean that the more complex the interface, the longer it will take to explore and theorize about observed relationships within the data. In this sense, focusing and brushing placed a greater burden on the user because there is "more interface" to use. Nevertheless, the extra time spent learning and using the system seems justified in light of the more complete hypotheses most of the students were able to generate using these tools.
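For readers who wish to reproduce this kind of comparison, the Wilcoxon rank-sum test (equivalent to the Mann-Whitney U test used in the next section) is available in SciPy. The timing data below are simulated stand-ins, since only the group means are reported here:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# simulated per-participant total test times (seconds); only the group
# means (829 s enhanced vs. 695 s standard) come from our study
enhanced = rng.normal(loc=829, scale=120, size=17)
standard = rng.normal(loc=695, scale=120, size=17)

stat, p = ranksums(enhanced, standard)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.4f}")
```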

Semantic Word Pairs

Twenty-one bipolar word pairs were used to allow participants to rate, on a 7-point scale, (a) the controls, (b) the map, and (c) the data (see Table 2). Using a Mann-Whitney test, statistically significant differences (at the 0.05 level) were discovered between a number of groups. One pattern that emerged was that individuals whom we rated as having an intermediate-level understanding of climate had the most positive reaction to the interface: they thought the interface was more attractive, better organized, and less confusing than either the advanced- or novice-level users did. Not surprisingly, people who were confused by focusing thought the controls were hindering, but they also thought the map(s) were more truthful than did people who were not confused by the tools.

 

Table 2. Examples of word-pairs used to rate components of the ESV.

Rate the Controls: fast-slow; unattractive-attractive; helpful-hindering

Rate the Map: misleading-truthful; incomplete-complete; clear-vague

Rate the Data: regular-random; slow-fast; inactive-active

 

The largest differences in students' reactions to the ESV were related to how well they performed on the skill-testing portion of the test. The six skill-testing questions were used both as a measure of performance and as a grouping variable: participants were divided into high-performance (6 of 6 correct), medium-performance (4-5 of 6 correct), and low-performance (2-3 of 6 correct) categories, containing 5, 17, and 12 participants respectively. Using a Mann-Whitney test, significant differences were registered between these three groups. The high-performance group characterized the controls as attractive, organized, clear, helpful, and easy to understand. The low-performance group was least impressed with the ESV. Why these individuals had trouble with the ESV, and hence did poorly on our test, remains unanswered; this is clearly one area toward which future research efforts will have to be directed.

 

 

Conclusions

One of the conclusions we have drawn from this research is that the level of a visualization system has to be well matched to the level of the user. Students who already possessed a sophisticated understanding of earth-climate processes generally did poorly on skill-testing questions, did not formulate more sophisticated hypotheses after using the ESV, and commented that the system was not especially helpful. The novice- and intermediate-level students, on the other hand, did benefit from the system: they showed increased levels of knowledge related to climate and were able to integrate new information with existing knowledge. There is, however, one important distinction. Although novice-level users showed improvement in their written answers to questions (i.e., understanding and theorizing), they generally did poorly on skill-testing questions; the most accurate responses to skill-testing questions came from intermediate-level students. Moreover, the feedback from the semantic word pairs showed that the intermediate-level students had the most positive reaction to the system. A corresponding relationship was found with the skill-testing questions: the better students did on the questionnaire, the more they liked the interface (based on their characterization of the interface using the bipolar word pairs).

Slightly more than one-quarter of the participants who had the enhanced version of the ESV did not seem to understand how to use temporal focusing, or what it should be used for; temporal brushing confused only one participant. Another conclusion we drew is that when users are presented with tools they do not fully understand, their performance will suffer. In fact, students who were confused by temporal focusing performed the most poorly of all users on skill-testing questions, showed limited improvement in their level of understanding, and characterized both the controls and the map negatively. In short, students were better off given no tools than tools that confused them.

Providing students with exploratory data analysis tools such as temporal brushing and temporal focusing influences how they think about geographic phenomena. Although roughly three-quarters of the participants in this study showed an improved understanding of the relationships between climate variables after using the ESV, those who had access to temporal brushing and temporal focusing showed greater improvement. More importantly, the students who showed the greatest improvement in their understanding (i.e., from no understanding to an advanced understanding) all had access to the tools. Our (admittedly) subjective analysis of student hypotheses revealed consistent differences in the level of complexity (in reference to concepts of space, time, and process), comprehensiveness, and the certainty with which students stated their answers. The majority of responses showed that students, especially those at a novice or intermediate level of understanding, used the tools provided in the ESV to confirm existing hypotheses as well as to formulate new ones.

 

Acknowledgements

We would like to acknowledge support for this research provided through The Visualizing Earth Project (NSF grant #RED-9554504). We would also like to thank Jeff Balmat and Milissa Orzolek for volunteering to participate in the pilot study.

 

 

References

Becker, R. A., Cleveland, W. S., and Wilks, A. R. (1988). Dynamic graphics for data analysis. In W. S. Cleveland and M. E. McGill (Eds.). Dynamic graphics for statistics. Wadsworth & Brooks, Belmont, California.

Cook, D., Majure, J. J., Symanzik, J., and Cressie, N. (1996). Dynamic graphics in a GIS: Exploring and analyzing multivariate spatial data using linked software. Computational Statistics, 11, 467-480.

Edsall, R. M. and Peuquet, D. J. (1996). Graphical query techniques for temporal GIS. Proceedings, ACSM/ASPRS Annual Conference, Seattle, WA, pp. 182-189.

Gilmartin, P. P. (1978). Evaluation of thematic maps using the semantic differential test. American Cartographer, 5(2), 133-139.

Harrower, M., Keller, C. P., and Hocking, D. (1997). Cartography on the Internet: thoughts and a preliminary user survey. Cartographic Perspectives, 26, 27-37.

MacDougall, E. B. (1992). Exploratory analysis, dynamic statistical visualization and Geographic Information Systems. Cartography and Geographic Information Systems, 19(4), 237-246.

MacEachren, A. M. (1982). The role of complexity and symbolization method in thematic map effectiveness. Annals of the Association of American Geographers, 72(4), 495-513.

MacEachren, A. M., Polsky, C., Haug, D., Brown, D., Boscoe, F., Beedasy, J., Pickle, L. and Marrara, M. (1997). Visualizing spatial relationships among health, environmental, and demographic statistics: interface design issues. 18th International Cartographic Conference, Stockholm, June 23-27, 880-887.

Mersey, J. (1990). Colour and thematic map design: The role of colour scheme and map complexity in choropleth map communication. Cartographica, 27(3).

Monmonier, M. (1989). Geographic brushing: Enhancing exploratory analysis of the scatterplot matrix. Geographical Analysis, 21(1), 81-84.

Monmonier, M. (1992). Authoring Graphics Scripts: Experiences and Principles. Cartography and Geographic Information Systems, 19(4), 247-260.

Monmonier, M. and M. Gluck (1994). Focus groups for design improvement in dynamic cartography. Cartography and Geographic Information Systems, 21(1), 37-47.

Petchenik, B. B. (1974). A verbal approach to characterizing the look of maps. The American Cartographer, 1(1), 63-71.

Peterson, M. P. (1995). Interactive and Animated Cartography. Prentice Hall, Englewood Cliffs, NJ.

Pickle, L.W., Herrmann, D., Kerwin, J., Croner, C.M., and White, A.A. (1995). The impact of statistical graphic design on interpretation of disease rate maps. In Cognitive Aspects of Statistical Mapping, Working Paper Series from the Centers for Disease Control and Prevention, Washington, DC.

Sirken, M., Herrmann, D., and White, A.A. (1995). Cognitive aspects of designing statistical maps. In Cognitive Aspects of Statistical Mapping, Working Paper Series from the Centers for Disease Control and Prevention, Washington, DC.

Slocum, T. A. (1988). Developing an information system for choropleth maps. Proceedings, Third International Symposium on Spatial Data Handling, Sydney, Australia, pp. 293-305.