Board on Mathematical Sciences and Their Applications, The National Academies
National Academy of Sciences · National Academy of Engineering · Institute of Medicine · National Research Council

Toward Improved Visualization of Uncertain Information

List of Participants

AGENDA
March 3-4, 2005
National Academy of Sciences
Washington, D.C.

Workshop Goals: Good visual displays of data and of complex concepts are known to facilitate understanding, thinking, and inferences about the data or concepts. The process of using flexible, perhaps interactive, visualization tools has the potential to multiply the power of visual data display.

Creating effective visual representations is challenging, all the more so when the information is vast, uncertain, or poorly understood. A major goal of this workshop is to convene experts from various communities to share perspectives on these challenges. An additional goal, prompted by the dramatic increase in the size of data sets in many fields over recent years, is to spark increased research into innovative approaches for visualizing uncertainty in large data sets.

The workshop is structured around three examples of contexts within which people need to evaluate information with uncertainties. These three contexts are characterized by differing amounts of data; differing structure, mode, certainty, and relevance of data; differing time pressures; and differing goals, expertise, and motivation of users. They are not meant to be a hard-and-fast taxonomy of the problem space, but rather a way to sample the range of problems that would benefit from better means of visualizing uncertain information.

 

Thursday, March 3

8:00 a.m. Breakfast in meeting room

8:30 a.m. Welcome; goals of workshop (Leland Wilkinson, SPSS, Inc.)

 

Overview Presentations

8:40 a.m. Best practices for visualizing uncertain information in computer science
Ben Shneiderman, University of Maryland, and Alex Pang, University of California at Santa Cruz

9:30 a.m. Best practices for visualizing uncertain information in geography
Michael Goodchild, University of California at Santa Barbara

10:20 a.m. Break

10:35 a.m. Best practices for visualizing uncertain information in statistics
Andreas Buja, University of Pennsylvania

11:25 a.m. Best practices for visualizing uncertain information in psychology
Barbara Tversky, Stanford University

12:15 p.m. Lunch

1:30 p.m. Context 1: Large, structured data sets with ample time for interpretation. Examples include science, engineering, and medical research. The challenge for decision-makers is to comprehend and interpret complex visualizations.

1:30-1:45 p.m. Discussant (Alex Szalay, Johns Hopkins University) explains the context and gives an example of real data

1:45-2:15 p.m. David Scott, Rice University

2:15-2:45 p.m. Bill Cleveland, Purdue University

2:45-3:15 p.m. Michael Friendly, York University

3:15-3:45 p.m. Discussant presents reactions to ideas given by the speakers.

3:45 p.m. Break

4:15 p.m. Plenary discussion

5:30 p.m. Reception

6:30 p.m. Dinner

 

Friday, March 4

8:00 a.m. Breakfast in meeting room

8:30 a.m. Context 2: Large, unstructured, changing data sets where the relevance, significance, and conceptual links among the data have yet to be discovered. Examples include intelligence, homeland security, military and business planning, market trading, and waging war. The data are not well structured and may be highly varied (e.g., text, images, reports of varying credibility); uncertainties might be difficult to quantify; and the data may be updated while analysis is underway. The signal-to-noise ratio is low in that the important information is only a small fraction of the data, and there is no good conceptual model to guide the analysis, so the analysis is exploratory or driven by ad hoc, uncertain hypotheses. Users are motivated to learn new visualization tools, but the breadth and variety of the data make them dependent on how the data are presented, because they cannot invest the time to understand all the uncertainties in the input data. Analyses might have a timescale of hours to days.

8:30-8:45 a.m. Discussant (David Harris, NSA) explains the context and gives an example of real data

8:45-9:15 a.m. Ronald Coifman, Yale University

9:15-9:45 a.m. Andre Skupin, University of New Orleans

9:45-10:15 a.m. Stephen Eick, University of Illinois at Chicago and SSS Research

10:15-10:45 a.m. Discussant presents reactions to ideas given by the speakers.

10:45 a.m. Break

11:00 a.m. Plenary discussion

11:45 a.m. Lunch

12:45 p.m. Context 3: Data sets that are reduced for rapid understanding in time-pressured situations. Examples include pilot displays, executive decisions in business and policy, newspaper polls, and economic data. Decision-makers cannot study the information in depth because of limited time, motivation, or expertise. The visual display must summarize a complex landscape of phenomena for quick interpretation. The raw data may be precise (e.g., measures of the state of an aircraft) or imprecise (e.g., measures of the “health” of a business unit). Users might be motivated to learn special symbology (e.g., pilots) or may expect the displays to be self-evident (e.g., newspaper readers). Because the data are condensed, ambiguity is difficult to avoid.

12:45-1:00 p.m. Discussant (Robert Frey, Director, Program in Quantitative Finance, Applied Mathematics and Statistics, SUNY-Stony Brook) explains the context and gives an example of real data

1:00-1:30 p.m. Chris Wickens, University of Illinois at Urbana-Champaign

1:30-2:00 p.m. Peter Fisher, City University, London

2:00-2:30 p.m. Henry Rolka, Centers for Disease Control

2:30-3:30 p.m. Discussant presents reactions to ideas given by the speakers.

3:30 p.m. Break

3:45 p.m. Plenary discussion. Develop list of the following:

  • Lessons learned from the workshop
  • Most promising research directions
  • Next steps for developing this topic

4:30 p.m. Adjourn

The program committee for this workshop consists of:
Leland Wilkinson, SPSS, Inc., chair*
Dianne Cook, Iowa State University
Michael Goodchild, University of California at Santa Barbara
Alan MacEachren, Pennsylvania State University
Alex Pang, University of California at Santa Cruz
Ben Shneiderman, University of Maryland
Barbara Tversky, Stanford University
Edward Wegman, George Mason University*

*Member, Committee on Applied and Theoretical Statistics (CATS) of The National Academies

Funding for this workshop was provided by the National Security Agency.