Contact Us
General inquiries should be directed to:
Board on Higher Education and Workforce
The National Academies
500 Fifth Street, NW WS533
Washington, DC 20001
Email: bhew@nas.edu
Phone: 202.334.2700
Fax: 202.334.2725
 


A Data-Based Assessment of Research-Doctorate Programs in the United States
Frequently Asked Questions

Below are some questions that the committee and staff have been asked or anticipate that many readers will ask about the assessment.  We will add to this collection as more questions come in from those who use the report and data. 


Q: Why are the illustrative rankings given in ranges instead of a single number?

A: The committee felt strongly that assigning to each program a single number and ranking them accordingly would be misleading, since there are significant uncertainties and variability in any ranking process.  Uncertainties arise from assumptions made in creating a ranking model based on quantitative data on program characteristics.  Even with such a model, variability arises from numerous sources, including differences in the views among the faculty surveyed, fluctuations in data from year to year, and the error inherent in estimations from any statistical model.  The ranges reflect some of this uncertainty and variability.


Q: What do the S-rankings mean?

A: The S (or survey-based) rankings reflect the degree to which a program is strong in the characteristics that faculty in the field rated as most important to the overall quality of a program.  In a survey, faculty were asked how important each of 20 characteristics -- for example, publications per faculty member and students' time to degree -- is in determining the quality of a program, and each characteristic was assigned a weight accordingly.  These weights varied by field, since the characteristics are not valued to the same degree in all fields; the percent of faculty with grants was valued more highly in biology than in history, for example.  The weights were then applied to the data on these characteristics for each program, resulting in the range of S-rankings for the program.
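To make the arithmetic concrete, the short Python sketch below shows how a weighted score of this kind might be computed for one program.  The characteristic names, weights, and standardized values are invented for illustration only; they are not the NRC's actual survey weights or data.

    # Hypothetical field-specific weights from a faculty importance survey
    weights = {"pubs_per_faculty": 0.30, "pct_faculty_with_grants": 0.25,
               "median_time_to_degree": 0.20, "pct_students_funded": 0.25}

    # Hypothetical standardized values for one program on the same characteristics
    program = {"pubs_per_faculty": 1.2, "pct_faculty_with_grants": 0.8,
               "median_time_to_degree": -0.4, "pct_students_funded": 0.5}

    # The survey-based score is a weighted sum; programs within a field
    # are then ordered by this score.
    s_score = sum(weights[c] * program[c] for c in weights)
    print(round(s_score, 3))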


Q: What do the R-rankings mean?


A: The R (or regression-based) rankings are based on an indirect approach to determining what faculty value in a program.  First, a sample group of faculty were asked to rate a sample of programs in their fields.  Then a statistical analysis was used to calculate how the 20 program characteristics would need to be weighted in order to reproduce the sample ratings as closely as possible.  In other words, the analysis attempted to determine how much importance faculty implicitly attached to the various program characteristics when they rated the sample of programs.  Weights were assigned to each characteristic accordingly -- again, these varied by field -- and the weights were then applied to the data on these characteristics for each program, resulting in a second range of rankings.
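In the same spirit, the Python sketch below illustrates the regression idea: fit weights that best reproduce a sample of faculty ratings, then apply those weights to every program's data.  The numbers are random placeholders, not NRC data, and the dimensions are only assumed to match the description above.

    import numpy as np

    rng = np.random.default_rng(0)
    X_sample = rng.normal(size=(40, 20))   # 40 rated programs x 20 characteristics
    ratings = rng.normal(size=40)          # faculty ratings of those 40 programs

    # Least-squares fit: which weights most closely reproduce the sample ratings?
    implied_weights, *_ = np.linalg.lstsq(X_sample, ratings, rcond=None)

    # Apply the implied weights to every program in the field; programs are
    # then ordered by the resulting score.
    X_all = rng.normal(size=(120, 20))
    r_scores = X_all @ implied_weights
    print(r_scores[:5])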


Q: Are the R-rankings the same as reputational rankings?


A: The R-rankings are NOT the same as reputational rankings.  The R-ranking ranges are based on data about the programs; the reputational ratings for a sample of programs in each field were used to determine the weights, but were not used directly to rank programs.


Q: What do the "5th percentile" and "95th percentile" mean in the illustrative rankings?

A: The degree of uncertainty in the rankings was quantified in part by calculating the S- and R-rankings of each program 500 times.  The resulting 500 rankings were numerically ordered, and the lowest and highest five percent were excluded.  Thus, the 5th and 95th percentile rankings -- in other words, the 25th and the 475th rankings in the ordered list of 500 -- define each program's range of rankings, as shown in the Excel spreadsheet.
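As a concrete illustration, the Python sketch below shows how the range for a single program could be read off from its 500 recalculated rankings; the numbers are random placeholders, not NRC output.

    import numpy as np

    rng = np.random.default_rng(1)
    rankings = rng.integers(1, 60, size=500)  # 500 simulated rankings of one program

    ordered = np.sort(rankings)
    low, high = ordered[24], ordered[474]     # 25th and 475th values: the 5th and 95th percentiles
    print(low, high)                          # the program's reported ranking range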

For more information on the methodologies used to calculate the S-ranking and R-ranking ranges, see Chapter 4 of the report, as well as the revised methodology guide. 


Q: Why are there differences between the R-rankings and the S-rankings?

A:  Although each approach was based on the program data, different sets of weights were applied to the data, yielding different ranges of rankings.  In the S-rankings, for example, faculty in most fields placed the greatest weight on characteristics related to faculty research activity, such as per capita publications or the percentage of faculty with grants.  Therefore, programs that are strong in those characteristics tend to rank higher.  Such characteristics were also weighted heavily in the R-rankings for many fields, but program size (measured by the number of PhDs produced by the program, averaged over five years) was frequently the characteristic with the largest weight in determining these rankings.

The National Research Council is not endorsing either of these rankings, or any ranking, as the best indicator of program quality; instead, it is providing the R- and S-rankings as illustrations of how rankings can be created by applying weights to data on program characteristics.  The degree of importance attached to each program characteristic depends on how the rankings are to be used.  The program data are being made available so that users can compare programs based on the characteristics that are most important to them.


Q: Why is the methodology different from that described in the Methodology Guide released last year?


A: The committee had originally planned to combine the R- and S-rankings into a single range of rankings, which is the approach outlined in the 2009 Methodology Guide.  Producing rankings from quantitative program data turned out to be more complicated, and to involve greater uncertainty, than originally thought.  As a consequence, the committee did not combine the two measures and instead has presented them as two illustrative rankings.  Neither one is endorsed or recommended by the National Research Council as an authoritative conclusion about the relative quality of doctoral programs.

It was also decided to include a broader range of rankings from the 500 calculated for each program.  The range in the 2009 Methodology Guide excluded half of them (the highest 25 percent and the lowest 25 percent); now only 10 percent -- the highest 5 percent and lowest 5 percent -- are excluded.


Q: How does this report differ from the rankings the National Research Council released in 1995?


A: There are a number of fundamental differences in the methodology used for the two reports. For example, the rankings in the 1995 report were based on reputation and were not directly related to program characteristics, as the illustrative rankings in the current assessment are.  The method of counting publications and citations differed between the two studies.  And the current study includes separate illustrative rankings for some dimensions of doctoral study -- such as research productivity and diversity -- that were not included in the 1995 study.  The 1995 methodology is different from the 2010 methodology in enough ways that the two are not strictly comparable.


Q: Given that the data in the study are at least five years old, are they still useful?


A: Although any university can point to particular programs that have been transformed in the past 5 years, most faculty have been at the same university in the same program for 8 to 20 years.  Programs that have changed significantly can make the case that they are better (or worse) than their 2005-2006 data might indicate.  Users should look at values of the characteristics that are important to them and ask if the values for those characteristics have changed for the programs that interest them. Programs are encouraged to post more current data on their websites.

In the future, universities will be able to update important data on a regular basis, so that programs can continue to be evaluated and compared.


Q: I'm a prospective Ph.D. student. How can I use the data and rankings to help me evaluate various programs?


A: Students can pick out the programs of interest and compare them on characteristics such as percent of students funded in the first year, time to degree, and placement rate. Comparing programs based on the dimensional ranking "Student Support and Outcomes" -- which is a single measure that combines these characteristics -- may be helpful as well.  A tutorial giving an example of how students can use the spreadsheets to select and compare different programs will be available at http://www.nap.edu/rdp when the assessment is publicly released. 
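As one illustration of this kind of comparison, a student comfortable with Python could load the spreadsheet and filter it as sketched below.  The file name, program names, and column names are hypothetical stand-ins for whatever appears in the released spreadsheet.

    import pandas as pd

    # Hypothetical file and column names, for illustration only
    data = pd.read_excel("nrc_assessment.xlsx")

    chosen = data[data["Program Name"].isin(["Chemistry Program A", "Chemistry Program B"])]
    print(chosen[["Program Name", "Pct First-Year Students Funded",
                  "Median Time to Degree", "Placement Rate"]])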

In addition, PhDs.org, an independent web site not affiliated with the National Research Council, is incorporating data from the assessment into its Graduate School Guide.  Users of the Guide will be able to assign weights to the program characteristics measured by the National Research Council and others, and rank graduate programs according to their own priorities.

Prospective students should not rely on the NRC data alone in their evaluation.  They should also talk to faculty in their field of interest and to students who are now pursuing doctoral study in the programs that interest them.


Q: Do I have to work with the data in Excel? Is there any way I can get a data file to use with SAS?


A: Some users will want to go well beyond the capabilities of Excel and export the data into statistical software such as SAS, Stata, or SPSS.  In such cases, users should save the data in the Excel spreadsheet as a tab-delimited file and use the import functionality of the target software to convert the file into that program's format.
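One way to produce such a file without doing the export by hand is sketched below in Python; the input file name is a hypothetical stand-in for the released spreadsheet.  The resulting tab-delimited file can then be read into SAS, Stata, or SPSS with that package's own import facility.

    import pandas as pd

    # Hypothetical file name; substitute the actual spreadsheet file
    data = pd.read_excel("nrc_assessment.xlsx")
    data.to_csv("nrc_assessment.txt", sep="\t", index=False)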


Q: What should I do if I see an error in the data about my program?

A: Some institutions, when they examine the spreadsheet for the database of the National Research Council study A Data-Based Assessment of Research-Doctorate Programs, may find data that are, or appear to be, incorrect.  During the several weeks following public release of the report and database, we wish to be informed of potential mistakes, misunderstandings, and errors.
  
The NRC took many precautions during data collection to ensure the accuracy of the data.  Data were returned to the institutional coordinator for verification, and data that were obviously incorrect were flagged.  This verification process continued through the summer of 2010.  Despite these efforts, errors may persist, and there are many possible causes.

The institutional coordinator at the university should bring potential mistakes, misunderstandings, and possible errors to our attention by emailing bhew@nas.edu.  Please explain the problem and, if known, its source, and indicate the correct data, program classification, or other needed correction.  We will examine the data and work with the institution to identify the source of the error; the remedy will depend on the source.

If the university, in collecting the data, misunderstood the directions and entered erroneous or incomplete data, we will record what the university tells us on a publicly available list.  The NRC will maintain a database of these corrections classified by institution and program on the What's New page.

Universities can also post corrections of this type to the 2005-2006 data on the websites for their programs.  It is unlikely that we will be able to undertake the considerable effort required to validate and implement such corrections to the 2005-2006 data (for example, for variables that depend on faculty lists) in order to change the spreadsheet or to rerun the illustrative rankings.

If it is determined that the error arose in the NRC's processing of institutional data supplied by universities, we will collate the needed revisions and attempt to rerun the illustrative rankings and publish a revised master spreadsheet.  We ask that any information regarding possible errors of this sort be sent to us by November 1, 2010.