Edward Haertel (Chair), Stanford University
Gary Chamberlain, Harvard University
Mark Dynarski, Pemberton Research, LLC
David J. Francis, University of Houston
Joan Herman, University of California, Los Angeles
Michael Kane, Educational Testing Service
Sharon Lewis, Council of Great City Schools
Robert Mare, University of California, Los Angeles
Diana C. Pullin, Boston College
Ann Marie Ryan, Michigan State University
Brian Stecher, RAND, Santa Monica, CA
John Robert Warren, University of Minnesota
Mark Wilson, University of California, Berkeley
Rebecca Zwick, Educational Testing Service
BOTA Member Biographies
Edward H. Haertel (chair) is the Jacks Family Professor of Education and associate dean for faculty affairs at the School of Education at Stanford University. Dr. Haertel is an expert in educational testing and assessment. His research and teaching focus on psychometrics and educational policy, especially test-based accountability and related policy uses of test data. His recent work has examined standard-setting methods; limitations of value-added models for teacher and school accountability; the impacts of testing on curriculum, students, and educational policy; test reliability; and generalizability theory. He served as president of the National Council on Measurement in Education and is a member of the National Academy of Education, the American Psychological Association, the National Society for the Study of Education, the Psychometric Society, and the American Educational Research Association. He also served as a member of the NRC’s Board on International Comparative Studies in Education. Dr. Haertel’s other NRC work includes the Panel to Review Alternative Data Sources for the Limited-English Proficiency Allocation Formula under Title III, Part A, of the Elementary and Secondary Education Act, and the Committee to Respond to the Department of Education Race to the Top Proposal. He recently received a lifetime achievement award from the California Educational Research Association. Dr. Haertel received his Ph.D. in measurement, evaluation, and statistical analysis from the University of Chicago.
Gary Chamberlain has been the Louis Berkman Professor of Economics at Harvard University since 2002. He taught earlier at the University of Wisconsin-Madison and has been a professor of economics at Harvard since 1987. His research topics have included panel data, returns to schooling, factor structure in large asset markets, semiparametric efficiency, the structure of wages, and applications of decision theory in econometrics. He is a fellow of the Econometric Society, was a member of its Council from 1988 to 1993, and gave the Fisher-Schultz Lecture in 2001. He is a fellow of the American Academy of Arts and Sciences, a fellow of the American Association for the Advancement of Science, and a member of the National Academy of Sciences. Chamberlain received his Ph.D. in economics from Harvard University.
Mark Dynarski is a researcher with Pemberton Research, LLC, and is also associated with Chesapeake Research Associates. He was formerly vice president and director of the Center for Improving Research Evidence (CIRE) at Mathematica, where he worked from 1988 to 2010. His research interests focus on evidence-based policy, educational policy, school dropout programs, 21st-century after-school programs, and educational technology. His expertise is in econometrics and evaluation methodology, including the design, implementation, and analysis of evaluations of education programs using random assignment and quasi-experimental designs. Dynarski previously served on a number of NRC committees and is currently a member of the Committee on the Evaluation Framework for Successful K-12 STEM Education and the Committee on the Workshop on Key National Education Indicators. Dynarski received his Ph.D. in economics from Johns Hopkins University.
David J. Francis is professor and chair of the Department of Psychology at the University of Houston, where he also serves as director of the Texas Institute for Measurement, Evaluation, and Statistics. Dr. Francis has authored or co-authored over 120 peer-reviewed articles and book chapters and is a fellow of Division 5 (Measurement, Evaluation, and Statistics) of the American Psychological Association. He currently serves on the Independent Review Panel for the National Assessment of Title I and the Institute of Education Sciences Reading and Writing Peer Review Panel. He previously served as an official advisor to the U.S. Department of Education on assessment and accountability during negotiated rulemaking for No Child Left Behind, as a member of the National Technical Advisory Group of the What Works Clearinghouse, and as a member of the National Literacy Panel for Language-Minority Children and Youth. He has also served on many NRC projects, including the Panel to Review Alternative Data Sources for the Limited-English Proficiency Allocation Formula Under Title III; the Roundtable on Education Systems and Accountability; the Committee to Respond to the Department of Education Race to the Top Proposal; the Committee on Developmental Outcomes and Assessments for Young Children; and the Committee on Promising Education Practices. He is a co-developer of the Texas Primary Reading Inventory and Tejas Lee early reading assessments. Dr. Francis earned his Ph.D. at the University of Houston.
Joan Herman is director of the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at the University of California, Los Angeles. Her research has explored the effects of testing on schools and the design of assessment systems to support school planning and instructional improvement. Her recent work has focused on the validity and utility of teachers' formative assessment practices in mathematics and science. She also has wide experience as an evaluator of school reform and is noted for bridging research and practice. A former teacher and school board member, Herman has published extensively in research journals and is a frequent speaker to policy audiences on evaluation and assessment topics. She is past president of the California Educational Research Association; has held a variety of leadership positions in the American Educational Research Association and Knowledge Alliance; is a member of the Joint Committee for the Revision of the Standards for Educational and Psychological Testing; co-chairs the Board of Education for Para Los Niños; and is the current editor of Educational Assessment. She served as a member of the NRC Committee on Test Design for K-12 Science Achievement, the Roundtable on Education Systems and Accountability, and the Committee on Best Practices for State Assessment Systems, and she is chairing the BOTA workshop on 21st Century Skills. Herman received her doctorate in learning and instruction from the University of California, Los Angeles.
Michael T. Kane has held the Samuel J. Messick Chair in Test Validity at the Educational Testing Service since 2009. From 2001 to 2009 he served as director of research at the National Conference of Bar Examiners. From 1991 to 2001, he was a professor in the School of Education at the University of Wisconsin, where he taught measurement theory and practice in the Department of Kinesiology. From 1982 to 1991, Kane served as vice president for research and development and as a senior research scientist at American College Testing (ACT) in Iowa City; his main responsibility was supervising large-scale validity studies of licensure and certification examinations. He served as director of test development at the National League for Nursing from 1976 to 1982. His main research interests are validity theory, generalizability theory, licensure and certification testing, and standard setting. He previously served on the NRC’s Committee on Evaluation of Teacher Certification by the National Board for Professional Teaching Standards and the Committee to Respond to the Department of Education’s Race to the Top Proposal. Kane holds a Ph.D. in education and an M.S. in statistics from Stanford University, a B.S. in physics from Manhattan College, and an M.A. in physics from SUNY Stony Brook.
Sharon J. Lewis is the director of research for the Council of the Great City Schools in Washington, D.C. She directs the Council’s research program, which contributes to the organization’s efforts to improve teaching and learning in the nation’s urban schools and to help develop education policy. She previously worked as a national education consultant. Earlier, she was assistant superintendent for research, development, and coordination with the Detroit Public Schools, from which she retired. She has extensive experience with the NRC and is currently a member of the Committee on the Evaluation Framework for Successful K-12 STEM Education. Lewis earned an M.A. in educational research from Wayne State University.
Robert D. Mare is professor of sociology at the University of California, Los Angeles, and founding director of the California Center for Population Research. He is the 2010 president of the Population Association of America. He is widely known for his contributions to social demography in five major areas: models of educational stratification; marriage markets and assortative mating; statistical methods; neighborhood change; and population models of stratification. Mare has been widely recognized for his scholarship and has played a number of important roles in professional associations. He is a member of the National Academy of Sciences and a fellow of the American Academy of Arts and Sciences. He has been a fellow of the Center for Advanced Study in the Behavioral Sciences, a Guggenheim fellow, and a recipient of the American Sociological Association’s Lazarsfeld Memorial Award. Mare previously served on the NRC Committee on the Youth Population and Military Recruitment: Phase 1 and the Panel on Estimation Procedures. Mare received his M.A. and Ph.D. in sociology from the University of Michigan.
Diana C. Pullin is professor of educational leadership and higher education at Boston College, where she also coordinates the Joint Degree Program in Law and Education at the Law School and the Lynch School of Education. She has served as dean of the School of Education at Boston College and as associate dean of the College of Education at Michigan State University. Pullin was staff attorney, co-director, and then president of the Center for Law and Education, based in Cambridge, Massachusetts, and Washington, D.C. The relationship between law and education in the pursuit of equality of educational opportunity and educational excellence has always been the cornerstone of Pullin's work as a practicing attorney, scholar, and teacher. She has also made contributions to the development and implementation of ethical and professional standards of practice in education. She served as a member of the Committee on Educational and Psychological Testing of the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education. She has served on NRC panels addressing minority students in special education and gifted education, the impact of standards-based education reform on students with disabilities, and the pursuit of educational excellence and testing equity. These panels include the Committee on Best Practices for State Assessment Systems: Improving Assessment While Revisiting Standards; the Committee to Respond to the Department of Education Race to the Top Proposal; and the Committee on Minority Representation in Special Education. Pullin received her J.D. and Ph.D. in education from the University of Iowa.
Ann Marie Ryan is a professor of organizational psychology at Michigan State University. She previously spent several years at Bowling Green State University, where she directed the Institute for Psychological Research and Application. She has published widely on fairness in organizational decision-making processes, contextual and non-ability factors in employee selection, applicant perceptions of fairness, recruitment and job search, diversity in organizations, and employee assessment tools. She is a fellow and past president of the Society for Industrial and Organizational Psychology, and a fellow of the American Psychological Association and the American Psychological Society. She currently serves as editor of Personnel Psychology. Ryan previously served on the NRC Panel to Review the Occupational Information Network (O*NET). She holds a Ph.D. from the University of Illinois at Chicago.
Brian Stecher is a senior social scientist and the associate director of RAND Education. Stecher's research focuses on measuring educational quality and evaluating education reforms, with a particular emphasis on assessment and accountability systems. During his 20 years at RAND, he has directed prominent national and state evaluations of No Child Left Behind, mathematics and science systemic reforms, and class size reduction. His measurement-related expertise includes test development (prototype performance assessments for teacher certification, hands-on science tasks for middle school students), test validation (the quality of portfolio assessments in Vermont and Kentucky and new assessments in Washington), and the use of assessments for school improvement (formative and interim assessments, quality of classroom assessments). He has presented findings to policymakers at the state and national levels, to practitioners, and to the public. Stecher has served on expert panels relating to standards, assessments, and accountability for the National Academies and is currently a member of the Board on Testing and Assessment. He has published widely in professional journals and serves on the editorial boards of Educational Evaluation and Policy Analysis and Educational Assessment. He received his Ph.D. from the University of California, Los Angeles.
John Robert Warren is associate professor of sociology at the University of Minnesota. Warren’s research focuses on inequalities in educational and health outcomes. His recent work focuses on the measurement of states’ high school completion rates; the consequences of state high school exit examinations for educational and labor market outcomes; the magnitude of “panel conditioning” (or time in survey) effects in longitudinal surveys; changes over time in the association between socioeconomic status and health; and the effects of life-course trajectories of work and family roles on health and financial outcomes in late adulthood. He has published numerous journal articles on these topics, and currently serves as deputy editor of Sociology of Education. Warren is currently a member of the NRC Steering Committee for the Workshop on Key National Education Indicators and previously served on the Committee for Improved Measurement of High School Dropout and Completion Rates: Expert Guidance on Next Steps for Research and Policy. His Ph.D. is in sociology from the University of Wisconsin-Madison.
Mark R. Wilson is a professor in the Graduate School of Education at the University of California, Berkeley, in the areas of Policy, Organization, Measurement, and Evaluation and of Cognition and Development. He is also the developer of the Berkeley Evaluation and Assessment Research (BEAR) Center. His research focuses on educational measurement, survey sampling techniques, modeling, assessment design, and applied statistics. He currently advises the California State Department of Education on assessment issues as a member of the Technical Study Group. Dr. Wilson has recently published three books: Constructing Measures: An Item Response Modeling Approach, an introduction to modern measurement; Explanatory Item Response Models: A Generalized Linear and Nonlinear Approach, which introduces an overarching framework for the statistical modeling of measurements and makes available new tools for understanding the meaning and nature of measurement; and Towards Coherence Between Classroom Assessment and Accountability, an edited volume exploring the relationships between large-scale and classroom-level assessment. He is founding editor of the journal Measurement: Interdisciplinary Research and Perspectives. Dr. Wilson served on the Committee on the Foundations of Assessment and the recent Committee on Developmental Outcomes and Assessments for Young Children, and he chaired the Committee on Test Design for K-12 Science Achievement. He also served on the recently completed Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Accountability. He has a Ph.D. in measurement and educational statistics from the University of Chicago.
Rebecca Zwick is a Distinguished Presidential Appointee in the Statistical Analysis and Psychometric Research area at Educational Testing Service. She is also professor emerita at the Gevirtz Graduate School of Education at the University of California, Santa Barbara, where she taught from 1996 to 2009. She received her doctorate in quantitative methods in education at the University of California, Berkeley, completed a postdoctoral year at the L. L. Thurstone Psychometric Laboratory at the University of North Carolina at Chapel Hill, and obtained an M.S. from the Statistics Department at Rutgers University. From 1984 through 1996, she was a member of the Division of Statistics and Psychometrics Research at ETS. She is the author of more than 100 publications in educational measurement and statistics, including Fair Game? The Use of Standardized Admissions Tests in Higher Education (RoutledgeFalmer, 2002). She serves on the National Academy of Sciences Board on Testing and Assessment (BOTA) and the technical advisory group for the Programme for International Student Assessment (PISA). She has served as vice president of the Measurement and Research Methodology Division of the American Educational Research Association, editor of the Journal of Educational Measurement, and a member of the Board of Directors of the National Council on Measurement in Education (NCME). In 2001, she received the NCME Award for Outstanding Dissemination of Educational Measurement Concepts to the Public for her publications on standards and high-stakes testing.