"Teaching Students to Read Journal Articles Critically" (Orlov) Journal of Economic Education, forthcoming.

In this paper, the author describes the use of primary literature readings in an upper-division undergraduate field course. One of the two main learning goals of the course was to teach students how to read academic articles in economics with a critical eye. This was accomplished by providing students with a structured framework for summarizing the main methods and results of each paper, along with feedback on short paper reports and during in-class discussion activities. Based on his experiences in this course, the author offers observations and suggestions for instructors wishing to integrate non-textbook academic readings into their teaching.

"Teaching Economic Evaluation with Population Health Cases" (Green, Bolbocian, Busken, Gonzalez, McKee, and Xu) Journal of Health Administration Education, 2017.

Economic evaluation is one of the largest competency gaps for public health practitioners, and researchers recommend increased training of the public health workforce in economic evaluation. This paper contributes to the literatures on case teaching in economic theory, health economics, and other fields of economics. The authors describe a technology-enhanced, case-based economic evaluation course in a school of public health at a private Midwestern university.

"Racial and Gender Achievement Gaps in an Economics Classroom" (Bottan, McKee, Orlov, and McDougall) (under review)

In this paper, we document gender and racial/ethnic achievement gaps over four semesters of an intermediate-level economics course. We find that male underrepresented minority (URM) students earned lower final exam scores than male non-URM students, but this gap disappears when we control for differences in prior preparation. In contrast, female URM students performed significantly worse than female non-URM students, even after controlling for prior preparation. We analyze scores on low-stakes assessments and surveys about study behavior and find that the theory of stereotype threat most consistently explains our results. As these issues are unlikely to be unique to our classroom, we offer several potential pedagogical solutions to address the differences in prior preparation and stereotype threat that underlie the observed achievement gaps.

"Explaining Heterogeneity Across Departments in Diversity of Economics Students" (McDougall, McKee, and Orlov) (working paper)

The field of economics is severely lacking in diversity, lagging behind other STEM fields that have drastically improved on this front in the past decades. However, there is little consensus on the underlying causes of or most effective solutions to this problem. In this paper, we combine data from the Integrated Postsecondary Education Data System (IPEDS) with data from our own survey of economics departments to identify characteristics of institutions and departments that are associated with observed variation in diversity across departments. We explore four avenues that the existing literature suggests departments could pursue to improve the gender and racial diversity of their undergraduate students: student support, role modeling, course content, and the use of active learning pedagogy in the classroom. We find little to no association between variables linked to the first two approaches and either gender or racial diversity. On the other hand, we find a positive association between course content (economics courses with feminist theory) and gender diversity, and between the use of active learning pedagogy and gender diversity. Unfortunately, we find no such associations for racial diversity, leaving open the question of whether other avenues for increasing the uptake of the economics major by underrepresented minority students need to be explored.

"Total Recall? Short- and Long-term Retention of Statistics and Econometrics Skills" (McKee and Orlov) (working paper)

While written exams are commonly used to measure student learning during or at the end of a course, research on the amount of material students retain over time is very limited. In this paper, we use unique data to study the retention of statistics and econometrics skills after course completion. We measure student skills using low-stakes assessments given regularly at our institution prior to the final examinations. The Economic Statistics Skills Assessment (ESSA) is administered at the end of the introductory statistics course and, as a test of pre-existing skills, at the start of the econometrics course. The difference in ESSA scores for students who took it after their winter and summer breaks, 1.5 and 4 months later respectively, is used as a measure of statistics skills retention. The Applied Econometrics Skills Assessment (AESA) is administered at the end of the econometrics course. To measure knowledge retention in econometrics, we induced a large proportion of non-graduating Fall 2017 and Spring 2018 students to take the AESA a second time, a year after they completed the course. We find that a longer interval between the pre- and post-tests is associated with worse performance on the post-test. Taking courses that apply statistics and econometrics skills is associated with better learning retention; however, taking courses on statistics and econometrics methods that introduce new concepts is associated with worse performance on the post-test. We further find that female and underrepresented minority students have worse learning retention, on average. On the other hand, first-generation students perform better on the post-test.

"The Economic Statistics Skills Assessment (ESSA)" (McKee and Orlov) (under review)

Measurements of student knowledge and skills are highly useful both upon students’ entry into a course, so that gaps in prerequisite knowledge can be addressed, and upon course completion, so that the impact of any interventions and changes to the course can be evaluated. Final examinations often do not provide the desired coverage and are difficult to compare across terms and institutions. Within most STEM fields, this problem is solved by the use of concept inventories, which are designed as low-stakes standardized assessments of students’ core knowledge. With the exception of the Test of Understanding of College Economics (TUCE), which tests introductory economics knowledge, economics as a field has lacked such assessments. In this paper, we document the design, development, and validation of a 20-question Economic Statistics Skills Assessment (ESSA) that we created to test students’ knowledge and understanding of probability and statistics concepts. The assessment was reviewed by economics faculty across multiple public and private institutions, validated via think-aloud interviews with students, and taken by students at multiple institutions at the conclusion of their statistics for economics courses or at the start of their econometrics courses. We demonstrate, using statistical analysis, that the items in the ESSA capture whether students have developed an understanding of specific probability and statistics concepts.

"The Applied Econometrics Skills Assessment (AESA)" (Orlov and McKee) (working paper)

Final exam scores and final course grades are not always reflective of student learning in courses. Unlike most STEM fields, economics currently lacks high-quality standardized assessments of learning outcomes, with the exception of the Test of Understanding of College Economics (Walstad, Watts, and Rebeck 2007), which targets introductory economics courses. The use of econometric methods is a crucial skill that all economics majors should develop; hence, the ability to evaluate whether learning improves when the instructor changes her teaching approach is highly valuable. Further, as student self-reports of pre-existing skills are unreliable, an objective formal assessment allows instructors of advanced courses to evaluate student preparedness and tailor their teaching accordingly. The Applied Econometrics Skills Assessment (AESA) is designed to serve both purposes.

"Who Comes to Office Hours?" (Bottan, McKee, and Orlov) (working paper)

Professors spend a substantial fraction of their teaching time and effort providing support to students in office hours. This paper uses attendance and survey data from two introductory economics courses and two more advanced courses to identify which characteristics of students predict attendance at office hours and why some students do not attend. We also shed light on what strategies might be effective for encouraging students to attend office hours. We find that attendance rates vary substantially across courses, but female students consistently attend more often than male students. Holding gender and race constant, students with very low or very high GPAs attend more often than students in the middle range. The two most common reasons given for non-attendance are a lack of perceived need for help and scheduled office hours conflicting with other classes. We find that students who state they do not need the extra support do tend to perform better on final exams. Finally, we show that encouraging students to attend office hours through an email message had little impact, but providing a small amount of extra credit for attendance significantly increased student participation.