THE COMPLEXITIES OF PUBLIC GOODS FOR A DIVERSE PUBLIC: EVIDENCE FROM GUN LAWS, CLIMATE POLICY, AND POLICE TRANSPARENCY

by

GARRETT OLSON STANFORD

A DISSERTATION

Presented to the Department of Economics and the Division of Graduate Studies of the University of Oregon in partial fulfillment of the requirements for the degree of Doctor of Philosophy

June 2023

DISSERTATION APPROVAL PAGE

Student: Garrett Olson Stanford

Title: The Complexities of Public Goods for a Diverse Public: Evidence from Gun Laws, Climate Policy, and Police Transparency

This dissertation has been accepted and approved in partial fulfillment of the requirements for the Doctor of Philosophy degree in the Department of Economics by:

Edward Rubin, Chair
Trudy Ann Cameron, Core Member
Jonathan M. Davis, Core Member
Bryce Newell, Institutional Representative

and

Krista Chronister, Vice Provost for Graduate Studies

Original approval signatures are on file with the University of Oregon Division of Graduate Studies.

Degree awarded June 2023

© 2023 Garrett Olson Stanford
This work is licensed under a Creative Commons Attribution-NonCommercial (United States) License.

DISSERTATION ABSTRACT

Garrett Olson Stanford
Doctor of Philosophy
Department of Economics
June 2023

Title: The Complexities of Public Goods for a Diverse Public: Evidence from Gun Laws, Climate Policy, and Police Transparency

This research examines three pressing social issues: tensions between law enforcement and the public, climate change policy options, and firearms control laws. Chapters 2 and 3 use field and survey-based experiments to collect primary data. They estimate novel measures, respectively, of police behavior and of public preferences concerning climate change policy. Chapter 4 uses newly available administrative data to understand the consequences of a recently passed firearms control law.

In Chapter 2, I test for evidence of racial and gender biases in one aspect of police interactions with the public in the United States. Using a so-called “correspondence” study, I test whether police departments respond differently to requests for information about how to lodge a formal complaint against an officer in the department, depending on the perceived race/ethnicity and gender of the complainant. The study’s experimental design allows me to examine police behavior quantitatively without relying on police-provided administrative data. Results for a nationwide random sample of police departments suggest that police departments are less likely to respond to Black and Hispanic individuals than to White individuals. Examining the interaction of race/ethnicity and gender, I find police departments are most likely to respond to White males and least likely to respond to Black and Hispanic males.

Chapter 3 reports the results from a set of survey-based choice experiments designed to assess state-level demand for carbon cap-and-trade programs with different attributes. The evidence confirms that these state-level preferences are strongly heterogeneous with respect to political ideologies and opinions about climate change. Our models allow us to calculate the implied social benefits of carbon emissions reductions. We estimate the marginal rate of substitution between “carbon” jobs and “green” jobs for different preference classes. We then use our estimates to model how support for different types of cap-and-trade programs varies across the United States.
Methodologically, we account for systematic sample selection of respondents in our estimating sample relative to the quota-based sample of invitees from our commercial internet panel.

In Chapter 4, we examine the (un)intended effects of Oregon’s new firearms control law, Measure 114. Narrowly passed by popular vote in November 2022, Measure 114 aimed to increase firearms licensing requirements and restrict access to high-capacity ammunition magazines. We use data from the FBI’s National Instant Criminal Background Check System and an administrative dataset provided by the Oregon State Police to measure the causal effect of the law on firearm sales. Results indicate that Measure 114 unintentionally motivated Oregonians to purchase firearms in unprecedented numbers.

This dissertation includes previously unpublished co-authored material.

CURRICULUM VITAE

NAME OF AUTHOR: Garrett Olson Stanford

GRADUATE AND UNDERGRADUATE SCHOOLS ATTENDED:
University of Oregon, Eugene, OR
Portland State University, Portland, OR
Kanda Institute of International Studies, Chiba, Japan
University of Puget Sound, Tacoma, WA

DEGREES AWARDED:
Doctor of Philosophy, Economics, 2023, University of Oregon
Master of Science, Economics, 2019, University of Oregon
Bachelor of Science, Economics, 2014, University of Puget Sound
Bachelor of Arts, Japanese, 2014, University of Puget Sound

AREAS OF SPECIAL INTEREST:
Public economics
Experimental economics
Environmental economics
Criminal justice

ACKNOWLEDGEMENTS

I thank Professors Edward Rubin, Trudy Ann Cameron, Jonathan Davis, and Bryce Newell for their advice, mentorship, and unwavering support. I am incredibly grateful to Professor Rubin for his kindness and ability to keep me positive in the most daunting of times, and to Professor Cameron for being the most supportive and selfless mentor for whom anyone could ever ask. I also thank Professor Ben Hansen for his helpful insights and much-needed support in the final stretch. This research benefited from valuable comments from participants in the University of Oregon Micro Group, the 2022 Society for Benefit-Cost Analysis Annual Conference, the 2022 AERE Summer Conference, and the 2022 WEAI Conference. I wish to thank my fellow University of Oregon Economics graduate students for, among many things, their camaraderie. I especially want to thank Robert McDonough, Promise Kamanga, Tanner Bivins, and John Morehouse. Without them, this dissertation would be an abandoned dream. Finally, I thank my friends and family who supported me throughout this endeavor. In particular, my sister Lydia, for being a good listener, a generous soul, and an excellent ear to gripe to; Lucas, for his selfless acts of service; and my steadfast rock throughout this all: Claire, I could not be more grateful for your love, support, and exquisite sense of humor <3

This work has been supported in part by the endowment accompanying the Raymond F. Mikesell Chair in Environmental and Resource Economics at the University of Oregon. Any remaining errors are my own.

To Tom, for being alive for so long; good job!

TABLE OF CONTENTS

I. INTRODUCTION

II. POLICE ARE LESS LIKELY TO RESPOND TO REQUESTS FOR HELP FROM MINORITIES: FIELD EXPERIMENT EVIDENCE OF POLICE DISCRIMINATION
  2.1. Overview
  2.2. Experimental Design and Data
    2.2.1. Experiment
    2.2.2. Data
  2.3. Results
    2.3.1. Summary Statistics
    2.3.2. Main results
      2.3.2.1. Interaction Effects
      2.3.2.2. Department Size
  2.4. Discussion
    2.4.1. Interpreting Bias
    2.4.2. Accountability
    2.4.3. Caveats
    2.4.4. Conclusion
  2.5. Tables and Figures

III. PUBLIC PREFERENCES FOR A STATE-LEVEL CARBON CAP-AND-TRADE PROGRAM
  3.1. Introduction
  3.2. Basic Choice Model
    3.2.1. Homogeneous preferences
    3.2.2. Heterogeneous preferences
      3.2.2.1. Mixed logit models
      3.2.2.2. Latent class models
      3.2.2.3. Preferences that vary systematically with observable respondent characteristics
  3.3. Outline of Survey and Data
    3.3.1. Sketch of the survey instrument
    3.3.2. Sample Selection
  3.4. Results and Discussion
    3.4.1. Sample Selection
    3.4.2. Program choice model: Homogeneous preferences
    3.4.3. Program choices: Heterogeneous preferences
      3.4.3.1. Unobserved heterogeneity: Mixed logit specifications
      3.4.3.2. Latent class models
    3.4.4. Program choices: Observable heterogeneity and benefits transfer
    3.4.5. Implications of estimated models
      3.4.5.1. Benefit function transfer to all ZCTAs in the lower-48 U.S. states
      3.4.5.2. Distribution of WTP for cap-and-trade programs across ZCTAs
      3.4.5.3. Spatial heterogeneity in WTP for different cap-and-trade programs
  3.5. Directions for Future Research
  3.6. Conclusions

IV. OREGUNIANS AND THE GUN-CONTROL PARADOX
  4.1. Introduction
  4.2. Background
    4.2.1. Gun laws in the United States
    4.2.2. Gun laws in Oregon
  4.3. Data
  4.4. Methodology
  4.5. Results
    4.5.1. Treatment period
    4.5.2. State-level synthetic control difference-in-difference
    4.5.3. Time-series models
    4.5.4. Event Study
    4.5.5. County Heterogeneity
  4.6. Discussion
  4.7. Tables and Figures

V. CONCLUSION

APPENDICES

A. CHAPTER 2 APPENDIX
  A.1. Appendix: Police Department Selection
    A.1.1. Type of email collected
  A.2. Appendix: Identity Construction
  A.3. Appendix: Email Account Creation
  A.4. Appendix: Example Email
    A.4.1. Email Text
  A.5. Appendix: Treatment Assignment
  A.6. Appendix: Experiment Implementation
  A.7. Appendix: Response Time and Word Count
  A.8. Appendix: Summary Statistics

B. CHAPTER 3 APPENDIX
  B.1. Appendix: Expanded Discussion of the Related Literature
    B.1.1. Context
    B.1.2. Cap-and-Trade Attributes
      B.1.2.1. Jobs
      B.1.2.2. Costs
      B.1.2.3. Permit Allocation
      B.1.2.4. Permit auction revenue use
      B.1.2.5. Additional Regulations
    B.1.3. Political Obstacles
  B.2. Appendix: Structure of the Survey
    B.2.1. Demographic Questions for Screening
    B.2.2. Intro Questions
    B.2.3. Background Information
    B.2.4. Tutorial on Program Attributes
    B.2.5. Choice Scenarios
    B.2.6. Follow-up Questions
  B.3. Appendix: One Instance of the Survey (Screenshots)
    B.3.1. State of residence
    B.3.2. Age
    B.3.3. Gender
    B.3.4. Race
    B.3.5. Household income
    B.3.6. ZIP code
    B.3.7. Check ZIP code
    B.3.8. Confirm standard ZIP code
    B.3.9. Consent to Participate
    B.3.10. Oath
    B.3.11. Introduction to climate change
    B.3.12. Introduction to carbon emissions
    B.3.13. Introduction to controversy of cap-and-trade in Oregon
    B.3.14. Introduction to cap-and-trade programs
    B.3.15. Introduction to program coverage
    B.3.16. Introduction to program coverage, continued
    B.3.17. Introduction to permit auctions
    B.3.18. Introduction to grandfathering process
    B.3.19. Introduction to revenue distribution
    B.3.20. Introduction to benefits of carbon emissions reductions
    B.3.21. Introduction to possible distributional concerns
    B.3.22. Oregon county of residence
    B.3.23. Confirm county of residence
    B.3.24. Introduce program summary tables
    B.3.25. Explain Feature Group 1
    B.3.26. Explain Feature Group 2
    B.3.27. Explain Feature Group 3
    B.3.28. Explain Feature Group 4
    B.3.29. Explain Feature Group 5
    B.3.30. Program A choice
    B.3.31. Follow-up to “No” vote: Reasons for vote
    B.3.32. Follow-up to “No” vote: Will you always vote no?
    B.3.33. Program B choice
    B.3.34. Program C choice
    B.3.35. Program D choice
    B.3.36. Program E choice
    B.3.37. Program F choice
    B.3.38. Follow-up to “No” vote
    B.3.39. Preferences for policies other than cap-and-trade
    B.3.40. Most important attributes
    B.3.41. Attachment to Oregon
    B.3.42. Ethnicity
    B.3.43. Sectors providing household income
    B.3.44. Political ideology
    B.3.45. Political party identification
    B.3.46. Educational Attainment
    B.3.47. Employment status
    B.3.48. Attitude: climate change real and serious
    B.3.49. Attitude: climate change human-caused
    B.3.50. Attitude: responsibility to fix climate
    B.3.51. Inter-generational concern: descendants
    B.3.52. Inter-generational concern: ancestors
    B.3.53. Primary heating fuel used
    B.3.54. Usual forms of transportation
    B.3.55. Perception of researcher bias
    B.3.56. Attitude: Respondent’s experience with survey
    B.3.57. Feedback text message
  B.4. Appendix: Selection Model and Selection Correction
    B.4.1. Variable Selection
      B.4.1.1. Inventory of candidate explanatory variables
      B.4.1.2. Probit selection model
    B.4.2. Selection Correction Strategy
  B.5. Appendix: Choice Experiment Randomizations
  B.6. Appendix: Descriptive Statistics for Some Basic Variable Relationships
    B.6.1. Relationships among respondent characteristics and attitudes
    B.6.2. Share of "YES" votes for program by category of respondent
    B.6.3. Share of "YES" votes by aspect of choice task
    B.6.4. Identical distributions of program attributes across tasks?
    B.6.5. Votes as a function of non-mutually exclusive categories

C. CHAPTER 4 APPENDIX
  C.1. Online Appendix: Sensitivity Analysis

REFERENCES CITED

LIST OF FIGURES

1. Department Selection Heat Map
2. Response time summary
3. Distribution of predicted WTP for basic cap-and-trade program
4. Distribution of predicted WTP for alternative cap-and-trade program
5. Map of predicted WTP for basic cap-and-trade program
6. Map of predicted WTP for alternative cap-and-trade program
7. FBI vs. OSP data
8. Google Trends search for Measure 114
9. Synthetic control difference-in-difference for Oregon background checks (per capita)
10. Weekly timeseries (2018-2022)
11. Weekly timeseries (2022)
12. Daily timeseries
13. Event Study for OSP background checks (11-08)
14. Event Study for OSP background checks (10-2)
15. Timeseries by majority vote share (2018-2022)
16. Timeseries by majority vote share (2022)
17. Timeseries by quartile vote share (2018-2022)
18. Timeseries by quartile vote share (2022)
B1. Example email
B2. Example email
B3. Emails sent by week
B4. Departments by state included in the study
B5. Mean response rate by week by identities
B6. Mean response rate by week for all putative identities
B7. Response rate differentials by local population size
B8. Response rate differentials by local department size
D1. Demeaned response propensities
E1. Distribution of job loss randomizations
E2. Distribution of program cost randomizations
F1. Descriptive Statistics: Bias by ideology
F2. Descriptive Statistics: Votes by age
F3. Descriptive Statistics: Votes by income
F4. Descriptive Statistics: Votes by ideology
F5. Descriptive Statistics: Votes by education
F6. Descriptive Statistics: Votes by employment
F7. Descriptive Statistics: Votes by climate 1
F8. Descriptive Statistics: Votes by climate 2
F9. Descriptive Statistics: Votes by ancestors
F10. Descriptive Statistics: Votes by cost decile
F11. Descriptive Statistics: Votes by benefits
F12. Descriptive Statistics: Votes by task
F13. Descriptive Statistics: Party votes by task
F14. Descriptive Statistics: Time on task
F15. Descriptive Statistics: Touches per task
F16. Descriptive Statistics: Clicks per task
F17. Descriptive Statistics: Cost per task
F18. Descriptive Statistics: Benefits per task
F19. Descriptive Statistics: Carbon jobs per task
F20. Descriptive Statistics: Green jobs per task
F21. Descriptive Statistics: Auction per task
F22. Descriptive Statistics: Equipment per task
F23. Descriptive Statistics: Workers per task
F24. Descriptive Statistics: Relief per task
F25. Descriptive Statistics: Regulations per task
F26. Descriptive Statistics: Votes by sector
F27. Descriptive Statistics: Votes by responsibility
F28. Descriptive Statistics: Votes by descendants
F29. Descriptive Statistics: Votes by transportation
A1. Synthetic control difference-in-difference for Oregon background checks (raw count)

LIST OF TABLES

1. Comparison of departments included and excluded from experiment
2. Identity Creation: Last names used in study
3. Identity Creation: First names used in study
. . . . . . . . . . . . . . . . 34 4. Balance Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 5. Emails Categorized by Outcome . . . . . . . . . . . . . . . . . . . . . . . 36 6. Response Rate Differences by Race and Gender . . . . . . . . . . . . . . . 37 7. Response Rate Differences for Race and Gender Interactions . . . . . . . . . 38 8. Department Size Heterogeneity . . . . . . . . . . . . . . . . . . . . . . . 39 9. Descriptive statistics for cap-and-trade programs and for alternative ways of capturing individual preference heterogeneity . . . . . . . 91 10. Descriptive statistics for ZCTA-level heterogeneity in preferences selected by LASSO for Model 5 . . . . . . . . . . . . . . . . . 92 11. Differences in baseline marginal utility parameter estimates across alternative specifications . . . . . . . . . . . . . . . . . . . . . . . 94 12. Additional parameters for Model (2) in Table 11 . . . . . . . . . . . . . . . 96 13. Additional parameters for Model (3) in Table 11 . . . . . . . . . . . . . . . 96 14. Additional parameters for Model (4) in Table 11 . . . . . . . . . . . . . . . 97 15. Additional parameters for Model 5 in Table 11 . . . . . . . . . . . . . . . . 98 16. For comparison, different implications of estimated utility parameters across specifications . . . . . . . . . . . . . . . . . . . . . . . 100 17. Firearm laws in 2023 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 18. Change in Oregon Background Checks after Measure 114 . . . . . . . . . . 131 19. Cummulative change in Oregon Background Checks after Measure 114 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 xix Table Page 20. County-level changes in Oregon Background Checks after Measure 114 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 B1. Distribution of Race, Ethnicity and Gender identity assignment by state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160 B2. Distribution of Race, Ethnicity and Gender identity assignment by week . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 B3. Response time and word count of response measured across identities . . . . 166 D1. Descriptive statistics: response/non-response models) . . . . . . . . . . . . 241 D2. Binary probit parameter estimates for selection model . . . . . . . . . . . . 245 xx CHAPTER I INTRODUCTION Economics as a discipline has expanded its scope of study immensely over the last few decades. This is due, in part, to the explosion of Big Data and Data Science methods, in tandem with advancements in computing technology. These two developments have opened doors to styles and topics of research previously considered infeasible, redefining what economists can study. The growing breadth of topics might also be attributed to shifting attitudes about what economists should study. These changing attitudes are driven by a combination of two prevailing forces. First, economists are realizing the potential benefits of using the tools and methods of economics to complement work done by researchers in other social science fields. Second, the global community is faced with an onslaught of pressing, complex and incredibly challenging social issues. Coping with these challenges and designing effective policy responses for any of these social crises requires a cross-disciplinary approach. 
The following chapters pertain to three of the many crises of our time: interactions between police and the public, market-based policies to limit climate change, and the (un)intended effects of firearms control laws. This dissertation highlights topics that economists can address quantitatively and where an economic perspective can help governments design better policies.

Chapter 2 contributes to a growing literature that checks for evidence of bias in policing in the United States. I use an experimentally designed correspondence study to test statistically whether police departments respond differently to requests for information based on the perceived race/ethnicity or gender of the requester. I send email requests to more than 2,000 U.S. police departments asking for information about how to lodge a formal complaint about an officer in that department. I systematically vary the putative race/ethnicity (Black, Hispanic and White) and gender (male and female) of the email sender. I analyze response behavior on a handful of dimensions (e.g., whether the department responds at all or how long it takes the department to respond). I then examine which, if any, observable departmental or jurisdictional characteristics (e.g., size of agency and median income for the department’s jurisdiction) are related systematically to response behavior. One of the most valuable features of the study’s design is that it allows me to assess departments for evidence of police bias without relying on police-provided data. Results indicate that police are statistically significantly less likely to respond to requests from Black and Hispanic individuals than from White individuals. In addition, White males are the most likely to receive responses, while Black and Hispanic males are the least likely.

Chapter 3 (co-authored with Trudy Ann Cameron) addresses the increasingly pressing issue of climate change. The project was inspired by Oregon’s failed attempts (in 2018 and 2020) to pass a state-wide carbon cap-and-trade program. Opponents of the program advocated for a public referendum, but no referendum has taken place. In 2021, we surveyed a representative sample of Oregonians about their preferences concerning a range of hypothetical carbon cap-and-trade programs in Oregon. The survey is designed so that we are able to estimate not only total willingness to pay (WTP) for specific stylized programs, but also marginal willingness to pay for specific program attributes. First, we estimate preference parameters and WTP values for a representative Oregonian. Next, combining information volunteered by the respondents with publicly available geographic data (e.g., from the American Community Survey and other geo-coded sources), we estimate WTP values for different socio-demographic, regional, and ideological groups. Using our marginal WTP estimates, we can produce two novel measures: (1) a social benefit of carbon (reduction) and (2) Oregonians’ marginal rate of substitution between jobs in carbon-intensive industries and jobs in green industries. Finally, we conduct a benefit function transfer exercise to illustrate the implications of our estimates for support for carbon cap-and-trade programs at the ZCTA level across other U.S. states.

Chapter 4 (co-authored with Katie Bollman, Ben Hansen, and Ed Rubin) examines the impact of firearms control laws. As with Chapter 3, recent Oregon legislation inspired this project.
In November of 2022, Oregon voters narrowly passed Measure 114, an initiative aimed at curbing access to firearms and high-capacity magazines. In addition to several other provisions, Measure 114 requires all Oregonians to obtain a permit in order to purchase a firearm. Consistent with the literature that examines anticipatory firearm-purchasing behavior, we document a surge in firearms sales in Oregon. In the first part of our analysis, we use data at the state-month level on FBI background checks as a proxy for firearms sales and apply a synthetic control difference-in-difference method to establish a strong causal effect of Measure 114 on firearms sales in Oregon. We then use background-check data provided by the Oregon State Police to examine how Oregonians responded to Measure 114 at the daily and county level. Using time-series data and event studies, we observe that Oregonians began to increase their firearms purchases ahead of election day in anticipation of the law. Furthermore, after the passage of Measure 114 and before the law went into effect, we observe a historic surge in firearms sales—far outpacing the documented response to any other event in Oregon. The observed stockpiling of firearms in response to Measure 114 is a critical lesson for policymakers who may seek to curb the proliferation of firearms in the community. Akin to the Green Paradox in the world of environmental regulation, the anticipation of restrictive laws can lead to unintended and counterproductive impacts. Addressing the gun-related health crisis in the U.S. is essential for the welfare of American communities, but policymakers should be wary of this “Steel Paradox.”

CHAPTER II

POLICE ARE LESS LIKELY TO RESPOND TO REQUESTS FOR HELP FROM MINORITIES: FIELD EXPERIMENT EVIDENCE OF POLICE DISCRIMINATION

Numerous high-profile incidents have led to accusations that police departments struggle with equity and accountability. I use an experiment to test both. In the context of a correspondence study, I send emails to more than 2,000 U.S. police departments requesting information about how to lodge a complaint against an officer. Manipulating the names of the putative email senders, I compare department response rates according to specific characteristics of the complainant: race/ethnicity (Black, Hispanic and White) and gender (female and male). I find that departments are less likely to respond to emails signed with Black and Hispanic names. Differences in response rates become more pronounced when I interact gender with race/ethnicity. These differences exacerbate the more basic problem of a low overall response rate of 67.4 percent. I find little evidence that either department size or the characteristics of the local population are correlated with response rates. Results from this experiment support the accusations that local policing suffers, on average, from issues of bias and transparency.

2.1 Overview

Research suggests that police activities likely generate substantial benefits to society. Benefits can accrue from their direct effect of reducing crime (e.g., Chalfin, Hansen, Weisburst, & Williams, 2021; Mello, 2019; Weisburd, 2021; Weisburst, 2019b). Social benefits can also result from less-direct interventions, like a reduction in traffic fatalities (e.g., DeAngelo & Hansen, 2014). Chalfin and McCrary (2018) argue that the benefits of policing are so considerable that, despite their substantial budgets, many departments remain underfunded.
However, there is a longstanding debate concerning the impacts that police practices may have on social welfare. Of primary concern is the presence of bias in police behavior—in particular, racially motivated bias. In 2020, protests over racially biased policing broke out across the U.S. after officers in the Minneapolis police department killed George Floyd. Despite the constant presence of the topic of police reform in the national dialogue, and alarmingly frequent anecdotes reported in the media, few rigorous studies exist that causally document bias in local policing (M. R. Smith, Rojek, Petrocelli, & Withrow, 2017).

Biased policing describes the tendency for police to interact with individuals at a different frequency or in a different manner depending on the sociodemographic characteristics of those individuals. Police dictate the frequency of interaction via their decisions about where to police (e.g., patrol routes) and whom they police (e.g., traffic stops). Conditional on an interaction taking place, the manner in which police conduct themselves can vary in terms of treatment during the interaction (e.g., use of force), the resulting outcome (e.g., caution, citation or arrest), or police accountability following an interaction (e.g., consequences in the event of police misconduct).

A growing body of research highlights the disproportionate burden that policing can place on people of color in these various contexts. For example, M. K. Chen, Christensen, John, Owens, and Zhuo (2021) find that neighborhoods with larger Black populations experience a considerably higher police presence. Similarly, there is evidence that people of color are more likely to be stopped by police (e.g., Bulman, 2019; Gelman, Fagan, & Kiss, 2007; Pierson et al., 2020). There is also evidence that the result of a police-citizen interaction depends on the citizen’s race. Numerous studies find that people of color are more likely to experience the use of force by police (e.g., F. Edwards, Lee, & Esposito, 2019; Fryer, 2020; Nix, Campbell, Byers, & Alpert, 2017; Ross, 2015). People of color are also more likely to be targeted for traffic citations and asset forfeitures (e.g., Goncalves & Mello, 2021; Makowsky, Stratmann, & Tabarrok, 2019; Sances & You, 2017; West, 2018). Research remains limited, however, on biases in police accountability. Stroube (2021) documents that, in Chicago, formal complaints made by Black residents were less likely to be sustained than formal complaints made by White residents.

However, most studies that address biased policing, while descriptively informative, cannot unambiguously attribute causality. Establishing causality in the context of biased policing is difficult. First, differences in the frequency of interaction do not necessarily reflect biased policing. There is the possibility of a systemic selection problem. Consider the scenario where sociodemographic groups participate in criminal activity at different frequencies. In this case, unbiased policing could still result in heterogeneous rates of police-citizen interactions across sociodemographic groups (Fridell, 2017). Second, measuring biased policing by comparing outcomes for citizens conditional on an interaction with police does not permit causal inference. Suppose the motivation for police initiating an interaction with a citizen is biased, even though outcomes for all police-citizen interactions are similar.
In that case, a naive analysis can obscure the presence of biased policing (e.g., Knox, Lowe, & Mummolo, 2020; Ross, Winterhalder, & McElreath, 2018). Causal identification of bias in police accountability is especially challenging. Leveraging observational data necessitates substantial assumptions. Consider the problem of comparing differences in sanctions for officers between White and Black complainants. One must assume that the reason for the interaction, the conduct during the interaction, and the actions taken by the complainant after the interaction are approximately identical. Furthermore, to compare formal complaint outcomes, all complaints need to be filed, which could be a substantial obstacle (Ba, 2016). Thus, researchers remain divided on the existence and extent of biased policing (Fridell, 2017; M. R. Smith et al., 2017).[1] Furthermore, while some researchers employ research designs that permit causal inferences, these analyses involve strong assumptions, and the vast majority of these studies rely on self-reported police data. Such reliance on police-reported data can lead to inconclusive or incorrect conclusions if police departments strategically or unintentionally under-report or misreport (e.g., Luh, 2019).

[1] While limited, research has made efforts to address these challenges and find evidence of biased policing (e.g., Antonovics & Knight, 2009; Gelman et al., 2007; Ross, 2015; West, 2018).

In this paper, I estimate the causal effect of bias on police transparency. To identify this effect, I conduct a field experiment on a sample of 2,134 U.S. police departments. I created six fictitious citizen identities with three different apparent races/ethnicities (Black, Hispanic and White) and two apparent genders (male and female).[2] To create an identity, I choose common first and last names that are strongly associated with a specific race/ethnicity and gender, and create individual email accounts for each identity. I then use these identities to email each department a request for help in making a complaint about an officer in that department. I send each police department an identical email, with two exceptions. First, I vary the email sender’s name to signal race/ethnicity and gender. Second, I vary the sign-off of the email between an amicable and a curt tone.

[2] In this paper, I will use “race” as a catchall term for both race (Black and White) and ethnicity (Hispanic). I am sensitive to the distinction between race and ethnicity, and I chose to collapse the distinction in this study for simplicity’s sake.

Police departments respond to 67.5% of these information requests. Response rates for emails from Black or Hispanic identities are both 10 percentage points (pp) lower than the response rates for emails from White identities—differences that are both significant at the 1% level. Emails from White male identities receive the highest response rate, at 75.8%, marginally higher than the response rate for emails from White female identities. Response rates for emails signed with Black and Hispanic male identities are 13.9 and 15 pp lower than for White male identities, and are marginally lower than the response rates for Black and Hispanic female identities. The tone of the sign-off for the email does not appear to affect response rates, either on its own or when interacted with the different identities.

I use this particular experimental design for several reasons. The challenge of causally identifying discrimination is not unique to the context of law enforcement.
Over the last decade, correspondence studies, a type of randomized controlled trial (RCT), have become an increasingly popular tool for researchers studying the presence of discrimination (Bertrand & Duflo, 2017).[3] Emulating the seminal work of Bertrand and Mullainathan (2004), researchers have used correspondence studies to identify a variety of types of discrimination (e.g., gender, age, or race) in various contexts (e.g., housing, medical services). To date, most correspondence studies focus on Black versus White discrimination, primarily in the context of hiring practices. There are very few audit or correspondence studies that focus on discrimination in the provision of public services in the United States (Butler & Broockman, 2011; Einstein & Glick, 2017; Oberfield & Incantalupo, 2021; White, Nathan, & Faller, 2015). Considering that marginalized groups, on average, are more likely to depend on public services, discriminatory practices are of utmost concern for social planners.

[3] In a correspondence study, individuals (often fictitious)—who are identical in terms of all observable characteristics other than the characteristic of interest—apply for a job, service, or good. The researcher then examines whether the experimentally varied characteristic of interest affects the outcome of the application or request (Bertrand & Duflo, 2017). The present study uses email instead of the traditional approach of “snail mail,” and—as explained below—requests assistance from police departments instead of applying for jobs or making purchases.

To the best of my knowledge, the only prior correspondence study that concerns law enforcement agencies is Giulietti, Tonin, and Vlassopoulos (2019). These authors conduct a correspondence study with a wide range of public institutions, including sheriff’s offices. In their study, the authors email the various public institutions with benign requests for information. The authors vary the identity of the requesters, using two distinctively black male names and two distinctively white male names. The authors find that these public institutions (ranging from public libraries to sheriff’s offices) are less likely to respond to emails from individuals with distinctively black names. Furthermore, among the various institutions, this effect is most pronounced for sheriff’s offices.

By using a correspondence study, I overcome two of the main challenges in studying discrimination in the context of law enforcement: (1) finding causal estimates (as opposed to mere associations) and (2) avoiding potentially compromised self-reporting of data collected or provided by law enforcement agencies. Estimates from properly randomized correspondence studies can reasonably be assumed to be causal. As mentioned above, the primary obstacles to causal inference in the study of potentially biased policing arise from systematic selection—by types of people into criminal activities, and stemming from police discretion concerning with whom they interact. I avoid these challenges by creating a citizen-initiated police interaction not predicated on a crime taking place, and by designating my outcome of interest as the police department’s decision to interact with the potential complainant.[4]

[4] I use “citizen” as shorthand for “member of the community” and not to imply any formal citizen/resident alien/illegal alien distinction.

Avoiding the use of administrative data provided by police departments has significant advantages.
First, intentionally or inadvertently, departments can have ongoing difficulties reporting accurate data (e.g., Luh, 2019). Second, police data can be a product of subjective reporting by individual police officers and department-specific classification conventions.[5] Even when police officers honestly record officer conduct, decisions made in the heat of the moment during any given citizen-officer interaction could influence how events are recorded. Finally, departments may be unwilling to disclose “sensitive information.”[6]

[5] For instance, PolicingProject.org describes the discrepancies across states in reporting requirements for officer-initiated stops. RevealNews.org finds that the Washington D.C. police department has a comparatively loose definition of “resisting arrest.”
[6] Weisburst (2019a) does not find evidence of racially biased policing in Dallas. However, Weisburst hypothesizes that the department’s willingness to disclose its data to researchers might stem from the fact that the Dallas police do not appear to have a problem with biased policing.

A correspondence study also allows me to use a national sample of police departments. I use a sample of over 2,000 police departments representing all states except Hawaii.[7] As a result, the measures of biased policing and accountability from this study represent policing in the United States on average, rather than for any specific state, county, or city. Consequently, inferences made in this study are more likely to reflect systemic nationwide behavior patterns rather than specific department cultures.

[7] Hawaii’s exclusion was a result of random selection. Hawaii only has four distinct police departments.

There are two motivations for using requests for a complaint form as the experiment’s intervention. To start with, I need a plausible reason for interaction to conduct a correspondence study involving police departments. Because police are emergency responders, ethical and natural motives for citizen-initiated contact with the police are limited. Requesting a complaint form is particularly well-suited for a correspondence study. The average citizen likely does not already know how to lodge a formal complaint, and it is believable that a citizen would need guidance in the process. Additionally, it follows that a citizen who believes the police have wronged them would be unwilling or reluctant to talk on the phone or appear at the station in person (e.g., Ba, 2016).

The primary motivation for requesting a complaint form is to examine openness to police accountability. Research remains limited on police accountability despite its centrality to conversations surrounding equitable policing. By asking departments for help in making a complaint about an officer in their department, I explicitly test the willingness of departments to hold their officers accountable. I explore two measures of police accountability. First, the overall response rate from this experiment is a descriptive measure reflecting a citizen’s likelihood of receiving assistance in any complaint process. Second, by comparing response rates between the different races and genders of the requesters, I measure the presence of racial or gender bias in the context of police accountability. Given that citizen complaints are one of the only tools available for citizens to address police misconduct, it is crucial to understand any obstacles to the use of these tools—especially racial or gender discrimination.
In addition to establishing causal evidence of biased policing at the national level, I make several novel contributions. This study is the first correspondence study employed with local police departments and the first to focus on the potential for police accountability to be affected by bias. A particularly significant element of this study is the inclusion of Hispanic identities and gender. The majority of research concerning biased policing has been concerned with differences between White and Black citizens. However, as Weitzer (2014) points out, the lack of research on police-citizen interactions for Hispanics is “particularly puzzling,” especially considering the growing population of Hispanic Americans. Even independent of race, it is important to understand whether police tend to discriminate against a particular gender. Additionally, many stereotypes that may motivate biased policing frequently include the intersection of race/ethnicity and gender. By including both race/ethnicity and gender in my study, I can test correlations between potentially problematic stereotypes and policing practices.

2.2 Experimental Design and Data

2.2.1 Experiment. In this section, I describe the design and implementation of this correspondence study. The objective of the study is to test whether police departments exhibit signs of racial/ethnic or gender discrimination. In broad strokes, this study collects contact information for a nationwide sample of police departments in the U.S. and then emails each department using one randomly assigned instance from a specially designed set of complainant “identities” that I create.[8]

[8] I preregistered this experiment at the AEA RCT Registry, and the pre-analysis plan can be found here.

Police Department Selection: The police departments included in this study are a stratified random sample. For inclusion in the study, I sought police departments that are associated with a local government (i.e., no state police) and that serve a population of at least 7,500 people. To assemble my list of police departments, I randomly sampled local governments, in batches of 1,000, from the universe of local governments provided by the U.S. Census.[9] For my sample of local governments, I searched the internet for an email address for the corresponding police department; I conducted a separate search for an email address for each department (i.e., I did not use an LEA directory). Some of the selected local governments did not have their own local police department, and some police departments did not have publicly available email addresses. I recorded each of these outcomes when they occurred and dropped the local government in question from the study.

[9] I filtered the universe of local governments to exclude states, counties, and all governments with populations less than 7,500.
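To make the filtering-and-sampling step concrete, the following is a minimal sketch, not the study's actual code; the file name, the column names (govt_type, population), and the batch-sampling call are assumptions for illustration.

import pandas as pd

# Universe of local governments from the U.S. Census.
# File and column names here are hypothetical placeholders.
govs = pd.read_csv("census_local_governments.csv")

# Screen: keep sub-county local governments serving at least 7,500 people.
eligible = govs[
    (~govs["govt_type"].isin(["state", "county"]))
    & (govs["population"] >= 7_500)
]

# Draw one batch of 1,000 local governments at random, without replacement.
batch = eligible.sample(n=1_000, random_state=2022)

# Each sampled government was then searched by hand for a local police
# department and a publicly available department email address; governments
# failing either screen were recorded and dropped from the study.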
Through my sampling process, I identified 2,417 local governments with their own local police department. I could not find publicly available email addresses for approximately 12% (283) of these departments. Table 1 describes how these 283 “no email” departments differ from other departments. In the table, I compare the “no email” departments to (1) all local police departments, (2) local police departments serving populations of at least 7,500, and (3) the departments in my study for which I was able to locate email addresses. Many police departments had multiple publicly available email addresses. When deciding which address to select, I prioritized the general department email, then the police chief, and then the next-highest-in-command officer. In the end, I retained 2,134 local police departments as both eligible and able to receive emails, representing 49 states.[10] Figure 1 illustrates the proportion of each U.S. state’s population that is represented by the departments selected for this study. To calculate these proportions, I summed the local populations served by the selected departments in a state and divided that sum by the state’s total population. Please refer to Appendix A.1 for more details.

[10] As noted above, Hawaii’s exclusion from the study was an unintentional result of the sampling process.

Identity Creation: I use the names of the putative email senders to signal race and gender. I created six broad categories of identities for this study: Black female, Black male, Hispanic female, Hispanic male, White female, and White male. Sixty unique first-name/last-name combinations are specified for each type of identity.

For this study, I selected last names from the “Frequently Occurring Surnames in the 2010 Census” dataset. I selected last names that were both racially distinctive and commonly occurring. To select racially distinctive names, I found names highly concentrated in one racial group—this requires that the name is both common for a particular race and uncommon for other races. However, in some cases, the most racially distinctive names were not commonly used in the United States. I avoided using very uncommon surnames to avoid arousing the suspicions of the police departments. I constructed a simple algorithm, described in Appendix A.2 and sketched below, to select sufficiently racially distinctive and reasonably common names. I selected six last names for each race.

I referred to Gaddis (2017b) and Gaddis (2017a) to select the first names. Motivated by the frequent use of names as signals for races in audit studies, Gaddis conducts two experiments that explicitly test which first and last names are racially distinctive. In these experiments, Gaddis asks subjects which race they associated with a particular name. Gaddis conducted this experiment for names commonly used to represent Black people (Gaddis, 2017b) and Hispanic people (Gaddis, 2017a) in audit studies. I chose the ten most racially distinctive first names for the respective identities from these two studies. In total, I created 360 unique names (6 identities × 6 last names × 10 first names).

After selecting the names for each identity, I created a unique email address for each last name used in the study (e.g., olson.2922@examplemail.com). I then created a unique email address profile for each identity (e.g., Claire Olson <olson.2922@examplemail.com>). As a result, the full names of the identities were visible in the email inboxes of the police departments.[11] The complete list of names can be inferred from Tables 2 and 3 (360 unique name combinations). I ultimately omitted six high-profile recognizable celebrity names from my set of 360 names: Denzel Washington, Tyra Banks, DaShawn Jackson, Seth Meyer(s), Katelyn Olson, and Pedro Martinez. These names have widespread recognition, and during the testing process, respondents noted that they strongly associated these names with celebrities having the same name.

[11] Please refer to Appendix A.3 for details concerning the specific email addresses used.
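Appendix A.2 documents the actual surname-selection algorithm; the sketch below is only one plausible version of the two screens described above (concentration within one group plus a minimum national frequency). The thresholds, and the assumption that the Census surname file uses its public column layout (name, count, pctwhite, pctblack, pcthispanic), are mine, not the dissertation's.

import pandas as pd

# "Frequently Occurring Surnames in the 2010 Census." The public file
# suppresses small group shares with "(S)"; treat those as missing.
names = pd.read_csv("Names_2010Census.csv", na_values="(S)")

def distinctive_surnames(df, pct_col, min_share=85.0, min_count=10_000):
    """Keep surnames concentrated in one group (at least min_share percent
    of holders) that remain common nationwide (at least min_count holders),
    so names are distinctive without looking suspiciously rare."""
    keep = (df[pct_col] >= min_share) & (df["count"] >= min_count)
    return df.loc[keep].sort_values("count", ascending=False)["name"]

white_surnames = distinctive_surnames(names, "pctwhite")
black_surnames = distinctive_surnames(names, "pctblack")
hispanic_surnames = distinctive_surnames(names, "pcthispanic")
# The study draws six surnames per race/ethnicity from lists like these.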
Email: Each department then received one email from a single randomly assigned identity. All of these emails were identical, with two exceptions: (1) the identity of the email sender and (2) the sign-off used in the email. I varied the sign-off to test whether the tone of the email might have any effect on police behavior. I decided to use the sign-off as the vehicle for this treatment because the sign-off influences the perceived amicability of the email but minimally alters the content of the email. I randomly assigned the sign-off across emails. I use the email sender’s name twice in each email to increase the salience of this information. Each email had the following format:

From: Full name
Subject: Complaint Assistance
Body: X Police Department,
My name is first name and I am interested in filing a complaint against an officer in your department. I am not sure what to do, and would like to request information on how to make a complaint. Can you please send me this information?
Sign-off
Full name

The italicized words indicate that these words changed across emails. As described above, I created profiles for the email accounts so that departments would see the complainant’s full name twice and their first name three times. Police departments were addressed directly—without, for example, a “Hello”—because I found during the pretesting process that the inclusion of a salutation increased the chance of the email being marked as spam. The sign-off varied between a cordial sentiment (“Thank you!”) and a curt sentiment (“Sincerely,”). Appendix A.4 includes images of example emails and other details on the design of the email template.

Timing: I conducted the study over a ten-week period, from late June 2022 to late August 2022. I sent roughly 210 emails each week, split across Monday, Tuesday, and Wednesday. I sent all emails at approximately 9 a.m. local time for each police department. I rolled out the experiment over ten weeks to minimize the chance that a single unanticipated news event would compromise the generalizability of the results. Splitting the emails across days of the week simply reduced the logistical difficulty of sending the emails. I did not send emails on Thursday, Friday, or weekend days, to give departments at least two full weekdays to respond to the inquiry.

Treatment Assignment: The “treatment” here is the identity (race and gender) that each department sees. I stratified treatments by week and by state. As a result, the number of departments for each state is balanced each week. Treatment was then randomly assigned across departments within each week-state stratum. Appendix A.5 details the treatment-assignment process; a minimal sketch follows.
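The sketch below is a simplified illustration of stratified assignment, not the procedure documented in Appendix A.5; the dataframe and its week, state, identity, and signoff columns are hypothetical.

import numpy as np
import pandas as pd

IDENTITIES = ["Black female", "Black male", "Hispanic female",
              "Hispanic male", "White female", "White male"]
SIGNOFFS = ["Thank you!", "Sincerely,"]

def assign_treatments(depts: pd.DataFrame, seed: int = 114) -> pd.DataFrame:
    """Randomize identities within each week-by-state stratum so identity
    counts stay (approximately) balanced inside every stratum."""
    rng = np.random.default_rng(seed)
    out = depts.copy()
    for _, idx in out.groupby(["week", "state"]).groups.items():
        reps = -(-len(idx) // len(IDENTITIES))    # ceiling division
        labels = (IDENTITIES * reps)[: len(idx)]  # near-equal shares
        out.loc[idx, "identity"] = rng.permutation(labels)
    # The cordial vs. curt sign-off is randomized independently across emails.
    out["signoff"] = rng.choice(SIGNOFFS, size=len(out))
    return out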
2.2.2 Data. I use several additional datasets in this study. As mentioned, I used data from the US Census to create a pool of local governments in the department-selection process (U.S. Census Bureau, 2021). I limited the local governments eligible for inclusion in the study to exclude state and county governments and governments with populations less than 7,500 residents.12 I then matched my selected departments, after their selection, with databases of police departments from OpenPolice.org and ICPSR (Lesko, Silverman, & Troup, 2021, April 23; United States & Bureau of Justice Statistics, 2012).13 These department databases provided information about exact agency locations and unique official identification numbers.14

12 During the collection process for police department email addresses, due to multiple instances of the same state having multiple local governments with the same name, I inadvertently included 117 departments in communities with populations less than 7,500. I nevertheless include these departments in the study and demonstrate that the main results are robust to their exclusion.
13 These databases did not include email address contact information.
14 I collected the Originating Agency Identifier (ORI) numbers for all departments that have one. See the Office of Justice explanation for details.

The study includes data for several other observable department characteristics. These characteristics, ex ante, seemed to be potentially important determinants of the response behaviors of police departments: numbers of officers and civilian employees for each department, county-level income information, and county-level racial/ethnic composition. I use the Uniform Crime Reporting (UCR) Program data compiled by Kaplan (2021) for employee counts for each department. The UCR dataset includes employee counts through 2020. However, some departments are missing data for 2020; where available, I use the most recent employee count since 2010 (231 departments). If a department does not have an employee count after 2010, I record that department's employee count as missing (29 departments). I compile income and race data from the 2019 American Community Survey (U.S. Census Bureau, 2019). In the 1-year ACS data, 148 counties are missing data for the median income of Black households, and 55 counties are missing data for the median income of Hispanic households.

Police departments selected for the study are associated with governments smaller than counties. However, it is not clear exactly which population each department would interact with. If I use data for a geography that is too precise (e.g., the zip code of the department), I risk mischaracterizing a department's local context. Accordingly, I use county-level data to characterize the economic and racial composition of a department's environs. I sacrifice some precision with this approach but avoid inaccuracy.

Table 4 shows relevant department characteristics. Column (1) of the table is the mean value of each characteristic for departments that received emails from White-male identities. Columns (2) through (6) are the differences between the White-male mean value and the mean values for the other identities. Table 4 confirms that the treatment was successfully randomized across the most obvious department characteristics relevant to this study. Only one of the 80 differences throughout the rows and columns is statistically significant, and only at the 10% level (Pop. % Black, county-level).

2.3 Results

2.3.1 Summary Statistics. I sent the first emails on Monday, June 27, 2022, and the last emails on Wednesday, August 31, 2022. In total, I attempted to contact 2,134 police departments. Table 5 summarizes the outcomes of these emails. My final analyses (below) exclude the 37 emails that were undeliverable (i.e., denied or failed).15 Table 5 indicates a response rate of 66.31%. If I exclude the 37 undeliverable emails (Denied or Failed), the response rate is 67.48%. This response rate, however, is aggregated across all correspondent identities and says nothing about biased policing. The Denied category in column 1 represents emails that were blocked by police departments. The small number of Denied emails does not cause concern for the experiment's validity.

15 During the experiment, I received "undeliverable" messages from the email server I used. These messages explained why an email could not be delivered. In some cases, the email address I used for a police department was incorrect or no longer existed; these are the "failed" emails. In other cases, the police department's email server blocked my email for some unknown reason; these are the "denied" emails.
However, it is concerning that some police departments have structured their email servers to block a seemingly legitimate request for help. Of course, the request is part of an experiment, but it is easy to imagine a citizen with a genuine complaint making a similarly formatted request. The Failed emails are also a cause for concern in terms of police accountability. Given that I manually collected the email address for each police department from the department's own website, a Failed email implies that a department may have neglected to maintain updated and accessible contact information.

I summarize department response times in Figure 2. A large majority of responses from police departments occurred during the first 24 hours after I sent the email, and I received 97% of the responses within two days. The timing of the responses suggests that departments take the request for help seriously. Together, these results suggest that not all departments are willing to assist the public in making a complaint against an officer, but that the willing departments engage promptly.16

16 In Appendix A.7, I examine whether response time differs systematically across identities but find no evidence that it does.

As mentioned, the study's emails were sent out in batches over ten weeks to reduce the chances of current events influencing police department response behavior. In Appendix A.8, Figure B5 depicts the response rate for all identities by week, and Figure B6 breaks down the weekly response rates by identity. The figures suggest that response behavior did not change much over time, at least during the ten weeks of the experiment.

2.3.2 Main results. The primary focus of this study concerns the effect of racial/ethnic or gender biases on transparency in policing. To this end, I estimate variations of the following equation:

$$\mathbb{1}(\text{Response}_i) = \beta_1\,\mathbb{1}(\text{Gender}_i = \text{Female}) + \beta_2\,\mathbb{1}(\text{Race}_i = \text{Black}) + \beta_3\,\mathbb{1}(\text{Race}_i = \text{Hispanic}) + FE_{s,t} + \varepsilon_i$$

where i indexes individual police departments, t indexes the week the email is sent, and s indexes a department's state. The emphasis in this initial analysis concerns the differences in police department response behaviors toward White putative identities and Black/Hispanic putative identities. Accordingly, the omitted identity in the analysis is either White or White male. The main outcome for this study, $\mathbb{1}(\text{Response}_i)$, is a binary indicator for whether a police department responded to an email within four weeks (I record a response for a department if that department replies within 28 days).17

17 I did not record as responses automatically generated emails from departments merely acknowledging that the treatment email was received. I discuss alternative definitions of the response variable below.

$FE_{s,t}$ represents fixed effects for the week I sent an email and for a department's state. Given that I stratified the treatment assignment on week and state, I include fixed effects for both week and state throughout my analyses. Additionally, I cluster the standard errors by week and state. A schematic version of this estimation is sketched below.
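The sketch below is illustrative rather than the chapter's actual code: it estimates the equation above as a linear probability model with hypothetical file and variable names, and it assembles the two-way clustered variance using the Cameron-Gelbach-Miller identity. A caveat of that identity is that the combined matrix is not guaranteed to be positive semi-definite in small samples.

```python
# A minimal sketch of the linear probability specification with week and state
# fixed effects and two-way clustered standard errors:
# V = V_state + V_week - V_{state x week}  (Cameron-Gelbach-Miller).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_outcomes.csv")  # hypothetical analysis file
model = smf.ols("response ~ black + hispanic + female + C(week) + C(state)",
                data=df)
fit = model.fit()

def clustered_cov(labels: pd.Series) -> pd.DataFrame:
    """Cluster-robust covariance matrix for one grouping variable."""
    codes = pd.factorize(labels)[0]
    return model.fit(cov_type="cluster", cov_kwds={"groups": codes}).cov_params()

intersection = df["state"].astype(str) + "_" + df["week"].astype(str)
V = (clustered_cov(df["state"]) + clustered_cov(df["week"])
     - clustered_cov(intersection))
two_way_se = pd.Series(np.sqrt(np.diag(V)), index=fit.params.index)
print(two_way_se[["black", "hispanic", "female"]])
```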
Table 6 reports the most straightforward analysis of differences in response rates across identities, using two alternative weighting schemes. The two weighting schemes allow me to infer two different population parameters. The "unweighted" results describe response rates for an average police department. In other words, if a person were to contact a randomly selected police department, the unweighted response propensities (columns 2 and 4) are relevant. However, departments that serve larger populations interact with more citizens and are thus likely to receive more requests for assistance. Weighting each observation by the department's local population (i.e., the population of its jurisdiction) shifts the interpretation of the key coefficients from the average department's behavior to what the average person should expect to encounter.18

18 I use the square root of the population as a weight, rather than the population itself, because of the wide distribution of population sizes. For example, Los Angeles has a population of close to 4 million, over 200 times the median local population (18,000). However, the standard alternative of logging the populations would compress the disparity between populations too much: the log of Los Angeles's population is approximately 15, comparatively similar to the log of the median local population (log(18,000) ≈ 9.8).

Column (1) of Table 6 compares unweighted department response rates for emails with Hispanic identities (Hispanic emails) and Black identities (Black emails) to the mean response rate for emails with White identities (White emails). The response rate for White emails is 74.86%. Compared to the White email response rate, the response rate for Black emails is 10.42 percentage points (pp) lower [4.33, 16.49], and the response rate for Hispanic emails is 10.66 pp lower [5.54, 15.77]. Both differentials are statistically significant at the 1% level.19 Column (2) repeats the comparison in column (1) while weighting observations by the local population. Estimates across columns (1) and (2) are effectively identical for Black emails. Weighting by population marginally increases the estimated discrimination against Hispanic emails, from 10.66 pp to 11.38 pp [8.48, 14.28].

19 Comparing the coefficients for Black emails and Hispanic emails reveals that the two estimates are not statistically significantly different from each other, although each is individually significantly different from the response rate for White emails.

Columns (3) and (4) compare department response rates for emails with any type of female identity to the mean response rate for emails with any type of male identity (66.03%). The unweighted estimate from column (3) shows that females, on average, were 2.26 pp more likely [-3.92, 8.44] to receive a response. However, the difference is not statistically significant. Column (4) indicates that, when weighting by population, the response rate difference between females and males shrinks to 0.35 pp [-5.91, 6.60].

2.3.2.1 Interaction Effects. The literature shows that race and gender are often individually related to discrimination (Bertrand & Duflo, 2017). There is also evidence that the intersection of race and gender is an essential dimension of discrimination (e.g., Browne & Misra, 2003; Ifatunji & Harnois, 2016). The intersectionality of race and gender also plays a significant role in the criminal justice system (e.g., Doerner & Demuth, 2010; Steffensmeier, Painter-Davis, & Ulmer, 2017; Steffensmeier, Ulmer, & Kramer, 1998).
Motivated by the significance of race-gender intersectionality in discrimination and the criminal justice system, I test whether intersectionality plays a role in discriminatory policing. I do so by estimating a specification that defines (White_i × Male_i) as the base category and distinguishes five alternative race/ethnicity and gender combinations, for a mutually exclusive and exhaustive set:

$$\mathbb{1}(\text{Response}_i) = \beta_1\,\mathbb{1}(\text{White}_i \times \text{Female}_i) + \beta_2\,\mathbb{1}(\text{Black}_i \times \text{Male}_i) + \beta_3\,\mathbb{1}(\text{Black}_i \times \text{Female}_i) + \beta_4\,\mathbb{1}(\text{Hispanic}_i \times \text{Male}_i) + \beta_5\,\mathbb{1}(\text{Hispanic}_i \times \text{Female}_i) + FE_{s,t} + \varepsilon_i$$

where each β indicates the difference in response rate for that combination relative to the omitted (White_i × Male_i) response rate. I omit White male for two reasons. First, parameter estimates from this specification distinguish between the groups commonly discriminated against (e.g., people of color and females) and the group commonly given preferential treatment (White males). Second, this specification permits the most straightforward interpretation of the results, given that the White male identity has the highest response rate (75.78%) among the six identities.

Column (1) of Table 7 reports the percentage-point differential in response rates for the five identities compared to the White male identity. Column (2) reports the results of the same basic specification as column (1) but weights observations by the local populations of the police departments. Column (1) reveals that response rates for Black and Hispanic males were significantly lower than for White males at the 1% level and are the lowest among all the identities. Specifically, Black males were 13.94 pp [6.55, 21.33] less likely, and Hispanic males 15.00 pp [8.05, 21.94] less likely, to receive a response than White males. The corresponding response rates for Black and Hispanic females were higher than those of their male counterparts but still significantly lower than for White males. Black females were 9.70 pp [1.92, 17.48] less likely, and Hispanic females 9.28 pp [0.99, 17.58] less likely, to receive a response than White males. These estimates are statistically significant at the 5% and 10% levels, respectively.

Testing for equality between the within-race coefficients across genders finds that the response rates for Black males and Black females are not statistically significantly different (p-value = 0.3119). In contrast, the response rates for Hispanic males and Hispanic females are statistically significantly different (p-value = 0.0035). The response rate for White females is 2.85 pp [-6.55, 12.25] lower than for White males, but the difference is not statistically significant at the 10% level. White females are the only female identity with a lower response-rate point estimate than their male counterpart within a race/ethnicity grouping. The heterogeneous differences in response rates for gender, when interacted with race, suggest the importance of the intersectionality of race and gender. Among White identities, males receive preferential treatment; this relationship reverses for the Black and Hispanic groups.
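The within-race equality tests reported above are Wald tests of linear restrictions on the interacted specification. A minimal sketch follows, with hypothetical dummy names; one-way clustering by state is shown for brevity, whereas the chapter's estimates cluster by week and state.

```python
# A short sketch of testing equality of within-race, across-gender coefficients.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_outcomes.csv")  # hypothetical analysis file
fit = smf.ols(
    "response ~ white_female + black_male + black_female"
    " + hispanic_male + hispanic_female + C(week) + C(state)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": pd.factorize(df["state"])[0]})

# statsmodels accepts restrictions written against parameter names.
print(fit.t_test("black_male = black_female"))        # H0: equal differentials
print(fit.t_test("hispanic_male = hispanic_female"))
```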
Studying discrimination with respect to only race or gender, without consideration of the other characteristic, is thus likely to obscure important aspects of the underlying situation.

Weighting by local populations increases the estimated disparity in response rates between White males and all the other identities, except Hispanic males. The response-rate differential for White females in column (2) (6.50 pp [-4.27, 17.28]) is more than double the point estimate from column (1) but is still not statistically significant. When I include local population weights, the estimated differentials for Black females (9.96 pp [5.20, 14.73]) and Hispanic females (13.91 pp [3.81, 24.01]) increase in magnitude and statistical significance. With these weights, the estimated response rate for Black males (16.71 pp lower [7.72, 25.70]) becomes the lowest of all identities. The differential for Hispanic males (14.65 pp [6.66, 22.64]) decreases marginally but remains statistically significant at the 1% level. In contrast to the estimates in column (1) of Table 7, testing for equality across genders within each race/ethnicity group suggests that the response rates for Black males and Black females are statistically significantly different at the 10% level (p-value = 0.0861), while the response rates for Hispanic males and Hispanic females are not (p-value = 0.8545).

2.3.2.2 Department Size. Police department response rates likely depend on the interplay between many factors. Department size, and the population that a department must serve, could affect response rates. For instance, large departments might have the option to dedicate staff solely to the task of replying to requests for help in making complaints. Alternatively, smaller departments might be more sensitive to officer complaints because each staff member is likely to be more familiar with all officers in the department. Larger populations being served could mean that departments have more requests to fulfill. On the other hand, departments that serve small populations might be more familiar with all department activity and therefore more suspicious of the genuineness of the email they receive.20

20 Several responses mentioned that the police department had checked its logs and had no record of an interaction with a person matching the name in the email. Consistent with the pre-analysis plan guidelines, I did not respond.

Table 8 displays the results of a model that interacts race and gender and defines (White_i × Male_i) as the base category, like Table 7, but additionally interacts the race/gender variables with a binary indicator for department size. I use the median number of total employees for the departments included in the study to determine a department's large/small size category. The results reveal that smaller departments seem to discriminate less against White females and Black males than bigger departments do. The estimated response rate for White females at smaller departments is almost identical to the White male response rate. Response rates for Black males are lower than for White males at both bigger and smaller departments; however, the point estimates for these response-rate differentials are larger for bigger departments. For Black males, the response-rate differential at bigger departments increases by 6.97 pp [-5.75, 19.68].
In contrast to White females and Black males, Hispanics and Black females have higher response rates when interacting with bigger departments. Of all the identities, only the response rate for Hispanic males is significantly different across department sizes.

2.4 Discussion

2.4.1 Interpreting Bias. The results of this field experiment constitute substantial evidence of racially biased police practices. When aggregated across genders, the response rates for Hispanic and Black emails are both roughly 10 percentage points lower than for White emails and statistically significantly different from White response rates at the 1% level (Table 6). The question of gender-biased policing is more nuanced than racially biased policing. When comparing the pooled response rates for males to the pooled response rates for females, the results suggest that police departments are slightly more likely to respond to female requests. However, comparing response rates for each race-by-gender group tells a different story (Table 7). White males receive the highest rate of response of all six identities. The low response rates for Hispanic and Black males drive the lower response rates for all males. Hispanic and Black female response rates are both about 9 percentage points lower than White males' and significant at the 10% and 5% levels, respectively. Comparing the results of Table 6 and Table 7 reveals that the intersection of race/ethnicity and gender is an essential part of the story. That Hispanic males receive the lowest response rates of all groups indicates the importance of expanding research about police-citizen relationships to include Hispanic demographics (Weitzer, 2014).

Identifying the mechanism(s) behind the observed hierarchy of response rates for the six identities is beyond the scope of this study. However, it is worth considering why Hispanic males and Black males received the lowest rates of response while White males received the highest. The discrepancy could be explained by the historical narrative of Black and brown males being viewed as criminals (e.g., the racist stereotype of the "superpredator"). A common rebuttal to this hypothesis is that these groups might be more likely to participate in crime—echoing the challenge researchers encounter when attempting to separate biased policing from different levels of participation in criminal activity among different ethnic/racial groups. However, in the context of the present study, no crime has been committed. Black and Hispanic males are simply not treated the same way as their White counterparts.

An alternative explanation is that departments hypothesize that the nature of the complaint might differ across groups. For instance, research suggests that police are more likely to use excessive force with people of color (e.g., F. Edwards et al., 2019; Fryer, 2020; Nix et al., 2017; Ross, 2015). The lower response rates in the present study might reflect departments' belief that complaints from Hispanic and Black males are more likely to concern excessive use of force by one of their officers. As a result, these departments may avoid assisting the individual in making a complaint for fear of the consequences should the complaint be lodged.

Previous work has suggested that documented racial/ethnic discrimination may reflect bias against poorer or less-educated communities, with race/ethnicity serving as a proxy for wealth and education (Tilcsik, 2021).
For instance, Giulietti et al. (2019) make an effort to separate the two types of factors in their correspondence study. In practice, however, this distinction may not matter: the lived experiences of Black and Hispanic populations include bias, regardless of whether the bias results from racism or classism. It is likely that police who disproportionately target non-White groups engage to some degree in both statistical targeting and biased policing (Bertrand & Duflo, 2017). Tilcsik (2021) argues that statistical discrimination "can lead people to view social stereotyping as useful and acceptable and thus help rationalize and justify discriminatory decisions."

2.4.2 Accountability. The primary question this study seeks to answer is whether police departments discriminate on the basis of race/ethnicity or gender. The study's design also emphasizes another critical topic for policymakers interested in reforming police practices: accountability. The design of this correspondence study forces police departments to decide whether to respond to an inquiry based solely on a citizen's race/ethnicity and gender. However, the study also assesses the willingness of departments to assist a citizen attempting to hold one of their officers accountable.

In the existing literature, the only prior correspondence study that includes any type of law enforcement agency is Giulietti et al. (2019). Their correspondence study interacts with many public institutions, including sheriff's offices. The authors email the various public institutions with requests for relevant but benign information, using two Black male names and two White male names to vary the identity of the person asking for information. They find that these public institutions (ranging from libraries to county clerks, in addition to sheriff's offices) are less likely to respond to email requests from individuals with distinctively Black names. Giulietti et al. find response rates for their sheriff's offices of approximately 53% for White male emails and 46% for Black male emails. These overall response rates are noticeably lower than the average response rates in the present study; however, the difference in response rates by race in the Giulietti et al. study is considerably smaller. One explanation for the difference is that Giulietti et al. target sheriff's offices instead of local police departments, and sheriff's offices may face different expectations for accountability. An alternative explanation is that departments do not treat a simple request for general assistance with the same urgency as a request for help in making a complaint against a police officer. When a request for assistance concerns making a complaint, police departments appear more responsive but may be more likely to discriminate.

It should be noted that average response rates are low in both the Giulietti et al. study and the present study. The average response rate of 67.4% for the present study (with a low of 60.6% for Hispanic males) is concerning. Even the most-responded-to identity, White males, has a response rate of only 75%. By design, the complaints mentioned in the present study are fictitious. In reality, however, a citizen attempting to file a formal police complaint suggests potentially serious misconduct on the officer's part. Suppose only six out of ten citizens can obtain assistance making a complaint.
In that case, a simple count of citizen-initiated formal complaints about police officers may not represent a reliable or just basis for holding police officers accountable. This concern is amplified when the groups of people who interact most frequently with police (i.e., people of color) are also less likely to be assisted in making a complaint.

2.4.3 Caveats. This study seeks to understand whether police departments tend to discriminate based on race, ethnicity, or gender. The results suggest that police departments, on average, do discriminate. However, there are a few caveats.

First, although police departments were selected randomly (see Appendix A.1), a department was only eligible for inclusion in the study if it had a publicly available email address. There are likely to be non-random department characteristics that distinguish departments that make their email addresses available to the public from those that do not. In Table 1, I examine how departments without publicly available email addresses differ from other departments. In comparison to the departments I did contact, these departments tend to have lower income levels, higher poverty rates (especially for Black and Hispanic populations), smaller shares of Hispanic residents, and higher shares of rural residents. In addition, these departments on average seem to be smaller, though the estimates are noisy. Consequently, this study's results reflect average department behavior only for a specific type of department. It is plausible that departments willing to share a contact email might also be more willing to engage with the public. Forty departments included in the study had contact emails found somewhere other than the police department's official website (e.g., the police chief's contact information may have been posted on the city's website but not on the police department's own website or the department's specific page within the city's website). The response rate for emails found in this non-standard way was almost 20 percentage points lower than the overall mean (47% versus 66%). Drawing clear inferences from such a small sample is challenging. However, this difference in response rates suggests that departments with easier-to-find email addresses may be systematically more willing to engage with the public.

Second, this analysis does not seek to identify a fixed effect for each individual police department. The results demonstrate that, on average, police departments in the United States have a higher propensity to respond to White emails than to Black or Hispanic ones. However, the data include only one observation per department, so it is not possible to infer systemic bias within individual police departments. Revisiting this type of RCT with the specific aim of learning more about within-department behavior may be of value in future studies.21

21 Multiple requests to one department may raise suspicions about the inquiries, however.

Finally, one must keep in mind that the conclusions of this study pertain to a specific context. This study demonstrates that police departments discriminate on the basis of race/ethnicity or gender when contacted via email for help making a complaint against an officer. The extent to which the types of bias detected here are present in other contexts, for example, a police officer's decision to pull over a vehicle, is unclear.

2.4.4 Conclusion.
This study uses a correspondence design to establish strong causal evidence of biased policing in the United States. Across 2,134 police departments, departments were 10 percentage points more likely to respond to emails from White identities than from Black or Hispanic identities. Interacting the race/ethnicity and gender of the identities revealed that White male identities had the highest response rates, and Black male and Hispanic male identities had the lowest, respectively 13.94 and 15.00 percentage points below the White male response rate. The low overall response rates and the significant bias in responses across identities are each concerning. Low response rates suggest police departments resist accountability. Bias in responding to minority identities suggests that departments are especially unwilling to engage with communities of color—disproportionately policed communities. While the existing literature has been inconclusive about the existence of biased policing, the results of this study suggest that bias in policing does exist and that it may hinder progress toward police transparency and accountability.

2.5 Tables and Figures

Table 1. Comparison of departments included in and excluded from the experiment

| Characteristic | All: Mean | All: Diff. | Pop. ≥ 7,500: Mean | Pop. ≥ 7,500: Diff. | Searched: Mean | Searched: Diff. |
|---|---|---|---|---|---|---|
| Income (county-level) | | | | | | |
| Median income, all HH ($1,000s) | 59.71 | 3.5*** (1.02) | 67.34 | -4.02*** (1.12) | 66.61 | -3.4** (1.11) |
| Median income, Black HH ($1,000s) | 43.38 | 2.04 (1.26) | 47.6 | -1.93 (1.19) | 47.67 | -2.25 (1.27) |
| Median income, Hispanic HH ($1,000s) | 50.27 | 0.69 (0.95) | 53.18 | -2.27** (0.87) | 52.93 | -1.97* (0.89) |
| Median income, White HH ($1,000s) | 64.88 | 5.41*** (1.11) | 74.71 | -4.18*** (1.22) | 73.64 | -3.35** (1.23) |
| % Pop. in poverty | 13.4% | -0.6% (0.31%) | 12.3% | 0.5% (0.29%) | 12.2% | 0.6%* (0.30%) |
| % Black pop. in poverty | 23.5% | 0.3% (1.41%) | 21.9% | 1.8%** (0.62%) | 21.8% | 1.9%** (0.65%) |
| % Hispanic pop. in poverty | 21.2% | -0.1% (0.67%) | 19.6% | 1.5%** (0.48%) | 19.3% | 1.8%*** (0.48%) |
| % White pop. in poverty | 10.4% | -1.0%*** (0.26%) | 8.8% | 0.6%** (0.22%) | 8.9% | 0.5%* (0.23%) |
| Population | | | | | | |
| Local government pop. (hundreds) | 18.57 | 19.6** (6.63) | 48.05 | -8.37 (11.21) | 50.4 | -12.23 (10.31) |
| Pop. % Black (county-level) | 9.4% | 1.4% (0.75%) | 10.4% | 0.7% (0.70%) | 10.2% | 0.6% (0.72%) |
| Pop. % Hispanic (county-level) | 10.8% | 1.1% (0.82%) | 15.4% | -3.4%*** (0.98%) | 14.1% | -2.2%* (0.92%) |
| Pop. % White (county-level) | 73.7% | -3.3%** (1.17%) | 66.3% | 3.5%** (1.25%) | 68.2% | 2.2% (1.25%) |
| Pop. % rural (county-level) | 38.7% | -15.4%*** (1.83%) | 18.5% | 4.2%*** (1.22%) | 20.2% | 3.1%* (1.28%) |
| Department size (# of employees) | | | | | | |
| Total employees | 44.78 | 35.46 (31.04) | 113.97 | -30.96 (53.18) | 128.81 | -48.57 (34.16) |
| Total officers | 35.48 | 27.72 (22.32) | 89.45 | -24.12 (38.21) | 103.47 | -40.27 (28.79) |
| Total civilian employees | 9.3 | 7.74 (9.16) | 24.52 | -6.84 (15.72) | 25.34 | -8.3 (6.13) |

This table compares mean values of geographic and police-departmental characteristics to those of the police departments for which I was not able to find a publicly available email address. There are 283 "no email" departments. Mean indicates the average value for all departments in the comparison group, and Diff. indicates the differential for departments without email addresses. The first two columns compare all police departments in the U.S. (n = 12,523) to the departments without publicly available email addresses. The next two columns compare police departments in the U.S. with local populations ≥ 7,500 (n = 4,575) to the departments without publicly available email addresses.
The last two columns compare the police departments included in my original sample (n = 2,417) to the departments without publicly available email addresses. See Appendix A.1 for details on the sampling procedure. Standard errors in parentheses. Signif. codes: ***: 0.01, **: 0.05, *: 0.1.

Table 2. Identity Creation: Last names used in study

| White | Black | Hispanic |
|---|---|---|
| Olson | Washington | Hernandez |
| Schmidt | Jefferson | Gonzalez |
| Meyer | Jackson | Rodriguez |
| Snyder | Joseph | Ramirez |
| Hansen | Williams | Martinez |
| Larson | Banks | Lopez |

Table 3. Identity Creation: First names used in study

| White Male | White Female | Black Male | Black Female | Hispanic Male | Hispanic Female |
|---|---|---|---|---|---|
| Hunter | Katelyn | DaShawn | Tanisha | Alejandro | Mariana |
| Jake | Claire | Tremayne | Lakisha | Pedro | Guadalupe |
| Seth | Laurie | Jamal | Janae | Santiago | Isabella |
| Zachary | Stephanie | DaQuan | Tamika | Luis | Esmeralda |
| Todd | Abigail | DeAndre | Latoya | Esteban | Jimena |
| Matthew | Megan | Tyrone | Tyra | Pablo | Alejandra |
| Logan | Kristen | Keyshawn | Ebony | Rodrigo | Valeria |
| Ryan | Emily | Denzel | Denisha | Felipe | Lucia |
| Dustin | Sarah | Latrell | Taniya | Juan | Florencia |
| Brett | Molly | Jayvon | Heaven | Fernando | Juanita |

Table 4. Balance Table

| Characteristic | (1) White Male, Mean (n = 359) | (2) White Female, Diff. (n = 352) | (3) Hispanic Male, Diff. (n = 350) | (4) Hispanic Female, Diff. (n = 361) | (5) Black Male, Diff. (n = 358) | (6) Black Female, Diff. (n = 354) |
|---|---|---|---|---|---|---|
| Income (county-level) | | | | | | |
| Median income, all HH (hundreds of dollars) | $667 | -0.3 (13) | 0.7 (13) | -5 (13) | 3.4 (13) | -1 (13) |
| Median income, Black HH (hundreds of dollars) | $477 | -19 (15) | 5 (15) | -1.7 (15) | 6.3 (15) | 7.6 (15) |
| Median income, Hispanic HH (hundreds of dollars) | $531 | -6.8 (10) | 3.3 (10) | -6.8 (10) | -2.7 (10) | 4.8 (10) |
| Median income, White HH (hundreds of dollars) | $733 | 9.6 (14) | 6.8 (15) | -12 (14) | 3.4 (14) | 3.5 (14) |
| % Pop. in poverty | 12 | 0.39 (0.36) | 0.40 (0.36) | 0.12 (0.35) | -0.11 (0.35) | 0.21 (0.36) |
| % Black pop. in poverty | 22 | 0.90 (0.76) | -0.07 (0.77) | -0.20 (0.76) | -0.61 (0.76) | 0.33 (0.76) |
| % Hispanic pop. in poverty | 19 | -0.04 (0.56) | 0.25 (0.57) | 0.13 (0.56) | 0.00 (0.56) | -0.16 (0.56) |
| % White pop. in poverty | 9 | 0.06 (0.27) | 0.08 (0.27) | -0.06 (0.27) | -0.17 (0.27) | 0.02 (0.27) |
| Population | | | | | | |
| Local government pop. (hundreds) | 491 | 41 (128) | -11 (128) | 130 (127) | -104 (127) | 17 (127) |
| Pop. % Black (county-level) | 10 | 1.57* (0.84) | 0.19 (0.85) | 0.64 (0.84) | 0.07 (0.84) | 0.84 (0.84) |
| Pop. % Hispanic (county-level) | 14 | -0.21 (1.11) | 0.72 (1.11) | 0.37 (1.10) | 0.16 (1.11) | 0.33 (1.11) |
| Pop. % White (county-level) | 69 | -1.99 (1.48) | -1.15 (1.48) | -1.32 (1.47) | -0.23 (1.47) | -1.67 (1.48) |
| Pop. % rural (county-level) | 21 | -2.19 (1.51) | -1.29 (1.51) | -0.69 (1.50) | 0.61 (1.50) | -0.43 (1.51) |
| Department size (# of employees) | | | | | | |
| Total employees | 128 | -1 (43) | -13 (43) | 48 (43) | -22 (43) | -8 (43) |
| Total officers | 104 | -1 (36) | -12 (36) | 40 (36) | -22 (36) | -9 (36) |
| Total civilian employees | 24 | 0 (8) | -1 (8) | 8 (8) | 0 (8) | 1 (8) |

This table compares mean values of geographic and police-departmental characteristics across the different identities. For each variable, column (1) displays the mean value for departments that received emails from White male identities. Columns (2) through (6) show the difference between the value in column (1) and the mean value for each of the other five identities. For example, the average county median income for departments that received emails from White male identities is $66,700; the average for departments that received emails from Black female identities is $100 lower, with a standard error of $1,300.
The absence of strongly statistically significant differences across the table indicates that these characteristics are not correlated with treatment assignment. 117 departments served local populations < 7,500. Department-size data were missing for 29 departments, and department-size data for the year 2020 were missing for an additional 231 departments. 148 observations were missing the median income of Black households, and 55 observations were missing the median income of Hispanic households, in each department's county. Standard errors in parentheses. Signif. codes: *: 0.1.

Table 5. Emails Categorized by Outcome

| Email Outcome | Total | Percent of Total |
|---|---|---|
| Single Response | 1,226 | 57.45% |
| Multiple Response | 189 | 8.86% |
| No Response | 682 | 31.96% |
| Denied | 15 | 0.70% |
| Failed | 22 | 1.03% |
| Sent | 2,134 | 100.00% |

I categorize each email by its outcome. The results show an overall response rate of 66.3% (57.45 + 8.86). Thirty-seven emails were undelivered, either because the police department's address was incorrect (Failed) or because police departments blocked the emails (Denied). The response rate is slightly higher (67.5%) if I drop the 37 undeliverable emails from the calculation. Of the 1,415 departments that responded, 189 sent multiple emails.

Table 6. Response Rate Differences by Race and Gender

Dependent variable: Response

| Model: | (1) | (2) | (3) | (4) |
|---|---|---|---|---|
| Black | -0.1042*** (0.0310) | -0.1041*** (0.0285) | | |
| Hispanic | -0.1066*** (0.0261) | -0.1138*** (0.0148) | | |
| Female | | | 0.0226 (0.0315) | 0.0035 (0.0319) |
| Week fixed effects | Yes | Yes | Yes | Yes |
| State fixed effects | Yes | Yes | Yes | Yes |
| Weights | None | Sqrt of local pop. | None | Sqrt of local pop. |
| Observations | 2,095 | 2,095 | 2,095 | 2,095 |
| R² | 0.05993 | 0.06897 | 0.04937 | 0.05700 |
| Within R² | 0.01171 | 0.01271 | 0.00061 | 0.00001 |

Clustered (week & state) standard errors in parentheses. Signif. codes: ***: 0.01, **: 0.05, *: 0.1. I compare differences in response rates across races and genders, unweighted and weighted by local population. Black and Hispanic identities were less likely than White identities to receive responses from police departments: Black and Hispanic response rates were, respectively, 10.42 pp and 10.66 pp lower than the White response rate (74.82%), both significant at the 1% level. Females were marginally more likely, by 2.23 pp, to receive responses than males (66.02%).

Table 7. Response Rate Differences for Race and Gender Interactions

Dependent variable: Response. Reference group mean (White male): 0.7578.

| Model: | (1) | (2) |
|---|---|---|
| White × Female | -0.0285 (0.0479) | -0.0650 (0.0550) |
| Hispanic × Male | -0.1500*** (0.0354) | -0.1465*** (0.0408) |
| Hispanic × Female | -0.0928* (0.0423) | -0.1391** (0.0515) |
| Black × Male | -0.1394*** (0.0377) | -0.1671*** (0.0459) |
| Black × Female | -0.0970** (0.0397) | -0.0996*** (0.0243) |
| Weights | Standard OLS | Sqrt of local pop. |
| Week fixed effects | Yes | Yes |
| State fixed effects | Yes | Yes |
| Observations | 2,095 | 1,979 |
| R² | 0.06214 | 0.07374 |
| Within R² | 0.01404 | 0.01526 |

Clustered (week & state) standard errors in parentheses. Signif. codes: ***: 0.01, **: 0.05, *: 0.1. Results display the difference in response rate for each identity compared to White males. The White male response rate of 75.78% is the highest, and I use it as the reference group. Model (2) has the same specification as Model (1) but weights observations by the size of each department's local population. Black and Hispanic males were the least likely to receive responses from police departments. Black and Hispanic females also have lower response rates than White males, but the magnitudes and statistical significance of the estimates are smaller than for their male counterparts.
White females are marginally less likely to receive responses than White males, but the difference is not statistically significant.

Table 8. Department Size Heterogeneity

Dependent variable: Response

| Model: | (1) |
|---|---|
| White × Female | -0.0008 (0.0568) |
| Hispanic × Male | -0.1808*** (0.0406) |
| Hispanic × Female | -0.1125* (0.0503) |
| Black × Male | -0.1070** (0.0350) |
| Black × Female | -0.1068* (0.0526) |
| White × Female × Big agency | -0.0543 (0.0473) |
| Hispanic × Male × Big agency | 0.0723** (0.0272) |
| Hispanic × Female × Big agency | 0.0503 (0.0652) |
| Black × Male × Big agency | -0.0697 (0.0649) |
| Black × Female × Big agency | 0.0211 (0.0447) |
| Week fixed effects | Yes |
| State fixed effects | Yes |
| Observations | 2,066 |
| R² | 0.06669 |
| Within R² | 0.01700 |

Clustered (week & state) standard errors in parentheses. Signif. codes: ***: 0.01, **: 0.05, *: 0.1. Notes: I create a binary indicator for department size, split at the median department size, and compare response rates for each identity to the White male response rate. Black males and White females have comparatively lower response rates when interacting with "bigger" departments. In contrast, Hispanic males and females, and Black females, have comparatively higher response rates when interacting with "bigger" departments. The only statistically different response rate across department sizes is for Hispanic males.

Figure 1. Department Selection Heat Map. [U.S. state heat map; original figure not reproduced here.] The figure portrays the proportion of each state's population represented by the local police departments I contacted. To calculate the proportion, I summed the local populations of each included local police department in a state and then divided that sum by the state's total population. The state with the lowest representation was Kentucky (≈11%), and the state with the highest representation was Texas (≈77%).

Figure 2. Response Time Summary. [Histogram of responses binned by response time; original figure not reproduced here.] I bin the number of responses by response time. I received most of the responses (809) within three hours of initial contact, and 97% of all responses within 48 hours of contacting the departments. I received two emails outside the four-week window and recorded these as non-responses, because I deemed four weeks too long a wait to be considered helpful.

CHAPTER III

PUBLIC PREFERENCES FOR A STATE-LEVEL CARBON CAP-AND-TRADE PROGRAM

The survey described in this chapter was developed by Trudy Ann Cameron and me, with helpful input from family, friends, and colleagues. We report upon the results from a set of survey-based choice experiments designed to assess state-level demand for carbon cap-and-trade programs with different attributes. The evidence from 1,050 respondents confirms that these state-level preferences are strongly heterogeneous with respect to political ideologies and opinions about climate change. Our models allow us to calculate the implied social benefits of carbon emissions reductions. Our social-benefits-of-carbon-reduction (SBC) measure complements existing measures of climate mitigation benefits based on the social cost of carbon (the SCC, which is an avoided-cost measure, as opposed to a willingness-to-pay measure).
Willingness to bear the household costs of a cap-and-trade program is affected not only by the extent of the carbon emissions reductions the program would provide, but also by the changes in the number of jobs in carbon-intensive industries and in "green" industries. We estimate the marginal rate of substitution between "carbon" jobs and "green" jobs for different preference classes. There is heterogeneity in the extent to which the share of permits auctioned, or the uses of auction revenue, affect demand. There is also evidence about the extent to which people would prefer a program that includes additional regulations to limit co-pollutant emissions by firms that buy carbon permits to cover increased carbon emissions. Methodologically, we account for systematic sample selection of respondents relative to the quota-driven sample of invitees from our internet panel, and we net out the effects of earlier randomized choice scenarios on later choices by the same respondent.

3.1 Introduction

Even as the destructive effects attributed to climate change intensify (IPCC, 2018, 2021), the United States remains polarized about climate change policy (Egan and Mullin, 2017). Optimal design and successful implementation of carbon mitigation policy have proven to be exceedingly difficult and are a subject of cardinal importance (Aldy and Pizer, 2009). Carbon pricing is often touted as the economically efficient and potentially politically feasible solution to the carbon emissions crisis (e.g., Metcalf, 2009; Y. Chen and Hafstead, 2019). Carbon cap-and-trade programs have thus gained traction as a leading tool for climate change mitigation (Newell, Pizer, and Raimi, 2014; Raymond, 2019). Regional carbon cap-and-trade programs have been adopted, but the United States has yet to launch such a program at the federal level (see Schmalensee and Stavins, 2017, for discussion). The increasingly urgent need for mitigation policy, along with federal inaction, may necessitate that regional coalitions or individual states, beyond just California, implement policies at a sub-federal level (Fullerton and Karney, 2018).

In 2019 and 2020, Oregon's legislature twice attempted, unsuccessfully, to create a carbon cap-and-trade program. Oregon's attempts and failures to launch such a program highlight the complicated and contentious political, environmental, and social concerns surrounding environmental regulation (Farber, 2012; Deryugina, Fullerton, and Pizer, 2019; Fowlie, Walker, and Wooley, 2020). Successful passage of a carbon cap-and-trade program in Oregon, and in other states, will rely on the extent to which policymakers understand population preferences for a number of key program attributes.

Lack of support for carbon cap-and-trade programs could be the product of a number of different factors. For instance, conservative politicians who oppose many types of regulations have employed the phrase "job-killing regulations" to sour the public against programs like cap-and-trade, despite a lack of conclusive evidence from existing programs (Coglianese, Finkel, and Carrigan, 2013). Additionally, market-based carbon emission solutions, like cap-and-trade, have been criticized for their inattention to distributional impacts (e.g., Fullerton and Muehlegger, 2019; Goulder, Hafstead, Kim, and Long, 2019; W. A. Pizer and Sexton, 2019; Feger and Radulescu, 2020).
A number of papers in the literature have found evidence that carbon pricing can be regressive (e.g., Burtraw, Sweeney, and Walls, 2009; C. A. Grainger and Kolstad, 2010; Moz-Christofoletti and Pereda, 2021).1 Carbon cap-and-trade programs also raise environmental justice concerns with respect to their distributional effects (Kaswan, 2008; Farber, 2012). To date, these theoretically possible concerns have been largely unsubstantiated (e.g., Fowlie, Holland, and Mansur, 2012; C. M. Anderson, Kissel, Field, and Mach, 2018; Hernandez-Cortes and Meng, 2020). They remain an impediment nonetheless.

1 Carbon pricing policies also have the potential to be regressive in their benefits (Fullerton, 2011).

The biggest obstacle to public support may simply be partisanship. A review of climate change opinion surveys in the United States by Egan and Mullin (2017) finds not only that partisanship is the paramount driver of support for carbon-reduction policies, but also that the gap between Republicans and Democrats has become even more pronounced in recent years. This assessment has been corroborated using revealed-preference precinct-level voting data (e.g., S. Anderson, Marinescu, and Shor, 2019). This partisan division appears to be due, at least in part, to a considerable effort by corporations to foster opposition to climate policies through misinformation campaigns (Farrell, 2016; Westervelt, 2018). A better understanding of the factors that affect willingness to pursue these programs is necessary if policymakers are to design a carbon cap-and-trade program that has adequate public support.

In this study, we conduct a stated-preference survey to measure individual preferences for key attributes of carbon cap-and-trade programs. Using randomized choice experiments in an online survey, we collect a quota-based sample of about 1,000 Oregonians. Each survey asks respondents to consider six unique cap-and-trade programs. In each choice scenario, the respondent can cast an advisory vote in favor of the program or in favor of keeping the status quo (i.e., no program). We then use these six votes per person to estimate a random utility model. The marginal utilities estimated for this model allow us to calculate individuals' marginal willingness to pay (MWTP) for various program attributes. Importantly, the survey also collects sociodemographic information about each respondent. This enables us to infer differences in MWTP for programs with different attributes across different sociodemographic groups (e.g., income levels, zip codes, and political ideologies). Another advantage of our study is that we are able to measure key sociodemographic characteristics for quota-screened and eligible respondents who subsequently drop out of the survey, which allows us to undertake systematic sample-selection correction.2

2 The survey screens potential respondents against quotas, so we learn the age, race, gender, income level, and zip code of every respondent before the respondent learns the topic of the survey.

The present study contributes to a number of different veins of research in the broader literature.3 First, we contribute to the growing literature that has used stated-preference studies to understand public preferences for carbon pricing policies. Carattini, Carvalho, and Fankhauser (2018) provide a survey of recent stated-preference work aimed at understanding public opposition to carbon pricing.4
3 Appendix B.1 provides more-detailed reviews of other papers in the related literature. Here, we merely summarize these papers by group.
4 Other examples of stated-preference studies that focus on preferences regarding carbon pricing include Berrens, Bohara, Jenkins-Smith, Silva, and Weimer (2004), Aldy, Kotchen, and Leiserowitz (2012), Kotchen, Boyle, and Leiserowitz (2013), Duan, Lü, and Li (2014), Yang, Zou, Lin, Wu, and Wang (2014), Gevrek and Uyduranoglu (2015), Raux, Croissant, and Pons (2015), C. Y. Lee and Heo (2016), Tvinnereim, Fløttum, Gjerstad, Johannesson, and Nordø (2017), Li et al. (2019), Rotaris and Danielis (2019), Böhringer et al. (2020), and Daziano, Waygood, Patterson, Feinberg, and Wang (2021).

The bulk of the extant literature focuses on carbon taxes or undefined carbon-reduction policies (Raymond, 2019). For instance, Kotchen, Turk, and Leiserowitz (2017) conducted a survey of Americans to measure WTP for a carbon tax as well as preferences for how the tax revenue would be spent. Their results indicate a substantial mean WTP ($177 per year). Additionally, 80% of the respondents indicated they would be in favor of using the revenue to fund green projects, and 70% were in favor of using the revenue to support a "just transition" for coal workers. Carattini, Baranzini, Thalmann, Varone, and Vöhringer (2017) use a recent ballot initiative for context in a follow-up stated-preference survey. They focus on a carbon tax and find lump-sum redistribution of tax revenue and "social cushioning" to be popular. They also find that including more information improves the acceptability of the policy. Research has found that revenue recycling, and providing the public with tangible public benefits, could significantly improve support for carbon pricing (Amdur, Rabe, and Borick, 2014; Beiser-McGrath and Bernauer, 2019). In a similar vein, some national-level studies have focused on measuring preferences for the distribution of the costs imposed by a climate change policy (e.g., J. J. Lee and Cameron, 2008; Cai, Cameron, and Gerdes, 2010).

Brannlund and Persson (2012) use an internet-based survey in Sweden to measure preferences concerning an unspecified carbon pricing policy. They describe policy alternatives in terms of (a) their development of green tech, (b) their ability to increase climate change awareness, (c) their monthly cost, (d) their distribution of costs, and (e) their geographic distribution of carbon reductions. They find that their respondents prefer policies that (a) are progressive, (b) have lower costs, and (c) raise awareness of climate change. On the other hand, Baranzini and Carattini (2017) conduct a qualitative-quantitative hybrid survey and find that individuals are more concerned about the environmental effectiveness of a carbon tax than with the distributional challenges that result or the potential effects of the tax on firm competitiveness. It is likely that public preferences for carbon policies differ dramatically across contexts and geography.

In general, studies have avoided asking the public about a detailed carbon cap-and-trade program (e.g., Alberini, Ščasný, and Bigano, 2018). However, a few studies have asked respondents about cap-and-trade programs. For instance, Kotchen et al. (2013) conducted a stated-preference survey that measured WTP for carbon reduction across different policy instruments (e.g., a
carbon tax, cap-and-trade, and a "policy to regulate carbon dioxide as a pollutant") and find no preference across instruments. However, their survey does relatively little to inform respondents about the various differences between these policies. Choi, Gössling, and Ritchie (2018) and Baranzini, Borzykowski, and Carattini (2018) conduct surveys that focus on "offset" preferences, where offsets are one aspect of many carbon cap-and-trade programs. To the best of our knowledge, the present study constitutes the most detailed analysis of preferences for the different possible attributes of alternative carbon cap-and-trade programs. Considering the leading role that carbon cap-and-trade programs are presently taking in climate change policy, this is an important contribution.

We also contribute to the body of work that measures public opinion regarding carbon policy at the state level (e.g., Holian and Kahn, 2015; Burkhardt and Chan, 2017; S. Anderson et al., 2019). S. Anderson et al. (2019) use voting data from two failed carbon tax bills in Washington State and find that political party affiliation is by far the biggest indicator of support or opposition to the policies, with political ideology accounting for 91% of the variation in vote shares across precincts. This finding is consistent with the broad review of climate change opinion surveys by Egan and Mullin (2017). While we rely on hypothetical advisory votes, we advance these studies by using individual-level data rather than precinct-level data. This involves less measurement error than would be involved with the use of precinct-level averages as a proxy for individual characteristics.

Our study has the added benefit of measuring climate change policy opinions in a state (Oregon) that has recently experienced an onslaught of extreme temperatures and serious wildfires that are most likely attributable, at least in part, to the effects of climate change on long-term drought conditions. Using zip code information, we can match respondents to their local context, including local weather-related conditions, recent drought conditions, and wildfire exposure. This permits us to estimate the effects of exposure to some likely consequences of climate change on respondents' preferences for climate-change mitigation policy. This aspect of our analysis complements other work measuring the exposure effect (e.g., Spence, Poortinga, Butler, and Pidgeon, 2011; Bain, Hornsey, Bongiorno, and Jeffries, 2012; Scannell and Gifford, 2013).

3.2 Basic Choice Model

3.2.1 Homogeneous preferences.
Our choice experiments describe each potential cap-and-trade program in terms of nine attributes: (1) the monthly cost per household; (2) the percent change in carbon emissions (all negative); (3) the percent change in carbon-industry jobs (all negative); (4) the percent change in green-industry jobs (all positive); (5) the percent of carbon emissions permits that would be auctioned to firms, rather than given away for free; (6) the percent of auction revenue that would be used to fund equipment/machinery as the economy adapts to carbon pricing; (7) the percent of auction revenue that would be used to help displaced workers or affected communities adapt to carbon pricing; (8) the percent of auction revenue that would be placed into the state's General Fund to replace existing taxes; and, finally, (9) whether the program would include additional regulations on other co-pollutants, to prevent their levels from increasing around firms that may buy enough carbon permits to allow their emissions of carbon (and other co-pollutants) to increase.

Our model is based on the indirect utility of respondent i under carbon cap-and-trade Program A. Similar to the usual specification for a public policy choice model in the stated-preference literature, our simplest model is linear and additively separable in the attributes of each program:

$$V_i^A = \beta_1 (Y_i - C_i^A) + \beta_2 (\%\Delta \text{carbon emissions})_i^A + \beta_3 (\%\Delta \text{carbon jobs})_i^A + \beta_4 (\%\Delta \text{green jobs})_i^A + \beta_5 (\%\text{permits auctioned})_i^A + \beta_6 (\%\text{revenue to equip})_i^A + \beta_7 (\%\text{revenue to workers})_i^A + \beta_8 \, \mathbb{1}(\text{pollution regs})_i^A + \eta_i^A \qquad (3.1)$$

Since the three percentages of auction revenue sum to 100, we designate the percent of permit auction revenue going to the state's General Fund as the omitted category, with this share determined by the percentages not destined for the other two uses. Indirect utility for respondent i under the status quo, $V_i^N$, involves no policy and therefore no decrease in household income, no carbon reduction, no changes in the numbers of carbon jobs or green jobs in the respondent's county, no auctioned permits and therefore no permit auction revenue to spend on equipment, workers, or tax relief, and no extra regulations to prevent increases in local pollution. However, most researchers allow for some "inertia" associated with the status quo and include a status quo indicator variable, $SQ^N = 1$ for the No Program alternative, where $SQ^A = 0$ (implicitly) in equation (3.1). Utility for the No Program alternative is then given simply by:

$$V_i^N = \beta_1 Y_i + \beta_9 SQ^N + \eta_i^N \qquad (3.2)$$

since none of the features of Program A will be experienced.

Each respondent's choice between Program A and the status quo is determined by whether Program A yields greater utility. Let $\Delta V_i^A = V_i^A - V_i^N$ be the difference in indirect utilities for respondent i between Program A and the status quo option N, so that Program A is chosen if and only if $V_i^A \geq V_i^N$, or $\Delta V_i^A \geq 0$. Individual i's baseline level of income drops out, so our simplest linear-in-variables econometric specification is as follows:

$$\Delta V_i^A = \beta_1 (-C_i^A) + \beta_2 (\%\Delta \text{carbon emissions})_i^A + \beta_3 (\%\Delta \text{carbon jobs})_i^A + \beta_4 (\%\Delta \text{green jobs})_i^A + \beta_5 (\%\text{permits auctioned})_i^A + \beta_6 (\%\text{revenue to equip})_i^A + \beta_7 (\%\text{revenue to workers})_i^A + \beta_8 \, \mathbb{1}(\text{pollution regs})_i^A - \beta_9 SQ^N + \varepsilon_i^A \qquad (3.3)$$

where $\varepsilon_i^A = \eta_i^A - \eta_i^N$ is a mean-zero error term. The respondent is presumed to know their true utility for a specific program, to vote for Program A if the utility difference $\Delta V_i^A$ is positive, and to vote against Program A if the utility difference is negative.
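For clarity, equation (3.3) can be transcribed directly into code. The sketch below is illustrative only (the attribute names and coefficient dictionary are ours, not the chapter's implementation) and evaluates the utility difference for one program:

```python
# A literal transcription of the utility difference in equation (3.3)
# for one program relative to the No Program alternative.
from dataclasses import dataclass

@dataclass
class Program:
    cost: float             # monthly household cost, C_i^A
    d_emissions: float      # % change in carbon emissions (negative)
    d_carbon_jobs: float    # % change in carbon-industry jobs (negative)
    d_green_jobs: float     # % change in green-industry jobs (positive)
    pct_auctioned: float    # % of permits auctioned
    pct_rev_equip: float    # % of auction revenue to equipment
    pct_rev_workers: float  # % of auction revenue to workers/communities
    pollution_regs: bool    # additional co-pollutant regulations?

def delta_v(p: Program, beta: dict) -> float:
    """Utility difference for Program A versus the status quo, eq. (3.3)."""
    return (beta["b1"] * (-p.cost)
            + beta["b2"] * p.d_emissions
            + beta["b3"] * p.d_carbon_jobs
            + beta["b4"] * p.d_green_jobs
            + beta["b5"] * p.pct_auctioned
            + beta["b6"] * p.pct_rev_equip
            + beta["b7"] * p.pct_rev_workers
            + beta["b8"] * float(p.pollution_regs)
            - beta["b9"])  # -beta_9 * SQ^N, with SQ^N = 1 for No Program

# The respondent votes for the program whenever delta_v(...) > 0.
```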
The researcher, however, does not observe ε_i^A. The assumed distribution for this random error term determines the functional form of the log-likelihood function used to estimate the preference parameter vector (β_1, ..., β_9) in equation (3.3) via maximum likelihood methods. If each individual in the sample is presented with t = 1, ..., T choices, the joint probability (likelihood) function for this conditional logit model is given by

  L(β) = ∏_{i=1}^{N} ∏_{t=1}^{T} ∏_{j=1}^{J} [ exp(V_i^{jt}) / ∑_{k=1}^{J} exp(V_i^{kt}) ]^{y_{itj}},   (3.4)

where y_{itj} = 1 if alternative j is chosen and is zero otherwise. To yield a unique set of parameter estimates, it is necessary to normalize by differencing utility relative to a numeraire alternative (the No Program option, in our case). For the numeraire alternative, the utility difference is zero, so that term in the ratio becomes exp(0) = 1:

  L(β) = ∏_{i=1}^{N} ∏_{t=1}^{T} ∏_{j=1}^{J} [ exp(∆V_i^{jt}) / (1 + ∑_{k=1}^{J−1} exp(∆V_i^{kt})) ]^{y_{itj}}.   (3.5)

In our data, there are only two alternatives in each choice set, but T = 6 choice sets: j_1 = (A,N), j_2 = (B,N), j_3 = (C,N), j_4 = (D,N), j_5 = (E,N), j_6 = (F,N). Thus the function is even simpler for each choice occasion when J = 2 (a binary conditional logit model):5

  L(β) = ∏_{i=1}^{N} ∏_{t=1}^{T} ∏_{j=1}^{J} [ exp(∆V_i^{jt}) / (1 + exp(∆V_i^{jt})) ]^{y_{itj}}.   (3.6)

5 The model is only slightly more complex if there is more than one choice set for each individual and we wish to accommodate similarities among the choices for any given individual.

The fitted model can be used to solve for each respondent's marginal willingness to pay (WTP) for each of the policy attributes described in our choice tasks. For the homogeneous-preferences specification in equation (3.6), we set the utility difference, ∆V_i^A, equal to zero and solve for the program cost, C_i^{A*}, that would make the representative individual indifferent between paying the cost and enjoying the benefits of the program, or keeping the money and doing without the program. This maximum total willingness to pay for a policy with a specified set of characteristics is given by:

  ŴTP_i^A = C_i^{A*} = (β̂_2/β̂_1)(%∆ carbon emissions)_i^A + (β̂_3/β̂_1)(%∆ carbon jobs)_i^A
          + (β̂_4/β̂_1)(%∆ green jobs)_i^A + (β̂_5/β̂_1)(% permits auctioned)_i^A
          + (β̂_6/β̂_1)(% revenue to equip)_i^A + (β̂_7/β̂_1)(% revenue to workers)_i^A
          + (β̂_8/β̂_1) · 1(pollution regs)_i^A − (β̂_9/β̂_1)(−1),   (3.7)

where, in the difference between any program and the status quo, ∆SQ^N = −1 for any active program. We take advantage of the mean-zero error term and predict WTP_i^A at the mean error.6

6 The negative sign on the cost term in equation (3.6) means that after we set the utility difference to zero and subtract the cost term from both sides, the negative sign conveniently cancels.

The parameters in this random utility model, when estimated by maximum likelihood, are distributed asymptotically joint normal (typically with non-zero covariances). Willingness-to-pay estimates therefore rely on one of several methods for accommodating the fact that the ratio of two normally distributed random variables has a mean that is undefined. Researchers often use the Krinsky and Robb (1986) parametric bootstrap simulation method, which relies on a large number of random draws from the joint distribution of the estimated parameter vector, with each draw being used to calculate marginal WTP for a given attribute, or total WTP (TWTP) for a specified program, according to the formula in equation (3.7). Over the large number of draws, a sampling distribution is built up for ŴTP.
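The Krinsky-Robb procedure itself is straightforward to sketch. In the following minimal Python example, the point estimates and their covariance matrix are hypothetical placeholders, not our estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ML estimates for (beta_1, beta_2) and their asymptotic covariance.
# Illustrative numbers only, not estimates from this chapter.
b_hat = np.array([0.02, -0.8])          # beta_1 (cost), beta_2 (emissions)
V_hat = np.array([[1e-5, -2e-5],
                  [-2e-5, 4e-3]])       # asymptotic covariance of the estimates

# Krinsky-Robb: draw from the asymptotic joint normal of the estimates, compute the
# WTP ratio for each draw, then summarize the simulated sampling distribution.
draws = rng.multivariate_normal(b_hat, V_hat, size=10_000)
mwtp = draws[:, 1] / draws[:, 0]        # beta_2 / beta_1, as in equation (3.7)

# Percentiles are robust even though the ratio's mean is undefined (draws of
# beta_1 near zero produce extreme ratios in the tails).
lo, med, hi = np.percentile(mwtp, [2.5, 50, 97.5])
print(f"median MWTP = {med:.1f}, 95% interval = [{lo:.1f}, {hi:.1f}]")
```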
Descriptive statistics for the distribution of individual WTP function coefficients, or for total WTP, are calculated across all these random draws. Researchers typically use the mean and median, and the 90 or 95% interval, which function as point estimates and an interval measure for predicted marginal WTP for each program attribute in equation (3.7), or for total WTP.7

7 Some alternative strategies for deriving interval estimates for either some marginal WTP or for TWTP include the delta method and Fieller's method. Hole (2007) developed an add-in for Stata (wtp.ado) that can calculate marginal WTP measures in clogit models that are linear and additively separable. Another Stata add-in (wtpcikr) supports models that are either linear or logarithmic in the additively separable net income variable and can be used following probit, logit or bivariate probit commands.

Marginal willingnesses to pay for each program attribute are simply one type of marginal rate of substitution that can be estimated using this choice model. Marginal rates of substitution between attributes A and B are given by the negative of the ratio of the marginal utilities for each attribute.8

8 For the WTP calculations, cost C_i^A is technically the attribute of the program, but we model net income as the factor that drives utility for the consumer. The marginal rate of substitution between program attributes and net income (Y_i − C_i^A) is positive, although the marginal rate of substitution between program attributes and cost itself (C_i^A, a "bad") would be negative, since more of some desirable attribute would be required to make up for a higher cost.

Equation (3.6) can likewise be used to quantify other types of tradeoffs that people are willing to make in considering carbon cap-and-trade policies, not just the tradeoff between income and the other attributes of the program. For example, another politically important feature of all these cap-and-trade programs is that they result in a loss of jobs in the carbon-intensive sector. Rather than thinking about willingness to pay (i.e. willingness to put up with higher costs for more of each other attribute), we can instead think about willingness to incur job losses in the carbon sector to enjoy the benefits of the policy. We could again set the utility difference equal to zero and solve for the (% carbon jobs lost)_i^A that would make people just indifferent between getting the carbon cap-and-trade program and its carbon-reduction benefits, along with those job losses, or forgoing the program and protecting those jobs. Willingness to swap (WTS) carbon-intensive jobs for a carbon-reduction program with a given set of other attributes could be calculated as:

  WTS(%∆ carbon jobs)_i^A = (β̂_1/β̂_3) C_i^A − (β̂_2/β̂_3)(%∆ carbon emissions)_i^A
         − (β̂_4/β̂_3)(%∆ green jobs)_i^A − (β̂_5/β̂_3)(% permits auctioned)_i^A
         − (β̂_6/β̂_3)(% revenue to equip)_i^A − (β̂_7/β̂_3)(% revenue to workers)_i^A
         − (β̂_8/β̂_3) · 1(pollution regs)_i^A.

It can be challenging to keep track of the signs on the variables in the data and how these interact with the signs on the marginal utilities that are specifically associated with increases in these variables. For example, all cap-and-trade programs offered in the survey have (%∆ carbon emissions)_i^A < 0 and (%∆ carbon jobs)_i^A < 0, but they likewise have (%∆ green jobs)_i^A > 0. The marginal utility of net income, β_1, is expected to be positive.
The marginal utility for an increase in carbon emissions, β_2, should be negative, at least if the average person is concerned about the negative effects of climate change. The marginal utilities of (increases in) carbon-intensive jobs and green-industry jobs, β_3 and β_4, should both be positive (assuming jobs are good). We have no priors, however, about the signs of the coefficients on the other program attributes: β_5, β_6, β_7, β_8 and β_9. Preferences about the ways in which the programs could be implemented, and therefore the possible distributional consequences of these programs, remain an empirical question, and these preferences may be heterogeneous within the sample.

Note that, using intuition analogous to that for marginal WTP estimates, the WTS expression above allows us to determine, for example, what percent of the jobs in carbon-intensive industries in their county people would be willing to give up to achieve a given percent reduction in carbon emissions, holding all other program characteristics constant. This is an elasticity-type measure. Likewise, we can use this expression to determine what percent of the jobs in carbon-intensive industries in their county people would be willing to give up to get a given percent increase in the number of green jobs in their county via a cap-and-trade program, holding all other program attributes constant.

Finally, there is also an analog to the concept of a total willingness to pay. If we are given the cost and effectiveness of a carbon cap-and-trade program, along with the percent increase in green jobs that would result, as well as the five variables that define how the program would be implemented, the WTS expression can tell us what percentage of jobs in carbon-intensive industries people would be willing to sacrifice, in that context, to achieve a one percent decrease in carbon emissions.9

9 A bootstrap approach, or the delta method, or Fieller's method would likewise need to be used to provide point and interval estimates for this percentage, since we are still dealing with ratios of utility parameters estimated by maximum likelihood.

3.2.2 Heterogeneous preferences.

3.2.2.1 Mixed logit models. Mixed logit models are continuous mixture models. The mixed logit model starts from a homogeneous-preferences specification as in equation (3.6). However, instead of assuming that each marginal utility parameter is a true but unknown constant, identical for everyone in the sample, the mixed logit allows some or all of the marginal utilities to have a distribution across the population. An explicit functional form must be selected for the assumed distribution of each parameter, and the goal of estimation shifts to estimation of the central tendency and dispersion of these parameters in the population. Instead of just estimating the expected value of each marginal utility, therefore, we typically estimate both the mean and the standard deviation of the distribution of that marginal utility across the population. These distributions accommodate "unobserved heterogeneity" in preferences. We do not seek to attribute this heterogeneity to any specific, observable respondent characteristics. Instead, we merely permit heterogeneity in preferences to exist. Mixed logit probabilities are the integrals of ordinary conditional logit probabilities, with the integrals taken over the density function for the random parameters in the model.
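Before formalizing this, it may help to see the mixture approximated by simulation. The following minimal Python sketch, with a single random coefficient and purely illustrative values (anticipating the normal mixing distribution discussed below), averages conditional logit probabilities over draws from the assumed mixing distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mixing distribution for one random coefficient (illustrative only):
# beta_emissions ~ Normal(mean = -0.8, sd = 0.5); the cost coefficient is kept fixed.
b_cost, b_em_mean, b_em_sd = 0.02, -0.8, 0.5

cost, d_emissions = 40.0, -0.40   # one hypothetical program (vs. the status quo)
sq_const = 0.5                    # status quo ("inertia") coefficient

# Approximate the mixed-logit probability: average the conditional logit
# probability over R draws from the mixing distribution f(beta).
R = 50_000
b_em = rng.normal(b_em_mean, b_em_sd, R)
dV = b_cost * (-cost) + b_em * d_emissions - sq_const
p_mixed = np.mean(1.0 / (1.0 + np.exp(-dV)))
print(f"simulated mixed-logit P(choose program) = {p_mixed:.3f}")
```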
Suppose we begin with the conditional logit choice probabilities that individual i will select alternative j, where we now denote this logit probability as L_{ij}. We normalize on alternative J, to permit unique estimates of the parameters β:

  L_{ij}(β) = exp(∆V_i^{jt}) / (1 + ∑_{k=1}^{J−1} exp(∆V_i^{kt})).   (3.8)

The mixed-logit probability that individual i will select alternative j is given by:

  P_{ij}(β) = ∫ L_{ij}(β) f(β) dβ,   (3.9)

where f(β) is the density function for the parameter vector that accommodates unobserved heterogeneity in preferences, and the observed portion of utility, V_i^{jt} (which captures at least the baseline attributes of each alternative, as featured in equation (3.1)), is embedded in the L_{ij}(β) term. The mixed logit probability formula can be interpreted as a weighted average of the standard conditional logit formula evaluated at all the different values of the parameter vector β, where the weights are given by the parameter density function f(β), also known as the "mixing distribution." We note that if the mixing distribution is degenerate at a set of fixed β parameters, the mixed logit model collapses to just the standard conditional logit model. If the mixing distribution is discrete, the mixed logit model becomes the latent class model (i.e. a finite mixture model) described in the next section.

Mixed logit estimation algorithms permit the user to choose among a variety of distributions for each parameter. If the parameter vector is assumed to be multivariate normal with mean vector b and covariance matrix W, then the mixed-logit choice probabilities are given by:

  P_{ij} = ∫ [ exp(∆V_i^{jt}) / (1 + ∑_{k=1}^{J−1} exp(∆V_i^{kt})) ] φ(β | b, W) dβ.   (3.10)

A log-normal distribution is sometimes chosen, however, because of its ability to constrain the sign on a parameter. Parameters can be individually noisy but independent, or they can be correlated with one another across individuals in the sample. For example, people with a higher-than-average marginal (dis)utility from losses of jobs in the carbon-intensive sector may also derive lower-than-average marginal utility from gains of jobs in the green sector. These could be older workers with less ability to change careers. Other people in the sample may have a lower-than-average (dis)utility from losses of carbon jobs but a higher-than-average marginal utility from gains in green jobs. These folks could be younger and still flexible in their career choices. A mixed logit with correlated preference parameters permits these more-general forms of heterogeneity in preferences while still permitting us to estimate "average" preferences for use in scaling WTP estimates from a representative sample to the entire population.

In cases where there is more than one choice per individual, there may be some commonality among different choices by the same individual. In estimation of the mixed logit by maximum simulated likelihood methods, it is appropriate to make one draw from the joint distribution of the preference parameters per individual, rather than separate draws for each choice occasion.

Mixed-logit models that allow for unobserved variation in preferences are relatively parsimonious. For our model in equation (3.6), there are eight basic marginal utility parameters. If we allow each of these parameters to be random but independent, the parameter space expands to 16.
If we had enough data to permit all parameters to be random and also correlated across respondents, there would be eight parameter means and eight corresponding parameter standard deviations to estimate, along with the 8(8−1)/2 = 28 off-diagonal elements of the symmetric parameter covariance/correlation matrix.

Random-parameters mixed-logit models, especially when there are repeated choices for each respondent, can greatly improve a researcher's ability to estimate the average preferences in the population, controlling for unobserved heterogeneity. But mixed logit models do not help us identify interesting systematic variation in preferences according to observed respondent characteristics, namely observed preference heterogeneity that may be very important to our understanding of the distributional consequences of a policy, when we are likely to need to understand which groups of people are willing to pay more for these programs (i.e. will derive greater benefits) and which are willing to pay less (i.e. will derive lesser benefits).10

10 For the preliminary results reported in this version of the paper, we use Stata's mixlogit algorithm. For subsequent versions of the paper, we expect to shift to the specialized choice-modeling algorithms offered in Apollo, a software package based on the R language, offered by the Choice Modelling Centre at Leeds University in the UK.

3.2.2.2 Latent class models. Many researchers working with choice data now entertain latent-class models. In these models, it is assumed that respondents' preferences are a finite mixture of a small number of underlying preference types, each with its own vector of preference parameters, β_c, for each preference class c, entering into the expression for ∆V_i^{jt}. Within preference class c, individual i's choice probabilities are given by:

  P_i(β_c) = ∏_{t=1}^{T} ∏_{j=1}^{J} [ exp(∆V_i^{jt}) / (1 + ∑_{k=1}^{J−1} exp(∆V_i^{kt})) ]^{y_{itj}}.   (3.11)

These latent preferences are assumed to be homogeneous within class c, so that only the linear-in-variables preference parameters β_c = (β_1, ..., β_9) would be estimated for each class.

Unlike a split-sample model, however, we do not observe class membership. Instead, preference class membership is only probabilistic. These distinct sets of preference parameters are subsumed in a model that also employs a class membership equation. The respondent's latent class membership probability depends not on the attributes of the different cap-and-trade programs, but exclusively on the individual or neighborhood characteristics of the respondent. Let s_i be a vector of such (typically sociodemographic) characteristics for respondent i or the population in their geographic area (e.g. ZIP code, county, etc.). These respondent characteristics do not differ across alternatives in the program choice tasks, so the class-membership part of the model takes the form of a so-called multinomial logit model, where the probability of belonging to a particular class involves a different set of multinomial logit coefficients for each class (and the coefficients are normalized to zero for an arbitrarily designated numeraire class). The same set of respondent characteristics leads to different probabilities of belonging to each preference class only because the coefficients that multiply these characteristics differ across classes.
Thus if there are C different preference classes, there will be C − 1 sets of coefficients on the vector of respondent or neighborhood characteristics that determine the probabilities of class membership. Then, conditional on class membership, the model embeds a conventional conditional-logit-type choice model for each class that involves homogeneous preferences within that class. The probability that individual i belongs to preference class c is given by the multinomial logit probability:

  π_{ic}(θ_c) = exp(s_i θ_c) / (1 + ∑_{l=1}^{C−1} exp(s_i θ_l)),   (3.12)

where the vector s_i typically includes a constant term and θ_c is a conformable vector of class-membership model coefficients for class c, with θ_C normalized to 0 for identification (i.e. so that the θ vectors are uniquely estimated). The full set of class membership coefficients is then Θ = (θ_1, ..., θ_{C−1}). For the full latent class model, the joint likelihood of individual i's choices will then be:11

  L_i(B, Θ) = ∑_{c=1}^{C} π_{ic}(θ_c) P_i(β_c).   (3.13)

11 For repeated choices by the same individual, the model can be estimated to allow for commonalities in choices within an individual.

Latent class models can be somewhat balky to estimate when there are many respondent characteristics to consider. Suppose there are k_1 individual or neighborhood characteristics for respondents, k_2 carbon cap-and-trade program attributes, and C latent classes are being entertained. Then the parameter space for the model will be on the order of (C − 1) × k_1 + C × k_2. The greatest success generally comes from starting with an extremely parsimonious specification and gradually adding more respondent characteristics (implicitly freeing up their coefficients to be non-zero). For each additional program attribute, C additional parameters are added to the model. For each additional respondent characteristic, C − 1 additional coefficients are added. For each additional class of preferences, k_1 + k_2 more parameters must be estimated.12

12 For the results presented in this version of the paper, we use Stata's lclogit2 algorithm to search for a set of parameter estimates that brings us close to the maximum likelihood solution but does not produce a parameter variance-covariance matrix. These estimates are then used as starting values for the follow-on algorithm, lclogitml2, to attain the maximum likelihood solution and produce the parameter variance-covariance matrix needed for hypothesis testing and for the calculation of WTP estimates. In future versions of the paper, we expect to use Apollo's R-based algorithms.

3.2.2.3 Preferences that vary systematically with observable respondent characteristics. When the researcher has a wealth of information about respondent characteristics, it is possible to estimate models where each marginal utility parameter in the choice model is permitted to vary systematically as a function of these characteristics, as indicated by the data. We reserve for future extensions of this paper a set of specifications that accommodate this "observable" preference heterogeneity via interaction terms between program attributes and respondent characteristics, as selected by LASSO methods. Specifications such as these can be very useful in helping to establish the so-called construct validity of the preference estimates. If marginal utilities vary in ways one would expect across people with different characteristics, these relationships add credence to the empirical results.

3.3 Outline of Survey and Data

3.3.1 Sketch of the survey instrument.
Our survey was initially drafted in the lead-up to the consideration of Oregon House Bill 2020 in the winter of 2019, but the state legislature's vote on that bill never took place because the Republican house members left the state to prevent the Democratic members from reaching a quorum. We thus shelved the project for several months, until Oregon Senate Bill 1530 was proposed for the 2020 session, when the same exodus of Republican representatives occurred. After the 2020 legislative session concluded, however, Oregon experienced its worst wildfire season in years, and significant drought conditions persisted. We resolved to redesign the cap-and-trade survey and use it to try to determine which features of a potential cap-and-trade program might account for heterogeneity in support. Casual empiricism suggests that attitudes toward climate change are determined predominantly by partisanship. The goal of our survey, therefore, is to explain some of the options for cap-and-trade program design and to learn whether support for cap-and-trade programs in Oregon varies only with political ideology, or whether there is evidence of systematic differences in support as a function of program attributes, other individual characteristics, or neighborhood characteristics for the respondent.

The survey was developed during the winter and spring of 2021, and the full launch commenced on August 5, 2021. Quota sampling was used to produce a sample of completed responses for which the marginal distributions of age (over 18), gender, race, and household income are consistent with the marginal distributions for these variables in the population of Oregon. The structure of our survey is described in detail in Appendix B.2. One instance of the randomized survey instrument, as it would be viewed by a respondent, is included as Appendix B.3.

3.3.2 Sample Selection. One challenge in using data based on a voluntary survey is the distinct possibility of sample selection bias. If characteristics that determine an individual's propensity to complete the survey also systematically affect their WTP for these programs, then the WTP estimates from a naive model risk being biased. For instance, if people who work in the logging industry are less likely to complete our survey and have a lower WTP for carbon emission reductions, then we will tend to overestimate WTP for carbon emission reductions in the general population of the state. On the other hand, if people in higher income brackets have higher marginal values of their time and are thus less likely to take the survey, and these higher-income people are also more willing to pay the costs of a carbon cap-and-trade program, then we will tend to underestimate the WTP in the general population. Given the highly politicized and socially polarizing nature of climate change policies, we are acutely aware of the necessity of evaluating our data for sample selection bias, and of correcting for this bias if it exists.

To address sample selection, we leverage a set of "screening" sociodemographic variables elicited from all survey invitees, including those who drop out after learning the survey's topic. These variables include self-reported age, gender, race, and income bracket, as well as the potential respondent's ZIP code of residence.
Based on the ZIP code information for the respondent's neighborhood (we asked respondents to report their neighborhood ZIP code if they collected their mail from a PO Box), we merge in a host of external information that can be geographically indexed to ZIP codes or their centroids.13

13 Additionally, a handful of other survey response variables are known for all respondents (e.g. the day the survey was taken and whether the survey was taken on a mobile device).

The external data sets we employed to create profiles of each respondent's neighborhood include: the American Community Survey 5-year ZCTA-level data (2014-2019), the MIT Election Data and Science Lab's County Presidential Election Returns (2020), Oregon State Office Returns for 2016 state legislative district votes (by major party), drought data from the National Drought Mitigation Center, and wildfire data from the Wildland Fire Decision Support System. We create population-proportion sociodemographics (e.g. the share of the population in each ZIP code that has access to the internet) as well as a number of climate-related statistics for each ZIP code (e.g. Drought Monitor rating, or distance to the nearest wildfire in 2020). The ZIP-code-level profiles are constructed to capture political ideologies, salience of climate change, and other sociodemographics that we hypothesize may affect each potential respondent's propensity to continue with the survey, to completion, after learning its topic.

We use a probit model to estimate each eligible potential respondent's propensity to complete the survey. We then allow the fitted propensity to shift all of our estimated preference parameters. This ad hoc correction approach allows us to simulate, counterfactually, what would have been the marginal utility from each program attribute had everyone in the estimating sample been equally as likely to complete the survey as the average for all eligible respondents.

It should be noted that while we correct for sample selection in terms of the willingness of individuals to complete a carbon cap-and-trade survey, another potential type of sample selection remains unavoidable: people who are willing to take an (internet) survey upon an invitation from Qualtrics may not be representative of the general population. While this other form of sample selection is perhaps of lesser concern, it exists nonetheless and is difficult to address.

3.4 Results and Discussion

3.4.1 Sample Selection. As discussed in Section 3.3.2, if the characteristics that determine an invitee's propensity to complete the survey also systematically affect their WTP for these programs (as in the logging-industry and household-income examples given there), then naive WTP estimates will be biased. We therefore use the screening variables elicited from all invitees, merged with the ZIP-code-indexed contextual data described in Section 3.3.2, to model and correct for differential response propensities.14

14 Additionally, a handful of automatically collected survey-context variables are known for all respondents (e.g. the date and time when the survey was accessed and whether the survey was taken on a mobile device).

We explicitly model the propensity for an invited respondent to complete the survey. Ideally, each person would be equally likely to respond, so we take the mean predicted response propensity in the invited group and subtract this from the predicted response propensities of everyone in the respondent sample, creating a variable that would be zero if everyone in the estimating sample were equally likely to complete the survey. In our models, we allow for interactions between this demeaned response propensity, denoted dm:R̂P, and every program attribute.
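The first two steps of this correction can be sketched as follows (Python; the variable names and data-generating process are hypothetical, and our actual model uses the full set of screening and ZCTA-level variables):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5_000  # hypothetical number of survey invitees

# Screening variables observed for ALL invitees; `completed` = 1 if the
# invitee finished the survey. Names and values are illustrative only.
invitees = pd.DataFrame({
    "age": rng.integers(18, 90, n).astype(float),
    "income_bracket": rng.integers(1, 8, n).astype(float),
    "zcta_dem_share": rng.uniform(0.2, 0.8, n),
})
latent = -0.5 + 0.01 * invitees["age"] + 0.8 * invitees["zcta_dem_share"]
invitees["completed"] = (latent + rng.normal(0, 1, n) > 0).astype(int)

# Step 1: probit model of the propensity to complete the survey.
X = sm.add_constant(invitees[["age", "income_bracket", "zcta_dem_share"]])
probit = sm.Probit(invitees["completed"], X).fit(disp=False)
invitees["rp_hat"] = probit.predict(X)

# Step 2: demean the fitted propensity relative to the mean over ALL invitees,
# giving the dm:RP variable that is interacted with every program attribute.
invitees["dm_rp"] = invitees["rp_hat"] - invitees["rp_hat"].mean()

# Respondents (completed == 1) carry dm_rp into the choice model; setting
# dm_rp = 0 in prediction simulates equal response propensities.
print(invitees.loc[invitees["completed"] == 1, "dm_rp"].describe())
```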
We then counterfactually simulate what would be the predicted marginal utilities for each program attribute if everyone had a response propensity equal to the mean among all invited participants (i.e., had dm:R̂P = 0). We use these corrected marginal utilities in calculating the implications of our estimates. For details about our response propensity model, please see Appendix B.4 in the online Supplementary Materials.

3.4.2 Program choice model: Homogeneous preferences. Our choice experiments involve binary choices between one cap-and-trade program and the status quo. Each of our six cap-and-trade programs is described in terms of a common set of attributes. The cost of the program is in dollars per household per month. The benefit from the program is the percentage change in carbon emissions to be achieved with the program (always negative). Other attributes of the program include its consequences in terms of jobs in the respondent's own (named) county: the expected percentage-point change in carbon-intensive industry jobs (always negative), and the expected percentage-point change in green-industry jobs (always positive). Other program attributes include the percentage share of carbon permits that will be auctioned, and, for permit auction revenues, the percentage share that will be spent on equipment and machinery that will help households and industries adapt to a lower-carbon economy, and the percentage share that will be spent to help workers and communities adapt to the new conditions. The remaining percentage share of revenues (the omitted numeraire share) will be added to Oregon's General Fund and used to replace other existing tax revenues.
The final attribute of each program is an indicator for whether there will be new regulations on co-pollutants, to prevent firms that purchase carbon permits from simultaneously increasing their local emissions of conventional pollutants that are not globally uniformly mixing.15

15 See Appendix B.5 for details on how the choice sets were generated for our cap-and-trade program alternatives.

We consolidate the descriptive statistics for all variables used in the different models in this paper into Tables 9 and 10. Panel 1 of Table 9 provides descriptive statistics for the different cap-and-trade programs offered to respondents; each respondent sees a different set of six randomly generated programs, labeled A through F. Our sample of 6,300 programs reflects six program choices for each of 1,050 respondents.

Inferences derived from choice experiments can sometimes be sensitive to the specification of the choice model and its assumed stochastic structure. All of our specifications focus on the nine β parameters appearing in equation (3.6), but we report estimates of these marginal utilities from a set of five different estimating specifications:

1. A simple linear-in-parameters conditional logit choice model that assumes homogeneous preferences but allows a respondent's predicted survey response propensity to shift one of the marginal utilities;
2. A mixed logit model where the marginal utilities for attributes other than program cost are allowed to vary randomly but independently across respondents;
3. A latent class model with two classes where class membership is allowed to vary systematically with a number of respondent characteristics;
4. An alternative latent class model with two classes where (endogenous) political ideologies and attitudes toward climate change influence class membership;
5. A streamlined model where LASSO methods have been employed to select important dimensions of systematic heterogeneity in marginal utilities across a selection of ZCTA-level contextual variables for each respondent. This model can be employed, cautiously, for "benefits transfer" exercises to extend the inferences from our study to states other than Oregon.

For ease of comparing our estimates of the β parameters across specifications, we use Table 11 to report just these basic utility parameters for all five models (along with the log-likelihood value and counts of respondents, programs, and alternatives). For Model (1), of course, these are the only parameters to be estimated. For Models (2), (3), (4), and (5), however, we provide Tables 12, 13, 14 and 15, each reporting the additional parameters unique to that specification (beyond just the basic marginal utility coefficients).

Our simplest program choice specification is a straightforward conditional logit model with homogeneous preferences, shown as Model (1) in Table 11. The respondent's choice between the program being offered and "No Program" depends on all of the program attributes as well as the usual "Status quo" indicator variable. The coefficient on "Status quo" conveys the extent to which respondents are systematically more or less likely to choose "No Program" regardless of the attributes of the particular cap-and-trade program they are being offered.16

16 For ease of interpretation, it will sometimes be convenient to convert this, ex post, to an "Any program" indicator by multiplying both the indicator and its coefficient by -1.

Model (1) suggests that all program attributes other than those related to the auctioning of permits and the use of the resulting revenues have non-zero marginal utilities.
Higher program costs are undesirable, as are higher carbon emissions, but respondents, on average, derive positive utility from any cap-and-trade program to reduce carbon emissions, regardless of its specific attributes. More jobs are desirable, whether they are carbon jobs or green jobs, as are new regulations on other pollutants. However, the restrictions implicitly embodied in this model are rejected by our richer specifications.

We note, however, that Model (1) in Table 11 shows that our ad hoc selection correction term, dm:R̂P_i, has a statistically significant effect on the estimated marginal utility from a proportional change in green jobs in the respondent's county.17 After selection correction, the baseline coefficient on the marginal utility for green jobs increases in magnitude and remains positive and statistically significant at the 5% level. The negative coefficient on the interaction term between the green jobs attribute and the demeaned selection propensity indicates that the more likely a respondent is to participate in the survey, the lower their marginal utility from an increase in green jobs.18

17 An uncorrected model, as well as a model with dm:R̂P_i permitted to shift all of the marginal utility parameters, are explored in Stanford and Cameron (2022). Model (1) involves eight parameter restrictions relative to the fully interacted model, but the maximized log-likelihood for the more-general model increases by only about three points, suggesting that the restrictions embodied in Model (1) reported here cannot be rejected.

18 The sign of this coefficient runs counter to our initial expectation.
We anticipated that people who are more concerned about climate change and more optimistic about green jobs would be more likely to complete our survey, but it seems that responses were also more likely from people who see climate policy, and especially the shift to green jobs, as a threat to their well-being.

3.4.3 Program choices: Heterogeneous preferences.

3.4.3.1 Unobserved heterogeneity: Mixed logit specifications. As noted in Section 3.2, heterogeneous preferences can be modeled in a variety of ways in the analysis of choices. If the researcher has no information beyond just the attributes of the alternatives and the respondent's preferred option in each choice task, the most common way to accommodate heterogeneity in preferences is to explore mixed logit models (i.e., models with random parameters, where each member of the population is assumed to have different marginal utilities for each attribute, and these marginal utilities have specific parametric distributions). A common exception is the cost variable. Benefit-cost analyses in the U.S. typically assume that everyone shares the identical marginal utility of income, which would imply a common marginal disutility of program cost. We follow the convention of constraining the coefficient on the cost variable to be a fixed parameter, rather than a random parameter. But we let all of the other coefficients in our basic model have normal distributions, and we estimate both means and standard deviations for these parameter distributions.

Model (2) in Table 11 allows each marginal utility other than that for the cost variable to be independently normally distributed. In shifting to this model, the mean values of these marginal utilities for most program features increase in magnitude and, in some cases, in significance, but the fixed estimate for the cost coefficient also increases, so the net effects on our eventual willingness-to-pay calculations will be smaller. Among the more fragile coefficients on the auction-related attributes, the mean marginal utility for the share of permits auctioned changes sign but remains statistically insignificant. The marginal utility for the share of auction revenues going to workers/communities becomes significant at the 5% level. If auction revenue is generated by a given cap-and-trade program, there is thus some suggestion that respondents may approve of revenue recycling directed towards workers and communities. The point estimates suggest that revenue recycling toward businesses, for the updating of equipment and machinery, may be viewed less favorably.

The mean marginal utilities for Model (2) differ somewhat from the fixed marginal utilities for Model (1) because Model (2) also estimates standard deviations for these marginal utilities across the sample of respondents. Table 12 gives Model (2)'s additional point estimates and standard errors for these marginal utility standard deviations. The standard deviations of the marginal utilities for several attributes ("Any program," carbon emissions, carbon jobs, the share of auction revenues going to workers, and the presence of new regulations on other pollutants) are all statistically significantly different from zero, pointing to discernible heterogeneity in preferences concerning those attributes.19

19 Stanford and Cameron (2022) also reports a specification where these marginal utilities (other than that for the cost attribute) are also allowed to be correlated across respondents. Some of these 28 correlations are statistically significant. For example, the marginal utilities for the two types of jobs are positively correlated, suggesting that people either care about jobs in general, or do not care as much about jobs. The marginal utility for regulations on other pollutants is positively correlated with the marginal utility from the status quo, suggesting perhaps that climate skeptics do not want a cap-and-trade program, but they may still be concerned about other types of pollution. However, a likelihood ratio test for all 28 of the additional parameter correlations does not reject the model where the parameters are independently distributed, so we report only the model with independently distributed random parameters here.

3.4.3.2 Latent class models. For each of our two different latent class models, only two classes of preferences can be reliably distinguished. Each respondent's characteristics are permitted to explain latent class membership in one submodel that has the structure of a multinomial logit model, and one set of marginal utility parameters for program attributes is estimated for each class of preferences, with these two submodels each having a structure analogous to a conditional logit model. Our first latent class specification, Model (3), allows class membership to be determined by the set of respondent characteristics with descriptive statistics shown in Panel 2 of Table 9.
The two sets of preference parameters are displayed in Table 11, and the coefficient estimates for the single-index class membership model are shown in auxiliary Table 13. Class membership is a function of broad bins for several sociodemographic characteristics: educational attainment, income, gender, age, parenthood status, awareness of forebears,20 and past and expected future duration of residence in Oregon.

20 We included our questions about forebears and descendants to permit hypothesis testing about whether someone who feels a greater degree of connection to the past (i.e. inter-generational awareness) may feel more obligated to support policies that will reduce climate change damages for future generations.

In the two discernible classes of preferences, the positive baseline marginal utility associated with "Any program" shows that Class 1 preferences are generally in favor of cap-and-trade programs, regardless of their attributes, and Class 2 preferences tend to oppose any of these programs. Both preference classes care about the proportional change in carbon emissions (Class 2 somewhat more so). Both classes care about carbon jobs in Model (3), but only Class 1 cares about carbon jobs in Model (4). Class 1 cares about green jobs in both models, but Class 2 does not care about green jobs in either model. The coefficient for the selection correction term also bears a different sign across these two groups.

Given that there is no separately estimated scale factor (error variance) for one of the two sets of preferences, we should be able to compare the marginal utilities for each class. Beyond the sign difference for the "Any program" and selection effects, however, it is not particularly intuitive to compare the two sets of preferences, because they do not share the same estimated marginal utility of income (implied by the negative of the marginal (dis)utility of the cost attribute). In a later section, where we discuss the implications of our estimated models, we report comparable estimates of the trade-offs people willingly make, as implied by different preferences, for all of our different models, so we postpone discussion of these implications until then.

Model (4) in Table 11 also allows two latent classes of preferences, but explores how class membership is related to four sets of indicator variables that classify respondents' political ideologies and attitudes about climate change. Panel 3 of Table 9 gives descriptive statistics for the proportions of the sample that identify with the Democratic Party or the Republican Party (where the omitted category is "Independent or other"). The next dimension is whether their political ideology is "liberal" or "very liberal" or "conservative" or "very conservative" (relative to the omitted category of "moderate"). The next set draws from respondents' answers on a five-point Likert scale concerning whether climate change is real and is a serious threat. Finally, we use answers to a Likert-scale question about whether climate change is human-caused (where we combine the two "disagree" categories due to low numbers). We chose these four dimensions of political ideology or attitudes about climate change because casual empiricism suggests that these factors have been extremely important with respect to Oregon's recent experience with legislation concerning potential carbon cap-and-trade programs. The marginal utility parameters for the two latent classes of preferences in Model (4) are given in Table 11.
The negative status quo effect reveals that Class 1 finds cap-and-trade programs to be desirable in general, whereas Class 2 is more likely to choose "No program," regardless of the levels of the program's other attributes. Class 1, in this case, cares about both carbon jobs and green jobs, whereas Class 2 is not particularly concerned about either kind of job (these marginal utilities are not statistically significantly different from zero for Class 2). However, there is some evidence from this specification to suggest that both groups view revenue recycling directed to workers as desirable, although both classes are statistically indifferent to the share of permits being auctioned.

We explore four different sets of indicator variables that capture a respondent's political ideology and attitude towards climate change. In Stanford and Cameron (2022), we rotate through these four sets of variables and find that all four sets, individually, have strongly significant effects on class membership. In Model (4) of Table 11, we include all four sets of ideological and climate-change attitude variables in the same specification, to determine whether one type of variable dominates when we control for the others. The most statistically significant of the four sets of factors is whether the respondent agrees or strongly agrees that climate change is real, is human-caused, and is a significant threat. The negative coefficients on these indicators imply that these respondents are more likely to belong to Class 2. Climate attitudes are clearly a very strong driver of preferences for cap-and-trade programs. One interesting result is that if we use only the party affiliation indicators (e.g. Democrat, Republican, from results reported in Stanford and Cameron (2022)), Republicans are less enthusiastic about cap-and-trade programs than non-Republicans. However, when we control for attitudes about climate change (the level of threat it presents, and whether it is human-caused), Republican respondents are actually more likely to support cap-and-trade programs.

3.4.4 Program choices: Observable heterogeneity and benefits transfer. Our survey sample was limited to the state of Oregon.21 Oregon is sometimes perceived to be a very liberal state, but there is great variation in sociodemographics and political ideologies across urbanized and rural counties in the state. We estimate Model (5) in Table 11 as a choice model with observed heterogeneity as a function only of census ZIP code tabulation area (ZCTA)-level characteristics for each respondent. We allow the best set of predictive ZCTA characteristics within the Oregon estimating sample to be selected by LASSO methods. If the variability across Oregon ZCTAs is sufficient, compared to the variability across all ZCTAs in the lower-48 U.S. states, it may be safe to transfer a model fitted for Oregon ZCTAs to all ZCTAs in other states.

21 This was an artifact of our funding sources.

Table 10 reports a LASSO-selected set of ZCTA-level characteristics that capture preferences for cap-and-trade policies. We report the means and standard deviations of these ZCTA proportions associated with each respondent in our sample, along with the same ZCTA-level characteristics for the 430 ZCTAs in Oregon, and then for all 33,300 ZCTAs across the lower-48 states of the U.S.22

22 ZCTAs within a handful of relatively liberal urban counties, many in the northern part of Oregon, contain much of the population, and most of the eastern counties are very sparsely populated and relatively conservative. It is worth noting that factions in the eastern part of Oregon have occasionally lobbied to join Idaho, rather than stay with the rest of Oregon.

For Model (5), shown in the last column of Table 11, Table 15 shows the interaction terms that survive the LASSO variable-selection process.
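A minimal sketch of this kind of selection step follows (Python; an L1-penalized logit over attribute-characteristic interactions stands in for the LASSO routine actually used, and all variable names and data are hypothetical):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 6_300  # hypothetical choice occasions

# Hypothetical data: one program attribute and two ZCTA characteristics.
df = pd.DataFrame({
    "d_emissions": rng.uniform(-0.6, -0.1, n),
    "zcta_pct_bachelors": rng.uniform(0.1, 0.6, n),
    "zcta_pct_transit": rng.uniform(0.0, 0.4, n),
})

# Attribute x (demeaned) ZCTA-characteristic interactions, as in Model (5).
for z in ["zcta_pct_bachelors", "zcta_pct_transit"]:
    df[f"d_emissions_x_{z}"] = df["d_emissions"] * (df[z] - df[z].mean())

# Simulated choices with signal on only one interaction, so the penalized
# regression has something real to find (illustrative data-generating process).
index = -1.5 * df["d_emissions"] - 8.0 * df["d_emissions_x_zcta_pct_transit"]
df["chose_program"] = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-index))).astype(int)

features = [c for c in df.columns if c != "chose_program"]

# L1-penalized logit: interactions whose coefficients shrink to zero are dropped.
lasso = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=10_000)
lasso.fit(df[features], df["chose_program"])
kept = [f for f, b in zip(features, lasso.coef_[0]) if abs(b) > 1e-6]
print("surviving interaction terms:", [f for f in kept if "_x_" in f])
```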
The greatest number of dimensions of heterogeneity is evident for the marginal utility from "Any Program," independent of the program's attributes. Neighborhood characteristics that account for a statistically significant increase in a respondent's utility from any program include the proportion of their ZCTA that: is multiracial (of two or more races), is divorced or separated, has a bachelor's degree, has income between $15,000 and $25,000, is a native U.S. citizen but born outside the U.S., is employed in the utilities industry, uses a smartphone to access the internet, or commutes by "other means." Utility from "Any Program" is statistically significantly lower, the greater the proportion of the respondent's ZCTA that: has income of $10,000 or less, or income of $65,000 to $75,000, accesses the internet via a tablet or portable wireless device, heats their home with fuel oil, kerosene, or similar, or has one or two vehicles available for commuting.

Table 15 also shows that the marginal (dis)utility from higher carbon emissions is more negative for someone in a ZCTA with a higher proportion of the population with commuting times of 60 minutes or more (typically people living in suburban areas), and less negative, the higher the proportion of people commuting 10 minutes or less. Protection of carbon jobs is more important in ZCTAs with higher proportions of people employed in wholesale trade, whereas increases in green jobs are more important where higher proportions of people are employed in real estate, renting, or leasing (more likely in urban/suburban areas).

On average, according to Models (1) through (4), the share of permits auctioned does not have a statistically significant effect on the utility people derive from a carbon cap-and-trade program. However, Table 15 reveals that enthusiasm for the auctioning of permits is statistically significantly greater for respondents who live in ZCTAs with higher proportions of: Native Hawaiians or Pacific Islanders, dial-up-only internet access, or people who commute via public transportation (excluding taxis). Enthusiasm for auctioned permits is lower for respondents who live in ZCTAs with higher proportions of: Asians, or people who heat their homes using "Other fuel" or "No fuel."

As to the use of any auction revenues for replacing equipment, there is less support among respondents from ZCTAs with higher proportions of people who speak some other language at home, but still speak English well, and higher proportions of people who are employed in Public Administration. Support for using auction revenues to help workers and communities adjust to higher carbon prices is greater when a ZCTA has a higher proportion of people employed in the Utilities industry, but less when a ZCTA has more people who use "No fuel" for heating.

On average, respondents prefer programs that include new regulations on other pollutants. Table 15 shows that this support is lower, however, the greater the proportion of the respondent's ZCTA that is American Indian or Alaska Native.
However, support for programs with new regulations is stronger, the higher the proportion of people in the respondent's ZCTA who rely only on a computer for internet access.

3.4.5 Implications of estimated models. Because the estimated marginal utility of income differs across the two classes in our latent class models, our estimated utility parameters can be less intuitive to compare across model specifications than the valuation measures they can be used to produce for different types of cap-and-trade programs. Table 16 collects a number of results based on some key ratios of marginal utility parameters, calculated using the distributions of the estimated parameters in each of our five models.23

23 We rely on the wtp.ado code in Stata, developed by Hole (2007), to calculate point and interval estimates for these ratios, with standard errors calculated using the delta method.

We begin with our different models' implied point estimates of the social benefits from a reduction of one ton of carbon emissions (SBC). These numbers can, in principle, be compared to measures in the literature for the social cost of carbon (SCC). Our marginal utilities for proportional carbon reductions must be converted to the implied per-ton basis. Thus our SBC estimates, for Oregon, start with our estimated marginal willingness to pay (MWTP) for carbon reductions. Respondents were shown graphics during the survey conveying that aggregate annual carbon emissions for Oregon are presently about 64 million metric tons, so total carbon emissions per month average about 5,333,333 tons. A 1.0 proportional change in these emissions would thus be about 5,333,333 tons per month. The total number of households in Oregon is about 1,649,000. Thus whatever estimate we get for MWTP_emissions needs to be multiplied by 1,649,000/5,333,333 to yield the aggregate willingness to pay by Oregon households to have Oregon's carbon emissions reduced by 1 metric ton.

According to Model (1) in Table 16, with homogeneous preferences, this SBC estimate is about $47/ton of carbon. Model (2) implies $61. For our two latent-class models, Class 1 preferences imply an SBC of either $64 or $56, and Class 2 preferences imply only $35 or $28. For Model (5), where preferences are allowed to depend on ZCTA-level characteristics, the SBC for a given ZCTA depends on anything which systematically affects the estimated marginal utility of carbon emissions. In Model (5), this utility is affected by the proportion of people who commute less than 10 minutes, or more than 60 minutes (and possibly with access to broadband satellite internet). For Model (5), we estimate the model using demeaned values of all of the interaction terms, so that where these interaction terms are all zero, we obtain an estimate of the SBC "at the means of the data." This estimate is $49.

A second measure of interest for each model, in the second row of Table 16, is a point estimate of the marginal rate of substitution (MRS) between carbon jobs and green jobs. This measure involves the estimated coefficients on the proportional change in carbon jobs and the proportional change in green jobs, in the respondent's own county, as a result of these cap-and-trade programs.
The ratio of the two coefficients is:

  β_{Prop ∆ carbon jobs} / β_{Prop ∆ green jobs} = [∂(Green jobs)/Green jobs] / [∂(Carbon jobs)/Carbon jobs]
      = [∂(Green jobs)/∂(Carbon jobs)] · [Carbon jobs/Green jobs],   (3.14)

so that the MRS (i.e., the number of green jobs willingly sacrificed to keep one carbon job) is:

  MRS(Carbon jobs, Green jobs) = ∂(Green jobs)/∂(Carbon jobs)
      = [β_{Prop ∆ carbon jobs} / β_{Prop ∆ green jobs}] · [Green jobs/Carbon jobs].   (3.15)

If we wish to know how many green jobs people would require, on average, to make up for the loss of one carbon job, we would use the negative of this MRS. The ratio of the two coefficients is constant, but the MRS also depends on the ratio of green jobs to carbon jobs. Thus we cannot quote a specific willingness to be compensated by green jobs for lost carbon jobs without specifying the prevailing ratio of these two types of jobs.24 As a representative ratio of green jobs to carbon jobs, we use the state-level ratio, and report a point estimate and a confidence interval for this representative MRS.25

24 In general, the MRS will be smaller when there are relatively more carbon jobs and fewer green jobs. It will be larger when there are relatively fewer carbon jobs and more green jobs.

25 Stata's wtp.ado algorithm can be adapted to calculate point and interval estimates for this ratio.

For our models with homogeneous preferences, our estimates suggest that, on average, people need about 1.27 to 1.72 green jobs to make up for each carbon job lost due to a cap-and-trade policy. However, our two latent class models reveal very different implications across the two preference classes. In each case, the relatively "pro-cap-and-trade" preference class would be satisfied, on average, with only about one new green job for each carbon job lost. From Model (3), we know that Class 1 membership is more likely for younger people who are college graduates, males, and people who expect to stay in the state for at least two more decades. These groups may be more confident that they would be able to retool for a career in a green job, rather than a carbon-sector job. Preference Class 2, however, the anti-cap-and-trade class, would demand about 2.2 to 6.8 new green jobs to make up for each carbon job lost. These people are more likely not to be college graduates, to be female, to be older, and to expect to reside in Oregon for less than 20 more years.26

26 The statistically significant but negative effect on Class 1 membership of knowing one's ancestors beyond great-great-grandparents may seem counter-intuitive. We had been curious about whether people who pay attention to their family history might be more likely to care about what their descendants think of their position on climate change policies. Instead, it may simply be the case that older people are more likely to be interested in genealogy than younger people, but younger people feel greater urgency about limiting climate change.

The remaining rows in Table 16 report, for each of our models, the calculated marginal willingness to pay (MWTP) per month for a program with one more unit of each attribute, along with a confidence interval. For attributes described as proportions or shares, these marginal WTP amounts are for a 1.00 (or 100%) proportional change, or for a 1.00 (or 100%) change in share. The reported marginal WTP amounts could be scaled to a 1% change in jobs, or a 1% change in share, by dividing the MWTP estimate (and its confidence bounds) by 100.
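Both conversions are simple to verify numerically. The following sketch (Python, with illustrative inputs rather than our estimates) reproduces the per-ton scaling described above and the MRS rescaling in equation (3.15):

```python
# Converting a marginal WTP for a 1.0 proportional emissions cut into $/ton (SBC),
# and rescaling the jobs-coefficient ratio into an MRS. Inputs are illustrative.
HOUSEHOLDS_OR = 1_649_000          # approximate number of Oregon households
TONS_PER_MONTH = 64_000_000 / 12   # ~5,333,333 metric tons of CO2 per month

mwtp_emissions = 152.0             # hypothetical $/household/month for a 1.0 cut
sbc_per_ton = mwtp_emissions * HOUSEHOLDS_OR / TONS_PER_MONTH
print(f"SBC ~ ${sbc_per_ton:.0f}/ton")   # ~ $47/ton with these inputs

# MRS between carbon jobs and green jobs, equation (3.15): the coefficient ratio
# must be scaled by the prevailing green-jobs/carbon-jobs ratio.
b_carbon_jobs, b_green_jobs = 1.5, 1.2   # hypothetical marginal utilities
green_to_carbon = 1.1                    # hypothetical state-level jobs ratio
mrs = (b_carbon_jobs / b_green_jobs) * green_to_carbon
print(f"green jobs needed per carbon job lost ~ {mrs:.2f}")
```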
The remaining rows in Table 16 report, for each of our models, the calculated marginal willingness to pay (MWTP) per month for a program with one more unit of each attribute, along with a confidence interval. For attributes described as proportions or shares, these marginal WTP amounts are for a 1.00 (i.e., 100%) proportional change, or for a 1.00 (100%) change in share. The reported marginal WTP amounts can be scaled to a 1% change in jobs, or a 1% change in share, by dividing the MWTP estimate (and its confidence bounds) by 100.

First, consider the baseline WTP for any type of cap-and-trade program, regardless of its attributes. This MWTP is implied by the coefficient on the "Any Program" indicator. For Models (1) and (2), the estimated (mean) preferences suggest a positive WTP for any program, although this baseline positive WTP could be offset by too-small reductions in carbon emissions or too-great losses of carbon jobs. Each of our two latent-class models, however, reveals very different baseline WTP amounts for cap-and-trade programs across the two preference classes. In both latent class models, Class 1 has a baseline WTP for a cap-and-trade program on the order of about $253 to $284. Class 2, however, has a negative baseline WTP, on the order of -$75 to -$107. This baseline, however, could also be overcome by a sufficiently large reduction in carbon emissions, a sufficiently small loss of carbon jobs, and/or a sufficiently large increase in green jobs, augmented by new regulations on other pollutants.

Second, the distributional effects of alternative cap-and-trade programs will be felt partly through their effects on employment in different sectors. Models (1), (2), and (5), with single estimates of this measure, imply that proportional changes in carbon-sector jobs are valued more highly than proportional changes in green jobs. However, the two latent-class specifications reveal that the pro-cap-and-trade Class 1 preferences imply a willingness to pay much more for jobs of either type than do Class 2 preferences. The difference for Class 2 preferences in Model (4) is particularly striking. Class 2 membership is dominated by people who do not think climate change is a threat, or that it is human-caused. Controlling for those attitudes about climate change, they also tend not to be Republicans. This preference class has a marginal willingness to pay for proportional changes in carbon jobs of only $154, and for green jobs of only $79. The corresponding MWTP estimates for Class 1 in Model (4) are $527 and $589. Clearly, according to Model (4), social justice in terms of the distribution of employment impacts is vastly more important to Class 1 than to Class 2.

Finally, the MWTP for cap-and-trade policies that include new regulations on other pollutants is positive for all our models. Distributional effects of climate policies may also depend on the extent to which transactions involving carbon permits move production around spatially. If increases in carbon emissions are correlated with increases in other types of pollution, residents in the vicinity of plants that buy carbon permits could be at a disadvantage. Many people care about the environmental justice implications of cap-and-trade programs. Models (1), (2), and (5), and Class 1 preferences in both of our latent class specifications, imply that programs with new regulations on other pollutants are valued about $53 to $62 higher per month than programs without this feature. For Class 2 in both of our latent class models, however, the marginal WTP for a program with new regulations on other pollutants is only about half that size ($29 to $35). MWTP for this feature is still positive for Class 2, but this group appears to be less concerned about the environmental justice implications of any cap-and-trade program.
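As footnote 23 notes, the MWTP intervals in Table 16 come from Stata's wtp.ado with delta-method standard errors. A back-of-the-envelope version of that calculation, under the simplifying (and counterfactual) assumption of zero covariance between the two coefficient estimates, can be sketched in Python; the inputs are Model (1)'s cost and new-regulations estimates from Table 11:

```python
# Sketch of an MWTP point estimate and delta-method standard error for a
# single attribute, assuming Cov(b_attr, b_cost) = 0 (the full calculation
# would use the estimated covariance matrix). Values are Model (1)'s
# estimates from Table 11.
import math

b_cost, se_cost = -0.00308, 0.000290
b_regs, se_regs = 0.186, 0.0528

mwtp = -b_regs / b_cost  # dollars per month, about $60

# Delta method for the ratio r = -b/c with zero covariance assumed:
var = (mwtp**2) * ((se_regs / b_regs)**2 + (se_cost / b_cost)**2)
se = math.sqrt(var)
print(round(mwtp), round(mwtp - 1.96 * se), round(mwtp + 1.96 * se))
# -> 60 25 96, close to Table 16's reported 60 (25, 96) for Model (1)
```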
3.4.5.1 Benefit function transfer to all ZCTAs in the lower-48 U.S. states.

To accomplish this admittedly crude benefits-transfer exercise, we need to assume that our fitted preference parameters (from Model (5), with observable heterogeneity at the respondent's ZCTA level), employed with ZCTA-level characteristics for any other ZCTA in the U.S., can predict an approximate willingness to pay for a cap-and-trade program with specific characteristics for a representative household in that ZCTA.

We could consider any cap-and-trade program that could be described by levels of the attributes we include in our study. For simplicity, however, we calculate the representative total WTP for each ZCTA in the lower-48 U.S. states for just two programs: (a) a "basic" program that reduces carbon emissions by 40 percent but (artificially) involves no change in the numbers of carbon jobs or green jobs, and (b) an "alternative" program, also with a 40 percent carbon emissions reduction, but with an arbitrary 10% decrease in county-level carbon jobs, a 10% increase in county-level green jobs, 50% of carbon permits auctioned (with 30% of auction revenues going towards equipment (re-tooling) and 30% going to help workers and communities), and with new regulations on other pollutants. The "basic" program auctions no permits, so the shares of revenue for equipment or for workers are also zero, and no new regulations for other pollutants are involved. By setting all the other attributes to zero, we can focus on ZCTA-level estimates of a representative resident's willingness to pay for a 40 percent carbon reduction by cap-and-trade, before factoring in the (distributional) effects of the program on carbon jobs and green jobs, different attitudes towards the auctioning of permits and revenue recycling (another distributional issue), or the need for new regulations to limit other pollutants (also a distributional concern). For this benefit-function transfer exercise, the only heterogeneity in Model (5) that matters is the heterogeneity in the marginal utility from "Any Program" and the heterogeneity in the marginal utility from a proportional change in carbon emissions.
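For concreteness, the prediction step can be sketched as follows. All names are illustrative; the baseline coefficients are Model (5)'s "when shifters = 0" column from Table 11, and the g/x vectors stand in for however many LASSO-selected interaction terms (Table 15) apply. At the means of the data (all demeaned shifters zero), these baseline coefficients imply roughly $132 per month for the basic program; this is an implication of the Table 11 coefficients under the assumptions of this sketch, not a number reported elsewhere.

```python
# Sketch of the ZCTA-level benefit-function transfer for the "basic"
# program (AnyProgram = 1, a -0.40 proportional change in emissions,
# all other attributes zero). Names are illustrative.
import numpy as np

def wtp_basic(b_cost, b_any, b_emis, g_any, g_emis, x_any, x_emis):
    """Predicted monthly WTP for a representative household in one ZCTA.

    b_*: baseline marginal utilities (cost, any-program, emissions)
    g_*: vectors of interaction coefficients (Table 15)
    x_*: matching vectors of demeaned ZCTA characteristics
    """
    mu_any = b_any + g_any @ x_any       # ZCTA-specific any-program utility
    mu_emis = b_emis + g_emis @ x_emis   # ZCTA-specific emissions utility
    d_utility = mu_any + mu_emis * (-0.40)
    return d_utility / (-b_cost)         # convert utils to dollars/month

# At the means of the data (all demeaned shifters zero), Model (5) gives:
print(round(wtp_basic(-0.00344, 0.239, -0.541,
                      np.zeros(3), np.zeros(3),
                      np.zeros(3), np.zeros(3))))  # ~132 dollars/month
```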
3.4.5.2 Distribution of WTP for cap-and-trade programs across ZCTAs.

Our benefit-function transfer process yields a distribution of 33,300 different predicted median WTP amounts, one for each ZCTA in the lower-48 U.S. states. The heterogeneity in preferences identified by our LASSO variable selection is considerable. To illustrate this heterogeneity, we provide just one example of the differing marginal distributions of WTP for our "basic" and "alternative" cap-and-trade programs. We split the 33,300 ZCTAs by terciles of one ZCTA characteristic (here, the proportion of the population commuting by public transit, not including taxis). We display the marginal distribution of median WTP amounts for each tercile (11,100 ZCTAs each) and report the median WTP for each tercile in the legend to the densities. We report the overall national median WTP (after weighting by ZCTA populations) below each combined density plot.

For our "basic" program, these ZCTA public-transit tercile distributions are shown in Figure 3. Across all ZCTAs in the lower-48 states, the population-weighted median WTP for the basic program is $70. Median WTP for the lower tercile of public transit use is $26, while for the upper tercile it is $145. Furthermore, within each tercile, there is extensive heterogeneity in WTP for the basic program, due to other characteristics of each ZCTA. Only 0.64% of the 33,300 ZCTAs have predicted WTP for this basic program greater than $800.27

For our "alternative" program, analogous ZCTA public-transit tercile distributions are shown in Figure 4. Across all ZCTAs in the lower-48 states, the population-weighted median WTP for this alternative program is larger, at $114. Median WTP amounts now decline with increasing terciles of public transit use (from $150 to $116 to $70), rather than increasing across terciles, as in the case of the basic program. WTP for this alternative program is sufficiently higher that 5.06% of ZCTAs have predicted WTP amounts in excess of $800.28

27 Outlier amounts for predicted WTP are due to outlier values of one or more of the (continuously measured) ZCTA proportions for the characteristics that LASSO has selected for our model with observable heterogeneity.
28 Again, outlier proportions of one or more characteristics in ZCTA populations account for outlier WTP estimates.

3.4.5.3 Spatial heterogeneity in WTP for different cap-and-trade programs.

To visualize the spatial heterogeneity in WTP to reduce carbon emissions by cap-and-trade (distinct from any distributional consequences of such a program), we apply our model to a representative household in each of the 33,300 ZCTAs in the lower-48 states of the U.S. For each ZCTA, we make a large number of draws from the joint distribution of our parameter estimates and calculate an estimate of WTP for each draw. We then calculate both the mean and median WTP estimates for that ZCTA, and map the spatial variation in predicted median WTP across ZCTAs. For our "basic program" example, this spatial heterogeneity is shown in the map in Figure 5. For our arbitrarily chosen "alternative program" example, different people's marginal WTP for the potential distributional consequences of non-basic cap-and-trade programs can be illustrated. The spatial heterogeneity in median WTP for our alternative program is shown in the map in Figure 6.
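The draw-and-summarize step just described is a Krinsky-Robb-style parametric simulation. A minimal sketch, with hypothetical names (beta_hat, vcov, and wtp_fn, the last standing in for a prediction function like the one sketched earlier):

```python
# Minimal sketch of the simulation step: draw parameter vectors from the
# estimated joint (asymptotic normal) distribution, compute WTP for each
# draw, then summarize. All names are illustrative.
import numpy as np

def simulate_wtp(beta_hat, vcov, wtp_fn, zcta_x, n_draws=10_000, seed=42):
    """beta_hat: estimated coefficient vector; vcov: its covariance matrix;
    wtp_fn: maps (one parameter draw, one ZCTA's characteristics) -> WTP."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(beta_hat, vcov, size=n_draws)
    wtps = np.array([wtp_fn(b, zcta_x) for b in draws])
    return {"mean": wtps.mean(),
            "median": np.median(wtps),
            "ci90": np.percentile(wtps, [5, 95])}
```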
3.5 Directions for Future Research

Based on our study, respondents do not universally have strong preferences for the percent of permits auctioned or for the uses of the auction revenue, although our mixed logit model with correlated marginal utilities suggests there is significant heterogeneity across the population in preferences for the share of permits auctioned and the share of revenue allocated to workers and communities. Thus, people's auction and revenue-recycling preferences might vary across the population in ways that cancel out, on average. Certainly, the LASSO-selected heterogeneity included in Model (5) suggests that people are not universally indifferent to these attributes. Future explorations of models allowing for attribute interactions are also a possible extension of our work. Another possible explanation for the apparent lack of importance of some cap-and-trade program design features (such as the percent of permits auctioned and the uses of auction revenue) could, of course, be that our survey was overly complex for the average respondent. Statistically insignificant marginal utility estimates may be a result of inattention to these features on the part of a sufficient number of respondents. Further analysis of the relationship between respondents' measured and self-reported attention to different program features may help clarify these less-robustly estimated average marginal utilities.

We are not yet entirely confident about the reliability of our benefit-function transfer exercise, in which we estimate our model of heterogeneous preferences based on observable ZCTA-level characteristics for each of our 1,050 Oregon respondents, and then use this model of ZCTA-level representative preferences to extend our predictions of WTP for specific types of cap-and-trade programs to all ZCTAs in the lower-48 U.S. states. We plan to enhance our model with ZCTA-level observable heterogeneity to include county-level vote shares for the 2020 Presidential election. Given that our latent class model with heterogeneity in terms of respondent ideology and climate-change attitudes achieved the highest maximized log-likelihood of all our models, we expect that voting data (available nationally only at the county level, as opposed to the ZCTA level) may greatly improve the predictive power of our model. As a strategy for limiting the influence of outliers in our benefit-transfer exercise, we may also explore binning for our continuous ZCTA proportions.

3.6 Conclusions

We have described a choice-experiment study of preferences over a variety of attributes of potential carbon cap-and-trade programs to be implemented at the state level. We have considered specifications that assume homogeneous preferences, as well as heterogeneous preferences. In the second category, we include a random-preferences model with purely unobservable heterogeneity, two different latent-class specifications (one with class membership determined by a small set of respondent characteristics and one with class membership determined by respondent attitudes/ideologies), and a model with respondent neighborhood-level observable heterogeneity only (designed to facilitate benefit-function transfer on a national scale).

For our models with a single central tendency for marginal utilities (i.e., Model (1), the homogeneous clogit, and Model (2), the independent mixed logit), the preference parameters are statistically significant for (a) cost per month, (b) an any-program indicator, (c) carbon emissions reductions, (d) carbon-intensive jobs, (e) green jobs, and (f) additional regulations.29 Our estimate of average MWTP for a 1.0 proportional change in carbon emissions by 2050 is between about $150 and $200, including the wider variety of models also discussed in Stanford and Cameron (2022). However, proportional changes that large (i.e., 100 percent) were not included in our study. A recent proposal by the state of Oregon suggests reducing emissions by 45% relative to 1990 levels by 2035, and by 80% relative to 1990 levels by 2050. Graphics in our survey for the time trend in Oregon's emissions suggest that 1990 levels were about 57 million metric tons of carbon dioxide equivalent, and that 2021 levels at the time of the survey were about 61 million metric tons. Our survey described the emissions reduction target as "the percent reduction of total annual carbon emissions in Oregon by the year 2050, relative to Oregon's current carbon emissions." The suggested 2050 policy target thus implies reducing total annual emissions to just 11.4 million tons. This 80% emissions reduction was the largest proportional reduction in our randomized choice experiment designs (where we asked respondents to consider reductions between 10% and 80%).
But a reduction from the current 61 million tons to just 11.4 million tons would be about an 81% reduction, or a "Prop change in C emissions" of -0.81. For our fixed-coefficients logit specification, the MWTP estimate implies an average willingness to pay, per month, of $153 × 0.81 ≈ $124 to achieve the state's goal of roughly an 80% reduction by 2050. By a similar calculation, however, our correlated mixed logit model implies an average willingness to pay, per month, of about $166. A rough interval estimate would be about $120 to about $212.

29 One caveat relating to these results is that our sample-selection correction method, upon which these results are based, is currently an ad hoc approach based on individual deviations from the mean survey response propensity among eligible respondents. There is some evidence that preferences may differ to some extent between the respondent sample and the general population. Our sample-selection treatment is more rigorous than most, but could still be improved upon. See a dissertation chapter in Mitchell-Nelson (2022) for a more-sophisticated approach.
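These back-of-the-envelope figures are easy to reproduce; a quick check in Python, using the survey figures quoted above:

```python
# Back-of-the-envelope check of the 2050 target calculation above.
EMISSIONS_1990 = 57.0   # million metric tons CO2e (survey graphic)
EMISSIONS_2021 = 61.0   # million metric tons CO2e (survey graphic)

target_2050 = EMISSIONS_1990 * (1 - 0.80)            # 80% below 1990 levels
prop_change = (target_2050 - EMISSIONS_2021) / EMISSIONS_2021
print(round(target_2050, 1), round(prop_change, 2))  # 11.4, -0.81

# Fixed-coefficients logit: MWTP of $153/month per 1.0 proportional change
print(round(153 * abs(prop_change)))                 # ~124 dollars/month
```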
Our preliminary findings suggest that, in addition to the obvious costs and benefits of a carbon reduction policy (program cost and emissions reductions), a cap-and-trade program's effect on jobs is of great importance to the public. A consistently significant and positive sign on the "Any Program" marginal utility estimate indicates that, on average, the public is supportive of a carbon cap-and-trade policy, regardless of its specific attributes. These two results, taken together, suggest that successful implementation of a carbon program in Oregon is likely to be highly dependent on designing a policy that navigates the contentious "just transition" debate.

In our models with heterogeneous preferences, the presence of additional regulations to limit emissions of other pollutants is generally a statistically significant determinant of support for carbon cap-and-trade programs. This result suggests the likely importance of additional protections for local residents who live around facilities that may buy large numbers of carbon permits if a cap-and-trade program is implemented.

Our estimates for the Social Benefits of Carbon (SBC) emissions reductions represent a useful complementary measure of the extent to which society may value these emissions reductions. The customary benefits measure is, instead, an avoided-cost measure: the Social Cost of Carbon (SCC), or the marginal avoided impact of greenhouse gas emissions, which has received a considerable amount of attention. Nevertheless, there is no consensus on the true size of the SCC. Depending on the researcher, agency, administration, or country, estimates of the SCC vary widely. Under the Obama administration, the SCC was estimated at $50 per tonne of CO2 (in 2020 US dollars), with a range of $15-$75, while the Trump administration revalued the SCC at $1-$7 per tonne (Wagner et al. (2021)). Subsequently, the Biden administration has established an interim central value of $51/ton, and a range between $14 and $152/ton. It is not surprising that the official SCC is highly dependent on the presiding administration, considering the strong partisan differences in WTP for carbon emissions reductions identified in this study and in other research. However, estimates of the SCC also vary markedly within the field of economics. This uncertainty is due in large part to the fact that the question of how to properly measure the SCC is still up for debate (Pindyck (2019)). Nevertheless, improving estimation of the SCC is imperative to designing appropriate climate change policies (Aldy, Kotchen, Stavins, and Stock (2021)).

Our Social Benefit of Carbon (reduction), or SBC, measures the overall social willingness to pay for a one-tonne reduction in carbon emissions via a cap-and-trade program where the increased costs will be borne by individual households. Our estimate for our homogeneous clogit is only $47/ton, but for our more-general mixed logit models, the estimate is $61 (or as high as $64 in the correlated mixed logit reported in Stanford and Cameron (2022)), consistent with the Obama and Biden administrations' values.30 However, our latent class model estimates reveal that different political and ideological groups in society have substantially different SBC estimates.

30 The Trump administration's valuation of less than $10/ton is largely ignored as a legitimate estimate of the SCC.

In comparison to the SCC values laid out by past and current administrations, our SBC estimates tend to be reasonably similar, which suggests reasonable convergent validity for our estimates. This convergent validity is important because the SCC is a fundamentally different way to measure the benefits of carbon emissions reductions, based on avoided damages, compared to our SBC measure, which is based on individual willingness to pay for carbon reductions. The consistency between the SCC and SBC measures suggests that the SBC may be well-suited as an alternative way to measure the effects of carbon emissions on social welfare. This stated-preference approach has the potential to improve benefit-cost analyses of public programs to reduce carbon emissions, and affords an option to consider in more detail the distributional consequences of these policies.

Figure 3. Distribution of predicted WTP for basic cap-and-trade program. Across 33,300 lower-48-state ZCTAs, distribution of predicted WTP for the basic cap-and-trade program (40% carbon emissions reduction and no other special features). Example: split by terciles of the proportion of the population commuting by public transit. Overall national median WTP = $70 per month.

Figure 4. Distribution of predicted WTP for alternative cap-and-trade program. Across 33,300 lower-48-state ZCTAs, distribution of predicted WTP for the alternative cap-and-trade program ("(-4,-1,1,5,3,3,1,1)" means 40% carbon emissions reduction, 10% decline in county carbon jobs, 10% increase in county green jobs, 50% of permits auctioned, 30% of auction revenue to equipment, 30% of auction revenue to workers/communities, additional regulations on other pollutants, and an "any program" indicator switched on). Example: split by terciles of the proportion of the ZCTA population commuting by public transit. Overall national median WTP = $114 per month.

Figure 5. Map of predicted WTP for basic cap-and-trade program. By ZCTA: predicted WTP for the basic cap-and-trade program (40% carbon emissions reduction and no other special features).
Figure 6. Map of predicted WTP for alternative cap-and-trade program. By ZCTA: predicted WTP for the alternative cap-and-trade program (40% carbon emissions reduction, 10% decrease in county carbon jobs, 10% increase in county green jobs, 50% of permits auctioned, 30% of auction revenue to equipment, 30% of auction revenue to workers/communities, additional regulations on other pollutants).

Table 9. Descriptive statistics for cap-and-trade programs and for alternative ways of capturing individual preference heterogeneity. Entries are means with standard deviations in parentheses.

1. Cap-and-trade program attributes (all models, n = 6,300 programs)
Monthly cost 195.31 (101.87)
Prop change in C emissions -0.456 (0.226)
Prop change in carbon jobs -0.112 (0.064)
Prop change in green jobs 0.110 (0.062)
Share of permits auctioned 0.458 (0.230)
Share of rev. for equip. 0.304 (0.213)
Share of rev. to workers 0.305 (0.215)
1=New regs other pollut. 0.495 (0.500)

2. Preference heterogeneity: Individual demographics (Model 3, n = 1,050 respondents)
1=College graduate 0.427 (0.495)
1=Income greater than 75K 0.441 (0.497)
1=Identifies as non-male 0.527 (0.499)
1=Own age:18-34 0.335 (0.472)
1=Own age:65+ 0.215 (0.411)
1=Has no children -0.000 (0.482)
1=Knows ancestors beyond gg grndprts 0.403 (0.490)
1=Has resided in Oregon 18+ years 0.191 (0.393)
1=Expect reside Oregon 20+ years 0.224 (0.417)

3. Preference heterogeneity: Ideology and opinions about climate change (Model 4, n = 1,050 respondents)
1=Identifies as Democrat 0.420 (0.494)
1=Identifies as Republican 0.246 (0.431)
1=Ideology:Strongly liberal 0.178 (0.383)
1=Ideology:Somewhat liberal 0.207 (0.405)
1=Ideology:Somewhat conservative 0.164 (0.370)
1=Ideology:Strongly conservative 0.099 (0.299)
1=Str.agree clim.change a threat 0.592 (0.491)
1=Agree clim.change a threat 0.223 (0.416)
1=Disagree clim.change a threat 0.048 (0.213)
1=Str.disagree clim.change a threat 0.024 (0.152)
1=Str.agree clim.change human-caused 0.450 (0.497)
1=Agree clim.change human-caused 0.309 (0.462)
1=Disagree clim.change human-caused 0.068 (0.251)

Table 10. Descriptive statistics for ZCTA-level heterogeneity in preferences selected by LASSO for Model 5. Each row compares means (followed by the three standard deviations in parentheses) for (1) the estimating sample ZCTA data, (2) all Oregon ZCTA data, and (3) national (lower-48 U.S. states) ZCTA data (unweighted), as employed in the benefit-function transfer exercise.

ZCTA pr:Two or more races 0.077 0.064 0.046 (0.027) (0.050) (0.055)
ZCTA pr:American Indian/Alaska Native 0.010 0.017 0.014 (0.012) (0.061) (0.077)
ZCTA pr:Asian 0.048 0.018 0.022 (0.055) (0.033) (0.055)
ZCTA pr:Native Hawaiian/Pacific Islander 0.004 0.002 0.001 (0.005) (0.007) (0.006)
ZCTA pr:Other lang., but English good 0.100 0.060 0.071 (0.059) (0.065) (0.103)
ZCTA pr:Divorced or separated 0.143 0.144 0.129 (0.031) (0.070) (0.080)
ZCTA pr:High school grad. (incl. equiv.) 0.220 0.279 0.323 (0.084) (0.129) (0.149)
ZCTA pr:Bachelor’s degree 0.219 0.164 0.159 (0.094) (0.104) (0.110)
ZCTA pr:Income=10K or less 0.132 0.144 0.139 (0.034) (0.075) (0.083)
ZCTA pr:Income=15K to 25K 0.124 0.142 0.132 (0.030) (0.091) (0.075)
ZCTA pr:Income=65K to 75K 0.044 0.040 0.039 (0.012) (0.038) (0.035)
ZCTA pr:Native US cit.; born outside US 0.013 0.009 0.010 (0.006) (0.010) (0.020)
ZCTA pr:Indus=Mine/quarry/oil/gas extr. 0.001 0.003 0.010 (0.002) (0.027) (0.040)
ZCTA pr:Indus=Wholesale trade 0.025 0.022 0.023 (0.012) (0.031) (0.035)
ZCTA pr:Indus=Utilities 0.008 0.010 0.011 (0.006) (0.020) (0.028)
ZCTA pr:Indus=Information 0.016 0.014 0.013 (0.011) (0.027) (0.025)
ZCTA pr:Indus=Real estate/rent/lease 0.020 0.016 0.014 (0.011) (0.020) (0.028)
ZCTA pr:Indus=Mgt. of companies/enterp 0.002 0.001 0.001 (0.003) (0.003) (0.004)
ZCTA pr:Indus=Arts/entert./recr. 0.021 0.021 0.018 (0.016) (0.042) (0.038)
ZCTA pr:Indus=Public administration 0.047 0.057 0.050 (0.029) (0.083) (0.063)
ZCTA pr:Smartphone 0.883 0.792 0.783 (0.049) (0.189) (0.187)
ZCTA pr:Tablet/portable wireless 0.650 0.577 0.557 (0.073) (0.179) (0.186)
ZCTA pr:Other computer only 0.000 0.000 0.000 (0.001) (0.001) (0.004)
ZCTA pr:Dial-up only 0.003 0.006 0.004 (0.004) (0.019) (0.019)
ZCTA pr:Broadband satellite internet 0.064 0.129 0.094 (0.048) (0.144) (0.099)
ZCTA pr:Heat=Bottled tank LP gas 0.016 0.051 0.148 (0.027) (0.096) (0.188)
ZCTA pr:Heat=Fuel oil kero etc. 0.013 0.043 0.075 (0.017) (0.081) (0.166)
ZCTA pr:Heat=Solar energy 0.001 0.001 0.001 (0.003) (0.006) (0.009)
ZCTA pr:Heat=Other fuel 0.007 0.013 0.013 (0.011) (0.021) (0.039)
ZCTA pr:Heat=No fuel used 0.004 0.005 0.006 (0.006) (0.012) (0.020)
ZCTA pr:Commute=10 min or less 0.158 0.214 0.156 (0.085) (0.183) (0.145)
ZCTA pr:Commute=25 to 29 min 0.066 0.053 0.065 (0.033) (0.055) (0.070)
ZCTA pr:Commute=45 to 59 min 0.065 0.080 0.083 (0.041) (0.106) (0.088)
ZCTA pr:Commute=60 min or more 0.058 0.081 0.091 (0.035) (0.102) (0.099)
ZCTA pr:Commute=1 vehicle avail 0.204 0.150 0.163 (0.087) (0.118) (0.132)
ZCTA pr:Commute=2 vehicles avail 0.399 0.346 0.373 (0.072) (0.165) (0.168)
ZCTA pr:Commute=public transp (not taxi) 0.036 0.013 0.015 (0.041) (0.031) (0.054)
ZCTA pr:Commute=other means 0.189 0.199 0.128 (0.091) (0.172) (0.126)
Observations 1050 430 33300

Table 11. Differences in baseline marginal utility parameter estimates across alternative specifications. Models (2) through (5) include additional parameters that allow for heterogeneous preferences, presented in subsequent tables. Coefficients appear in the column order below, with standard errors in parentheses on the following line.

Columns: (1) Homogeneous clogit; (2) Independent mixed logit; (3) Latent Class, demographics (class 1, class 2); (4) Latent Class, ideology (class 1, class 2); (5) For transfer (when shifters = 0)

Monthly cost -0.00308∗∗∗ -0.00843∗∗∗ -0.00465∗∗∗ -0.0127∗∗∗ -0.00500∗∗∗ -0.0164∗∗∗ -0.00344∗∗∗
(0.000290) (0.000846) (0.000447) (0.00132) (0.000395) (0.00218) (0.000305)
1=Any program 0.224∗ 0.406 1.319∗∗∗ -0.957∗∗∗ 1.268∗∗∗ -1.747∗∗∗ 0.239∗∗
(0.115) (0.275) (0.186) (0.346) (0.161) (0.565) (0.118)
Prop change in C emissions -0.469∗∗∗ -1.650∗∗∗ -0.954∗∗∗ -1.454∗∗∗ -0.909∗∗∗ -1.458∗∗ -0.541∗∗∗
(0.139) (0.270) (0.223) (0.416) (0.196) (0.639) (0.144)
Prop change in carbon jobs 1.433∗∗∗ 4.607∗∗∗ 2.908∗∗∗ 3.521∗∗ 2.638∗∗∗ 2.519 1.539∗∗∗
(0.448) (0.802) (0.656) (1.502) (0.590) (2.141) (0.463)
Prop change in green jobs 0.945∗∗ 4.114∗∗∗ 3.668∗∗∗ 0.585 2.949∗∗∗ 1.295 1.117∗∗
(0.457) (1.118) (0.864) (1.362) (0.710) (1.940) (0.480)
... × (dm:R̂P) -0.702 -3.579 2.796∗∗∗ -0.210 1.183 -2.428 -0.626
(0.481) (2.794) (1.067) (1.668) (0.801) (2.577) (0.555)
Share of permits auctioned 0.0346 -0.0330 -0.0262 0.146 0.00734 0.311 0.0321
(0.117) (0.195) (0.185) (0.352) (0.163) (0.508) (0.122)
Share of rev. for equip. -0.205 -0.300 -0.0362 -0.536 -0.108 -0.282 -0.162
(0.128) (0.225) (0.199) (0.398) (0.177) (0.648) (0.134)
Share of rev. to workers 0.170 0.537∗∗ 0.316 0.674∗ 0.292 1.004∗ 0.185
(0.127) (0.267) (0.204) (0.383) (0.179) (0.579) (0.131)
1=New regs other pollut. 0.186∗∗∗ 0.509∗∗∗ 0.289∗∗∗ 0.449∗∗∗ 0.264∗∗∗ 0.471∗ 0.184∗∗∗
(0.0528) (0.0994) (0.0831) (0.159) (0.0737) (0.253) (0.0547)
See subsequent tables for additional parameters in each specification: n/a, Table 12, Table 13, Table 14, Table 15
Max. log-likelihood -4268.58 -3188.72 -3287.83 -3181.54 -3980.82
No. respondents 1050 1050 1050 1050 1050
No. choices 6300 6300 6300 6300 6300
No. alternatives 12600 12600 12600 12600 12600
Standard errors in parentheses. For all models, observations are weighted by county population proportions in sample versus general population of the state.

Table 12. Additional parameters for Model (2) in Table 11: marginal utility parameter standard deviation estimates for the independent mixed logit specification. Coef. Est. (Std. Err.)
σ (1=Any program) -3.056∗∗∗ (0.269)
σ (Prop change in C emissions) -1.554 (3.810)
σ (Prop change in carbon jobs) -0.502 (2.723)
σ (Prop change in green jobs) -3.845 (2.432)
σ (Prop change in green jobs × dm:R̂P) 11.48∗∗∗ (2.052)
σ (Share of permits auctioned) -0.134 (0.429)
σ (Share of rev. for equip.) -1.006 (1.569)
σ (Share of rev. to workers) -2.103 (3.141)
σ (1=New regs other pollut.) -0.703∗∗ (0.293)

Table 13. Additional parameters for Model (3) in Table 11: submodel for Class 1 membership propensity as a function of demographic variables. Coef. Est. (Std. Err.)
1=College graduate 0.338∗∗ (0.152)
1=Income greater than 75K 0.213 (0.153)
1=Identifies as non-male -0.402∗∗∗ (0.142)
1=Own age:18-34 0.779∗∗∗ (0.172)
1=Own age:65+ -0.108 (0.184)
1=Has no children -0.00796 (0.154)
1=Knows ancestors beyond gg grndprts -0.385∗∗∗ (0.142)
1=Has resided in Oregon 18+ years -0.254 (0.161)
1=Expect reside Oregon 20+ years 0.311∗∗ (0.153)
Constant 0.194 (0.236)

Table 14. Additional parameters for Model (4) in Table 11: submodel for Class 1 membership propensity as a function of ideology and opinions about climate change. Coef. Est. (Std. Err.)
1=Identifies as Democrat 0.216 (0.213)
1=Identifies as Republican 0.432∗ (0.254)
1=Ideology:Strongly liberal 0.157 (0.275)
1=Ideology:Somewhat liberal 0.363 (0.252)
1=Ideology:Somewhat conservative -0.123 (0.258)
1=Ideology:Strongly conservative -0.386 (0.342)
1=Str. agree clim. change a threat 1.906∗∗∗ (0.336)
1=Agree clim. change a threat 1.200∗∗∗ (0.301)
1=Disagree clim. change a threat -0.587 (0.670)
1=Str. disagree clim. change a threat -2.131 (1.439)
1=Str.agree clim. change human-caused 0.793∗∗∗ (0.294)
1=Agree clim. change human-caused 0.602∗∗ (0.257)
1=Disagree clim. change human-caused -0.702 (0.504)
Constant -1.434∗∗∗ (0.287)

Table 15. Additional parameters for Model (5) in Table 11: LASSO-selected systematic variation in marginal utilities according to ZCTA-level characteristics (controlling for demeaned individual fitted response propensity). Coef. Est. (Std. Err.)
1=Any program
... × (ZCTA pr:Two or more races) 7.470∗∗∗ (1.350)
... × (ZCTA pr:Asian) 0.0864 (1.888)
... × (ZCTA pr:Divorced or separated) 7.050∗∗∗ (1.423)
... × (ZCTA pr:High school grad. (incl. equiv.)) -1.680 (1.108)
... × (ZCTA pr:Bachelor’s degree) 3.476∗∗∗ (1.131)
... × (ZCTA pr:Income=10K or less) -4.391∗∗∗ (1.158)
... × (ZCTA pr:Income=15K to 25K) 4.976∗∗∗ (1.750)
... × (ZCTA pr:Income=65K to 75K) -10.53∗∗∗ (3.156)
... × (ZCTA pr:Native US cit.; born outside US) 14.55∗∗ (6.239)
... × (ZCTA pr:Indus=Utilities) 4.123 (9.146)
... × (ZCTA pr:Indus=Public administration) -0.500 (1.933)
... × (ZCTA pr:Smartphone) 5.950∗∗∗ (1.125)
... × (ZCTA pr:Tablet/portable wireless) -1.785∗∗ (0.822)
... × (ZCTA pr:Broadband satellite internet) 0.0220 (1.722)
... × (ZCTA pr:Heat=Bottled tank LP gas) 0.968 (3.289)
... × (ZCTA pr:Heat=Fuel oil kero etc.) -11.83∗∗∗ (2.187)
... × (ZCTA pr:Commute=25 to 29 min) 0.201 (1.085)
... × (ZCTA pr:Commute=45 to 59 min) 0.483 (1.528)
... × (ZCTA pr:Commute=1 vehicle avail) -1.207∗ (0.652)
... × (ZCTA pr:Commute=2 vehicles avail) -2.606∗∗∗ (0.618)
... × (ZCTA pr:Commute=public transp (not taxi)) -0.194 (2.060)
... × (ZCTA pr:Commute=other means) 1.476∗∗ (0.742)
Prop change in C emissions
... × (ZCTA pr:Broadband satellite internet) -3.047 (2.946)
... × (ZCTA pr:Commute=10 min or less) 3.188∗∗∗ (0.982)
... × (ZCTA pr:Commute=60 min or more) -6.705∗∗∗ (2.060)
Prop change in carbon jobs
... × (ZCTA pr:Asian) 10.71 (8.659)
... × (ZCTA pr:Indus=Wholesale trade) 52.94∗∗ (21.66)
... × (ZCTA pr:Indus=Mgt. of companies/enterp) -16.22 (103.7)
... × (ZCTA pr:Indus=Arts/entert./recr.) -19.87 (17.40)
... × (ZCTA pr:Heat=Bottled tank LP gas) -32.60 (30.42)
Prop change in green jobs
... × (ZCTA pr:Indus=Real estate/rent/lease) -114.6∗∗∗ (26.28)
... × (dm:R̂P) -0.626 (0.555)
Share of permits auctioned
... × (ZCTA pr:Asian) -5.520∗∗ (2.716)
... × (ZCTA pr:Native HI/Pacific Islander) 32.33∗∗∗ (11.80)
... × (ZCTA pr:Indus=Information) 1.212 (6.190)
... × (ZCTA pr:Indus=Mgt. of companies/enterp) 4.873 (32.54)
... × (ZCTA pr:Dial-up only) 38.82∗∗∗ (14.94)
... × (ZCTA pr:Heat=Other fuel) -18.93∗∗∗ (6.284)
... × (ZCTA pr:Heat=No fuel used) -28.23∗ (16.59)
... × (ZCTA pr:Commute=45 to 59 min) -3.744 (2.749)
... × (ZCTA pr:Commute=public transp (not taxi)) 7.937∗∗ (3.466)
Share of rev. for equip.
... × (ZCTA pr:Other lang., but English good) -5.201∗∗∗ (1.728)
... × (ZCTA pr:Indus=Mine/quarry/oil/gas extr.) 93.14 (59.57)
... × (ZCTA pr:Indus=Mgt. of companies/enterp) -33.82 (37.00)
... × (ZCTA pr:Indus=Public administration) -14.90∗∗∗ (5.314)
... × (ZCTA pr:Heat=Solar energy) 47.07 (33.12)
Share of rev. to workers (none)
... × (ZCTA pr:Asian) -2.663 (2.369)
... × (ZCTA pr:Indus=Mine/quarry/oil/gas extr.) 47.53 (69.20)
... × (ZCTA pr:Indus=Utilities) 37.19∗ (21.56)
... × (ZCTA pr:Heat=No fuel used) -67.21∗∗∗ (19.46)
1=New regs other pollut.
... × (ZCTA pr:American Indian/Alaska Native) -9.992∗∗ (4.767)
... × (ZCTA pr:Indus=Mine/quarry/oil/gas extr.) 35.62 (31.46)
... × (ZCTA pr:Other computer only) 117.6∗∗∗ (44.12)

Table 16. For comparison, different implications of estimated utility parameters across specifications. Point estimates with interval estimates in parentheses on the following line.

Columns: (1) Homogeneous clogit; (2) Independent mixed logit; (3) Latent Class, demographics (class 1, class 2); (4) Latent Class, ideology (class 1, class 2); (5) For transfer (when shifters = 0)
SBC reduct. (dollars/ton) 47 61 64 35 56 28 49a
MRS(cbn jobs, grn jobs) @ mix now 1.72 1.27 .90 6.81 1.01 2.20 1.56
(-.21, 3.64) (.51, 2.03) (.33, 1.46) (-24.9, 38.53) (.37, 1.66) (-5.42, 9.83) (-.03, 3.15)
MWTP(1=Any program) 73 48 284 -75 253 -107 70
(1.53, 144) (-12, 108) (199, 369) (-134, -17) (187, 320) (-183, -31) (4.17, 135)
MWTP(Prop change in C emissions) -153 -196 -205 -115 -182 -89 -157
(-238, -67) (-264, -127) (-294, -117) (-178, -51) (-254, -110) (-165, -13) (-236, -78)
MWTP(Prop change in carbon jobs) 466 547 626 277 527 154 447
(169, 763) (341, 753) (327, 925) (45, 510) (285, 770) (-104, 412) (174, 720)
MWTP(Prop change in green jobs) 307 488 789 46 589 79 325
(12, 603) (253, 724) (389, 1190) (-165, 257) (295, 884) (-155, 313) (47, 603)
MWTP(Share of permits auctioned) 11 -3.91 -5.64 12 1.47 19 9.32
(-63, 86) (-49, 42) (-84, 72) (-42, 65) (-62, 65) (-41, 79) (-60, 78)
MWTP(Share of rev. for equip.) -67 -36 -7.8 -42 -22 -17 -47
(-149, 16) (-88, 17) (-92, 76) (-104, 20) (-91, 48) (-95, 61) (-124, 30)
MWTP(Share of rev. to workers) 55 64 68 53 58 61 54
(-26, 137) (2.97, 124) (-19, 155) (-6.62, 113) (-12, 129) (-8.38, 131) (-22, 129)
MWTP(1=New regs other pollut.) 60 60 62 35 53 29 54
(25, 96) (36, 85) (25, 99) (9.9, 61) (23, 83) (-2.38, 60) (21, 86)
Notes: Auction-related attributes did not tend to bear statistically significant coefficients, so we do not report their MWTP.
a Depends on factors affecting MU(C emissions).
b Interval estimates for this latent class model pending re-calculation.
c A simple sign change converts MWTP for the status quo option, regardless of the attributes of the proposed program, into the corresponding MWTP for "Any program".
d Heterogeneous across counties with different characteristics, so we do not report any single value.

CHAPTER IV

OREGUNIANS AND THE GUN-CONTROL PARADOX

We study the impacts of newly passed gun legislation on the demand for firearms. We focus on Oregon, where voters passed Measure 114 in a referendum in November of 2022. After weeks of public debate and news coverage, the measure narrowly passed with 50.7 percent of the vote. We study the effects of this result on the demand for guns, as proxied by background checks. We find that background checks surged by 400 percent for six weeks following the vote. After a judge's decision prevented the law from taking immediate effect, background checks fell from their all-time high, but persisted at roughly 100 percent above their original levels, even five months after the original passage of the measure. Additionally, we conduct the first analysis of background check data at the county-daily level, using Oregon State Police administrative data. We find significant heterogeneity in Oregonians' firearm purchasing behavior across counties. The surge in firearm demand resulting from Oregon's Measure 114 exemplifies the gun-control paradox and provides a cautionary tale for policymakers.

4.1 Introduction

The United States leads the developed world in gun deaths and has seen a steady increase in annual deaths over the last two decades. Indeed, 2021 marked an unfortunate high point, with nationwide gun deaths totaling 48,830 (Gramlich, 2023). Mass shootings and other high-profile gun incidents continue to make headlines, while federal changes to gun laws have remained essentially non-existent since the expiration of the Assault Weapons Ban in 2004. While federal policies stagnate, some states have begun to limit access to firearms.
Even Texas is debating an increase in the minimum legal age for purchasing semi-automatic rifles to twenty-one, in response to the highly publicized tragedy at Uvalde in May of 2022. Oregon also recently joined a growing set of states moving to pass gun control measures intended to improve firearm safety. Oregon's recent legislation came about through a public referendum process, appearing on the November 2022 ballot as Measure 114. If passed, this referendum promised background checks for all gun sales, restrictions on high-capacity magazines, and a new permit-to-purchase program. The last provision attracted considerable debate in the final weeks leading up to the election.

On the surface, one might expect the provisions specified in Measure 114 to be effective in reducing long-run gun deaths and accidents. Indeed, both D. Webster, Crifasi, and Vernick (2014) and Williams Jr (2020) find evidence that the repeal of Missouri's permit-to-purchase program led to more gun purchases and additional homicides. Likewise, the passage of a permit-to-purchase restriction in Connecticut was followed by decreases in homicides (Rudolph, Stuart, Vernick, & Webster, 2015). Crifasi, Meyers, Vernick, and Webster (2015) study both Missouri's and Connecticut's permit-to-purchase programs and find evidence that these restrictions serve to reduce suicide risk as well (Rudolph et al., 2015). However, if individuals anticipate the laws, the laws' potential benefits may be attenuated (or even entirely offset).

Anticipatory behavior has been found in many other contexts. Becker, Grossman, and Murphy (1994) find that while smokers cut back on smoking in anticipation of future tax hikes, they also stockpile cigarettes to avoid future taxes. In labor economics, similar anticipation effects are seen in program participation, referred to as an "Ashenfelter dip," and in environmental economics, this has been noted for climate change policy, referred to as a "Green Paradox" (Ashenfelter & Card, 1985; Sinn, 2015). We find evidence that the salient threat of the implementation of Measure 114 led to a roughly 400 percent increase in gun sales.

We contribute to a growing body of literature that suggests there is a "steel paradox": while gun control policy may have benefits in the long term, it may be met with short-term increases in firearm demand. Depetris-Chauvin (2015) finds that President Obama's election was followed by increases in background checks, despite the Obama administration never advancing any federal gun control policies. Levine and McKnight (2017) find that the Sandy Hook mass shooting led to increases in firearm purchases and gun accidents, again amidst no movement in federal legislation (and very little change in state-level laws). So while policies like a permit-to-purchase program and increased background checks may reduce gun sales in the long run, the short-run effects also warrant consideration in gun policy for the United States.

The effects of gun availability on local crime patterns have long been debated (Kleck, 2004). Theoretically, gun ownership could create a deterrent effect against many types of crime. However, the bulk of the literature suggests that more guns lead to more crime, rather than less. Duggan (2001) finds that several proxies for gun ownership are associated with increases in homicide risk, and with increases in the risks of other violent crimes as well. Cook and Ludwig (2006) find that gun ownership, proxied by the fraction of suicides involving guns, corresponds to higher rates of violent crime.
Likewise, Billings (n.d.) finds that regions which exhibit more firearm background checks after national high-profile mass shootings see increases in their local violent crime rates, and increases in property theft (particularly thefts involving firearms). Thus, rather than deterring crimes, guns themselves are often the targets of crime.

We add to this growing literature with evidence that suggests there is a "steel paradox" for firearms policy. In the long run, the passage of a substantive restriction like a permit-to-purchase program may increase public safety by reducing the number of guns in circulation, or by preventing "dangerous" purchases. However, the passage of a permit-to-purchase program will likely spawn anticipation effects. To estimate short-run and medium-run effects, we use the National Instant Criminal Background Check System (NICS) data from the FBI. Using national data at the month-state level and a synthetic difference-in-differences approach (Arkhangelsky, Athey, Hirshberg, Imbens, & Wager, 2021), we find evidence of strong anticipation effects immediately after the passage of Oregon's Measure 114. We also use NICS data at the county-day level from the Oregon State Police, focusing on the temporal patterns of anticipation effects and their geographic heterogeneity. We find that background checks actually began to increase in the weeks before the election and then immediately jumped after the outcome of the vote was known. We also find that background checks subsided after a judge halted the implementation of the new law. Still, these checks remained elevated relative to their earlier baseline level, before the measure was first proposed.

We also examine the heterogeneous impact of Measure 114 across Oregon counties. Roughly equal shares of the increase in firearm sales can be attributed to counties that voted for and against the measure. However, the per-capita effect is roughly fifty percent larger for counties that largely did not support the measure. In the counties most strongly opposed to Measure 114, we observe weeks in which the number of background checks translates to nearly 1% of the local population purchasing a firearm. It is noteworthy that, in absolute magnitudes, the anticipation effects we estimate for Oregon are much larger than the Obama effect, the Sandy Hook effect, or the surge in gun purchases observed early in the COVID-19 pandemic.

4.2 Background

4.2.1 Gun laws in the United States. Conversations surrounding firearms and firearm regulation play an increasingly important role in American politics. The firearm-regulation debate, and research concerning firearms, have both evolved over time in focus and framing (Carlson, 2020; Steidley & Yamane, 2022). However, the central tenets of proponents and opponents of firearm regulation remain relatively unchanged. Those in favor of regulation often cite the large number of injuries and deaths related to firearms each year (the CDC reported over 48,000 firearm-related deaths in 2021, and that firearms were the leading cause of death for children as of 2020). On the other hand, opponents of regulation frequently argue that access to firearms is a constitutional right and that firearms are an important tool for citizens to keep themselves safe. Carlson (2020) highlights that both sides argue that "evidence matters" and that the other side "ignore[s] the facts." Currently, firearms are regulated at both the national and state levels.
Since the National Firearms Act of 1934, expansion of federal regulation has been modest.1 Until the passage of the Bipartisan Safer Communities Act (2022) under the Biden administration, the most recent national-level firearm regulation policies were the Brady Handgun Violence Prevention Act (1993) and the Federal Assault Weapons Ban (1994-2004). In the latter half of the 20th century and the early 21st century, federal-level firearm regulation has often been catalyzed either by high-profile firearm-related incidents or by rising crime rates.

1 Vizzard (2015) provides a good overview of firearm policy in the United States.

In contrast to the few changes in federal regulation, there have been significant changes to state-level firearm laws in the United States over the last 20 years. Some states have enacted more-stringent regulations on firearm ownership, while others have loosened their laws. A thorough discussion of firearm laws is beyond the scope of this paper. However, we briefly discuss three firearm-related laws relevant to this paper: background check, permit-to-purchase, and high-capacity magazine laws.

Background checks: Federal law requires that a background check be performed through the FBI's National Instant Criminal Background Check System (NICS) for all purchases from a federal firearms licensee (FFL), manufacturer, or importer. States have modified this federal mandate in several ways. First, depending on state law, FFLs either communicate with the NICS directly or alert a state-designated point of contact that then communicates with the NICS. Second, some states have adopted laws that expand background check requirements beyond FFL sales. For example, states with Universal Background Check (UBC) laws require background checks for all firearm sales and transfers (e.g., sales between private parties, sales at gun shows, or gifts). Third, states may run their own background checks that access state records not included in a NICS background check. Finally, state laws may allow for the substitution of an ATF-qualified alternate permit that can act in place of a NICS background check. However, an individual must undergo a background check to obtain the permit.

Permit-to-purchase: A permit-to-purchase (P2P) law requires individuals to obtain a permit or license before purchasing a firearm. In contrast to background checks, P2P laws have been implemented only at the state level. To obtain a permit, an individual must apply at a local agency. Typically, the local agency is a law enforcement agency, and the application must be completed in person. Depending on state law, successfully obtaining a permit may involve requirements in addition to a background check (e.g., a gun-safety course). Like UBC laws, P2P laws ensure that all gun owners are subject to a background check. Some states have adopted both UBC laws and P2P laws, meaning that a background check is performed for an individual at the time of the permit application and an additional time at the point of sale.

High-capacity magazine restrictions: The Federal Assault Weapons Ban of 1994 banned the manufacture, sale, and possession of new high-capacity magazines (more than 10 rounds) for civilian use. However, it allowed individuals who already possessed high-capacity magazines before the ban to keep them. The ban had a sunset provision and expired in 2004. In response, several states have imposed their own limits on magazine capacity.
Due to their frequent use in mass shooting events (MSEs), high-capacity magazines have come under renewed scrutiny. Table 17 reports the status of background check, permit-to-purchase, and high-capacity magazine laws in the United States. Twenty-one states have expanded background check laws beyond the federal mandate. Fourteen of these states require a background check at the point of sale for all firearm classes. Nine states require a permit (or license) to purchase all or some classes of firearms. Missouri (2007), Nebraska (2023), and North Carolina (2023) repealed their existing permit-to-purchase laws. Fourteen states have banned or restricted possession of high-capacity magazines.

4.2.2 Gun laws in Oregon. In November 2022, nearly two million Oregon voters narrowly passed Measure 114, an initiative aimed at curbing access to firearms and high-capacity magazines. Measure 114 would require that buyers receive a permit from local law enforcement prior to purchasing a firearm. In contrast to previous background check requirements, Measure 114 would result in permits being denied more easily on the basis of concerns over an individual's psychological state. Additionally, permits are contingent on demonstrated completion of a firearm safety course. Permits under Measure 114 are valid for five years. Beyond the gun-permitting requirements, Measure 114 would make it a criminal offense to possess magazines capable of holding more than ten rounds.

The passage of Measure 114 was controversial throughout Oregon and followed a decade of gradually strengthening gun legislation in the state. Between 2015 and 2021, Oregon expanded its background check data infrastructure, implemented a "red flag" law enabling judges to order the removal of firearms from at-risk individuals upon a petition from a household member or law enforcement agency, extended gun restrictions for those under a restraining order or convicted of stalking, and mandated safe firearm storage practices. With the added provisions of Measure 114, Oregon rose in Everytown's gun law strength rankings from 11th in the country to 9th in January 2023.

Although Oregon referenda typically become effective 30 days after passage, uncertainty over how and when Measure 114 would be implemented spread rapidly after the election. Some sheriffs announced that they would not enforce provisions of the law, and gun advocates challenged the constitutionality of Measure 114 in court. As of May 2023, litigation was ongoing and Measure 114 had not been implemented. In response to this uncertainty, in January 2023 Oregon lawmakers introduced Senate Bill 348, which would implement the primary provisions of Measure 114. Additionally, Senate Bill 348 would raise the minimum age for firearm purchasers. In April 2023, the bill passed out of committee but awaits a vote by the full Senate.

4.3 Data

Measuring gun ownership

Despite the prevalence of firearms in policy debates and in the national dialogue, firearm-related data are scarce or inaccessible. Rather than being a shortsighted blunder, this lack of data is often by design. For instance, despite including thousands of detailed product categories, the Nielsen Consumer Panel censors information about firearm purchases. One approach to filling this data gap has been to conduct national firearm-ownership surveys (e.g., the NRA's National Gun Owners Survey). However, this approach has several disadvantages.
First, large-scale surveys are conducted infrequently, so granular temporal analysis using this type of data is not feasible. Second, surveys are always subject to sample selection bias, and the representativeness of any voluntary survey is often a concern. Finally, regarding selection bias, Urbatsch (2019) finds an increasing level of nonresponse over time for firearm-ownership surveys, especially among Republicans and those who do not trust the government.

In the absence of direct measures, researchers have employed a wide range of firearm proxies, varying from NRA membership to outdoor magazine subscriptions. Firearm suicides divided by total suicides (FSS) and FBI NICS background checks have emerged as the two most popular proxies for firearm ownership. Each proxy presents unique advantages and disadvantages for researchers (Cook & Ludwig, 2019). For instance, FSS ensures that a gun is observed in the community, thus avoiding challenges presented by issues like illegal firearm markets or inherited firearms.2 However, due to the relative rarity of these events, using FSS as a proxy for firearm ownership is challenging in areas with small populations or across time (Cerqueira, Coelho, Fernandes, & Junior, 2018; Kleck, 2004).

2 Because medical records indicate whether a firearm was used in the suicide, there is no ambiguity about the presence of a firearm.

Lang (2013), one of the first researchers to implement the FBI background-check proxy, points out several of its drawbacks. In some states, a buyer who possesses an up-to-date CCW permit may be excused from a background check when purchasing a firearm. On the other hand, some states periodically run "permit-check" background checks on all permit holders, regardless of a permit holder's decision to buy a new firearm, which compromises the reliability of background checks as a proxy for gun purchases. Additionally, while the Brady Act requires background checks for all purchases from federal firearms licensees (FFLs), the federal government does not require background checks for private sales or for sales at gun shows. While some states have enacted laws that expand background checks to private and gun-show sales, many states have not. Finally, there is evidence that compliance with federal law is not universal and that compliance levels may vary systematically by geography (e.g., Castillo-Carniglia et al., 2018; Castillo-Carniglia, Webster, & Wintemute, 2019; Hepburn, Azrael, & Miller, 2022). Despite these drawbacks, Kim and Wilbur (2022) demonstrate that background checks outperform other commonly used proxies.

OSP data

This paper uses FBI background checks to proxy for firearm purchases. Since the passage of the Brady Act in 1993, the federal government has required that all gun purchases from FFLs be conditioned on the results of a background check on the buyer through the FBI's National Instant Criminal Background Check System (NICS). Our strategy of using FBI background checks as a proxy for firearm purchases is common in firearm research. However, our background-check data are uniquely spatially and temporally detailed. To date, research that uses background-check data has relied on the NICS FBI background check data aggregated to the month and state level. All FFLs are required to run a background check through the NICS, but the particular entity that contacts the NICS depends on state law.
In some states, local law enforcement agencies are appointed as intermediaries between sellers and the NICS. In other states, state laws require FFLs and other covered sellers to contact the NICS directly. In Oregon, firearm sellers contact the Oregon State Police (OSP), who then contact the NICS. Consequently, the OSP maintains detailed records of all NICS background checks requested in Oregon. The OSP provided us with daily county-level background check data, significantly improving the level of detail possible in our analysis. The data begin in February of 2018 and are appended every month. Currently, the overall period of observation for the OSP data runs through March of 2023. The data also indicate the status of each background check (i.e., Approved, Canceled, Denied, or Pending).

FBI NICS data

We complement the OSP data with the publicly available FBI NICS data. The FBI aggregates background-check data to the state-month level. As mentioned above, federal law requires gun sales from FFLs to be conditioned on background checks through the FBI's NICS. Numerous states (e.g., California) have implemented laws that expand background checks to other types of purchases (e.g., gun-show sales). These laws are frequently referred to as Universal Background Checks (UBCs). Because background-check laws vary across states, however, total background checks are not always directly comparable across states. Fortunately, the FBI also distinguishes between the different types of background checks. For instance, the FBI data indicate whether the background check was on behalf of an FFL (as opposed to a private sale) and also note the type of firearm for which the background check was being run (e.g., handgun or long gun). The observation period for our FBI data runs from January 2000 through April 2023, and these data include Oregon's background checks. The OSP data offer greater granularity than the publicly available FBI data, but the OSP records do not indicate the type of background check. A comparison of the time series of OSP data with the time series of FBI data for Oregon, shown in Figure 7, suggests that the OSP data likely do not include permit checks. In the FBI data, Oregon agencies also requested very few background checks for private gun sales. Consequently, we use the set of non-private gun-sale background checks from the FBI data for our analysis.

Additional data

We use several additional data sources for our analysis. First, we use population data from the Census for state populations (2000-2022) and Oregon county populations (2018-2022).3 We use these data to calculate changes in per-capita background checks. Given that there is considerable variation in the sizes of state and county populations that may be correlated with the propensity to own firearms, measuring background checks as a rate per capita is an essential adjustment to this measure. We also use data from the New York Times for Oregon's 2022 election results, looking at both the referendum results for Measure 114 and votes in the contemporaneous Oregon governor's race.

3 The different time periods reflect county-level background data from the OSP beginning in February of 2018.
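As a sketch, the per-capita adjustment amounts to a merge and a division. The tidy column layout assumed below is hypothetical, not the native format of the FBI or Census files:

```python
# Sketch of the per-capita adjustment described above, assuming a tidy
# DataFrame `checks` with columns (state, month, background_checks) and
# `pop` with columns (state, year, population). Column names are
# our assumption, not the FBI's or the Census Bureau's.
import pandas as pd

def per_capita(checks: pd.DataFrame, pop: pd.DataFrame) -> pd.DataFrame:
    checks = checks.assign(year=checks["month"].dt.year)
    merged = checks.merge(pop, on=["state", "year"], how="left")
    # background checks per 100 residents
    merged["bgc_per_100"] = 100 * merged["background_checks"] / merged["population"]
    return merged
```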
Thus we cannot simply compare the treated units (Oregon counties) to control units (non-Oregon counties). Our estimation strategy therefore uses two steps to address this issue. Our first step is a proof of concept that Measure 114 has a causal effect on background checks in Oregon. For this step, we use a synthetic difference-in-differences (DiD) approach to compare state-month-level background checks in Oregon to a synthetic Oregon composed of a weighted mix of other states. We model background checks in state $i$ at time $t$ with the following equation:

$$bgc_{it} = \beta_1 \mathbb{1}(\text{Oregon}_i) + \beta_2 \mathbb{1}(\text{Post}_t) + \beta_3 \left[ \mathbb{1}(\text{Oregon}_i) \times \mathbb{1}(\text{Post}_t) \right] + \varepsilon_{it}$$

After establishing a causal effect of Measure 114 on background checks in Oregon, we use the OSP data to examine responses to Measure 114 in more temporal and geographic detail. More specifically, we measure changes in background checks at the daily and weekly levels and examine heterogeneity in these changes across Oregon's thirty-six counties.

Our first approach is to examine the time-series of background checks in Oregon. Given that we have already established a causal relationship, a time-series model is perhaps the most informative method to employ as we examine the behavior of Oregonians. We also use a time-series model to examine the effect of a county's support for Measure 114 on its response behavior.

To complement our time-series model, we implement an event study to measure the average treatment effect across counties in Oregon. We estimate this event study using the following specification:

$$bgc_{it} = \beta_0 + \beta_t \mathbb{1}(\text{Treated}_{it}) + \varepsilon_{it} \quad (4.1)$$

We use the event study to pin down the date when anticipatory firearm sales began in Oregon at a finer temporal level than is permitted by the FBI background-check data. Next, we return to our initial synthetic-control DiD strategy to further examine county-level heterogeneity. However, instead of comparing all of Oregon to a synthetic Oregon, we now compare individual counties to synthetic counties. As mentioned, county-level data are unavailable for other states. Instead, our synthetic counties are composed of a weighted mix of entire other states, excluding Oregon. Finally, we explore heterogeneity in background checks across counties in Oregon. We do so by splitting our OSP data by county-level election results for Measure 114. First, we split the data into two groups: counties where the majority of voters (>50%) supported Measure 114 and counties where the majority did not. We use a time-series model to visualize the difference in behaviors. Second, we split the data into quartiles dictated by each county's voting results for Measure 114; each quartile has nine Oregon counties. Again, we use a time-series model to visualize the difference in behaviors.

4.5 Results

4.5.1 Treatment period. Before we estimate the effects of Measure 114 on background checks, we explore the timing of when Oregonians first began to respond to the bill. An obvious choice is when Measure 114 passed on November 8th, 2022. However, Oregonians were likely aware of the bill ahead of election day and may have responded in anticipation of its passage. To get a sense of Oregonians' awareness of Measure 114 prior to election day, we use Google Trends, querying the search term "Measure 114" from January 2022 through April 2023. Figure 8 displays the results from Google Trends.
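A series like the one in Figure 8 can be pulled directly from Google Trends. Below is a minimal sketch using the gtrendsR package; the exact date strings and the Oregon geo code ("US-OR") are our assumptions about the underlying query, not a documented part of the analysis.

    library(gtrendsR)  # R interface to Google Trends

    # Pull weekly search interest for "Measure 114" in Oregon over the study window
    trends <- gtrends(keyword = "Measure 114",
                      geo     = "US-OR",
                      time    = "2022-01-01 2023-04-30")

    # interest_over_time holds the weekly index; Google normalizes the series
    # so that the peak week equals 100
    head(trends$interest_over_time)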
While we see a spike in searches during the week of the election, we also notice that Google searches for "Measure 114" increase several weeks ahead of election day.

4.5.2 State-level synthetic control difference-in-differences. We use a synthetic control difference-in-differences strategy to establish a causal relationship between Measure 114 and background checks. First, we create a synthetic control for Oregon as a whole, based on other states. We then use a difference-in-differences specification to compare background checks in Oregon to a simulated counterfactual for Oregon had Measure 114 not passed. Figure 9 illustrates the effect of Measure 114 with background checks per capita as the outcome variable.4 The top panel compares Oregon and synthetic Oregon over time, and the bottom panel plots the difference between the two (Oregon minus synthetic Oregon). Figure 9 assumes that treatment begins only as of November 2022. However, the results in Figure 8 suggest that treatment, namely anticipation of Measure 114 and its impact on Oregonians, may begin earlier than November. Given that the data are aggregated at the monthly level for this part of the analysis, we must choose between October and November 2022.

4 Results for the effect of Measure 114 with the raw count of background checks as the outcome variable can be seen in Appendix C.1.

Figure 9 illustrates an enormous surge in background checks in anticipation of Measure 114. The top panels of the figure confirm the findings of previous studies that firearm purchases respond to mass shooting events (MSEs) and other events, like President Obama's 2008 election and the onset of the COVID-19 pandemic. However, it is striking how much larger the response of Oregonians to Measure 114 was than to any other event over the last 20 years. It is also worth noting that the bottom panels of the figures suggest that Oregonians were more responsive to the effects of COVID-19 than residents of similar states. We report additional results in Table 18, where we distinguish background checks by type; the lion's share of the increase in background checks concerned handgun sales. Table 19 reports the cumulative increase in background checks in the five months since the passage of Measure 114. During our observation period (November 2022 through March 2023), Measure 114 induced a cumulative increase of about 150,000 background checks in Oregon. In per-capita terms, this translates to approximately 3.5 background checks for every one hundred Oregonians.

4.5.3 Time-series models. As mentioned, in contrast with the FBI data, the background-check data provided by the Oregon State Police have much higher resolution, being reported at the day and county level. This enables us to observe decisions about purchasing firearms in greater detail. However, we are constrained in that the data do not allow for an obvious counterfactual. Relying on the causal relationship established via our state-level synthetic control results, we first examine the OSP data as a time-series from February 2018 through February 2023, aggregated to the weekly level across all Oregon counties, as shown in Figure 10. Our decision to aggregate to the weekly level is motivated by weekly patterns in the data: there are sharp declines in background checks on Sunday and Monday of each week.
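As a concrete illustration, the daily-to-weekly aggregation takes only a few lines of dplyr. The data frame `osp` and its columns `date` and `checks` are hypothetical stand-ins for the OSP extract, not the actual variable names.

    library(dplyr)
    library(lubridate)

    # Collapse hypothetical daily counts to one statewide weekly series;
    # week_start = 1 starts each week on Monday
    weekly <- osp %>%
      mutate(week = floor_date(date, unit = "week", week_start = 1)) %>%
      group_by(week) %>%
      summarize(checks = sum(checks), .groups = "drop")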
Figure 10 demonstrates a large but brief spike in background checks during the week of 3/15/20, when the COVID-19 pandemic unofficially "began." The other distinct spike in background checks occurs in conjunction with the passage of Measure 114. In comparison to the COVID-19 spike, this surge in background checks is substantially larger in magnitude and lasts considerably longer.

To get a better understanding of responses to Measure 114, we use the same weekly aggregated time-series but focus on the period between January 2022 and the end of the OSP data in February 2023, as seen in Figure 11. Figure 11 shows anticipatory behavior beginning early in October 2022 (indicated in red), followed by a prolonged surge in background checks that continues through the end of 2022 (indicated in blue).

Finally, Figure 12 shows daily background checks around the time of the passage of Measure 114 on November 8th, 2022. To minimize day-of-week variation, we exclude Monday and Sunday from the data.5 However, a weekly spike on Saturday is still evident, followed by fewer background checks on Tuesday. We can see an increase in background checks beginning in early October (highlighted in red) that then surges with the passage of Measure 114. "Black Friday," the shopping event the day after the U.S. Thanksgiving holiday and 17 days after the vote on Measure 114, is the high-water mark for background checks. We omit Thanksgiving Day from the data to improve the clarity of the background-check trend.6 For roughly 2,000 background checks on 12/13/2022 and 12/14/2022 in the OSP data, it is not possible to attribute the check to a specific county. These checks were dropped, which accounts for a sudden dip in background checks (highlighted in yellow). We also see that background checks dipped around Christmas Day (highlighted in green).

5 Sharp dips in background checks occur on Monday and Sunday each week, and excluding them from the figure paints a clearer picture than including them.
6 The sharp decline in background checks on Thanksgiving Day reflects the closure of stores rather than a change in the firearm-purchasing behavior of Oregonians.

4.5.4 Event Study. We also use the daily OSP data by county to perform a set of event studies. Each county is a group in our event studies, but because Measure 114 is a statewide law, each group has the same event date. We aggregate the data for each county to the weekly level and begin our observation period in January 2022. We are not especially interested in the average treatment effect (the average increase in background checks per county); rather, these event studies allow us to identify anticipatory behavior more precisely. We use two event dates for our event studies: the week Measure 114 passed (the week starting 11/07/2022) and the week when we first see anticipatory behavior in our time-series (the week starting 10/10/2022). Comparing Figures 13 and 14 makes it apparent that there is indeed anticipatory behavior. When we designate the event date as the election week, the weeks preceding Measure 114 are consistently statistically negative, instead of the white noise we would expect to see. In addition, estimates are consistently statistically negative starting seven weeks after our reference point, which again contradicts the time-series in Figures 10, 11, and 12.
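Both figures come from specification (4.1). A minimal sketch of one way to estimate it with the fixest package follows; the data frame `weekly_county` and its columns (`bgc`, `county`, and `rel_week`, weeks relative to the chosen event week) are hypothetical, and the county fixed effects are one reasonable implementation of the county groups described above rather than a documented detail of the analysis.

    library(fixest)

    # Weekly event-study coefficients, relative to the week before the event
    es <- feols(bgc ~ i(rel_week, ref = -1) | county, data = weekly_county)

    iplot(es)  # plot the weekly estimates with confidence intervals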
In contrast, when we designate our event date as the week of 10/10/2022, we no longer see consistently statistically significant negative estimates preceding Measure 114. We also observe that background checks do not return to pre-Measure 114 levels in the weeks immediately following the event; instead, the estimates remain positive and statistically significant.

4.5.5 County Heterogeneity. The geographic resolution of the OSP data allows us to analyze heterogeneity in background checks across counties. We combine the FBI and OSP data to perform a synthetic DiD for each county in Oregon. To create our synthetic control for each Oregon county, however, we use non-Oregon state-level (rather than county-level) background checks. Given that the FBI data are available only at the monthly level, our analysis of county treatment effects must be constrained to the monthly level. Table 20 reports the estimated treatment effect for each county, where the outcome variable is background checks per 100,000 county residents. Each row in Table 20 reflects the results of a distinct synthetic control model, where the counterfactual predictions for each Oregon county stem from a different set of weights on a different subset of non-Oregon states. There is significant variation across counties in the magnitudes of these estimates; most estimates are positive and statistically significant, apart from those for Malheur and Wheeler Counties. Counties containing the largest urban centers (e.g., Multnomah County, which contains much of the metropolitan area associated with the city of Portland) saw relatively modest increases in background checks. In contrast, the state's more-rural counties (e.g., Harney County and Douglas County) featured much larger per-capita increases in background checks.

We dig further into heterogeneity in county-level background checks by differentiating counties by their voting results for Measure 114. We do this in two ways. First, we separate counties into just two groups: counties that voted >50% in support of Measure 114 and those that voted >50% in opposition. Figure 15 shows the time-series of background checks for these two groups from February 2018 through February 2023. The dependent variable in the top panel is total background checks, and the dependent variable in the bottom panel is background checks per 100,000 county residents. Figure 15 shows that while total background checks are comparable between the two groups of counties, per-resident background checks were roughly 300% larger for counties that voted in opposition to Measure 114. Figure 16 uses the same specifications as Figure 15 but focuses on just the months of 2022. Focusing on the weeks surrounding Measure 114, the bottom panel of Figure 16 indicates that rates of per-capita background checks were over 50% higher for the counties that voted against the law. Next, instead of grouping counties simply by majority support for or against Measure 114, we bin counties into quartiles of the vote-share distribution, with an equal number of counties in each quartile. Despite the measure's passage overall, most counties did not individually vote in favor of Measure 114. Across counties, the first quartile of support for Measure 114 spans [11%, 20.8%]; the second quartile is (20.8%, 31%]; the third quartile is (31%, 43.5%]; and the fourth quartile is (43.5%, 74%].
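A minimal sketch of these two groupings, assuming a hypothetical data frame `votes` with one row per county and a `yes_share` column holding each county's vote share for Measure 114:

    library(dplyr)

    votes <- votes %>%
      mutate(
        majority_support = yes_share > 0.5,      # two-group split
        support_quartile = ntile(yes_share, 4)   # four bins of nine counties each
      )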
Figure 17 tells an even more striking story about which Oregon counties supported Measure 114 and how they responded to its passage. The top and bottom panels of Figure 17 again summarize total background checks and background checks per county resident, respectively. The small rural counties, with the strongest opposition to Measure 114, are responsible for a small share of total checks, but they saw massive increases in per-capita background checks. Comparing per-capita rates of background checks between quartiles, we see that increases for counties in the first and second quartiles were approximately twice as large as for counties in the fourth quartile. Weekly per-capita rates for counties in the first and second quartiles indicate that there were multiple weeks in which the number of firearm background checks was close to 1% of the population.

4.6 Discussion

We document large increases in Oregon's gun-purchasing background checks in response to the state's restrictive firearm control law, Measure 114. We estimate a cumulative increase of approximately 150,000 firearm sales resulting from Measure 114, equivalent to 3% of the state's population purchasing firearms. We find that the increase in background checks begins weeks ahead of Election Day, suggesting anticipatory behavior even before the outcome of the vote is known. The increase in background checks surges after the passage of Measure 114 and persists above baseline levels for months afterward. The extant literature documents other cases where citizens have responded in anticipation of restrictive firearm laws (e.g., Balakrishna & Wilbur, 2022; Depetris-Chauvin, 2015; Iwama & McDevitt, 2021). The surge in firearm sales following the passage of Oregon's Measure 114 dwarfs estimates from similar studies in both duration and magnitude. Balakrishna and Wilbur (2022) documented increased firearm sales when a Massachusetts law was reinterpreted, leading to a ban on certain types of firearms. Iwama and McDevitt (2021) find that Massachusetts' Gun Violence Reduction Act (MGVR) of 2014 led to an uptick in firearm sales ahead of the law. While both studies find large increases in firearm sales in response to stricter laws, the durations of the increases they document are much shorter than in the case of Oregon's Measure 114. Depetris-Chauvin (2015) documents an increase in firearm sales following President Obama's election in 2008. While Depetris-Chauvin (2015) finds a long-term effect, Figures 10 and 14 suggest that the increase due to Oregon's Measure 114 is considerably larger than the "Obama Effect" for Oregonians.

We offer several possible explanations for Measure 114's unmatched effect on firearm sales. First, Balakrishna and Wilbur (2022) and Iwama and McDevitt (2021) do not need to proxy for firearm sales: because of Massachusetts law, those authors can take advantage of a direct measure of all firearm sales in the state. In contrast, we must resort to background checks as a proxy for firearm sales. Consequently, our measure of firearm sales could also be counting firearm-related background checks that are not new firearm sales (e.g., background checks may be run for "concealed-carry" permits). However, in our state-level analysis, we subset the FBI data to include just non-private firearm sales, and in our OSP-FBI data comparison exercise (see Figure 7), we find that the OSP data do not appear to include permit-related checks.
It is also worth noting that not every background check run in support of a possible firearm sale actually results in a purchase, and some background checks can be shared across multiple purchases by the same individual. An argument thus can be made that our proxy is just as likely to undercount firearm sales as to overcount them.

Another possible explanation for Measure 114's effect is that it was more salient to individuals than the specific gun laws studied in other circumstances. Depetris-Chauvin finds evidence for two competing explanations for increased firearm sales: fear of firearm control and racial prejudice. Importantly, neither of these mechanisms was motivated by an actual firearm control law. Unsurprisingly, the actual passage of a specific firearms-related law is more salient to citizens than the feared election of a perceived anti-firearm politician. In contrast to Depetris-Chauvin (2015), Balakrishna and Wilbur (2022) and Iwama and McDevitt (2021) measure the impacts of actual firearm control laws. However, again, an argument can be made that Measure 114 was more salient to the affected citizens. Citizens did not have an opportunity to personally vote on the firearm laws that Balakrishna and Wilbur and Iwama and McDevitt studied: Balakrishna and Wilbur study an existing law that was unpredictably reinterpreted, and Iwama and McDevitt study a law introduced and passed by the state legislature, not by statewide referendum. Measure 114 is distinct from these laws because it began as a petition that gained enough support to warrant inclusion on Oregon's 2022 election ballot and was then voted on by Oregonians. Measure 114 was in the public eye for an extended period, required citizen participation, and was sufficiently controversial to be only narrowly approved by voters.

The difficulty in implementing Measure 114 after its passage might explain the sustained surge in background checks. The ban studied by Balakrishna and Wilbur went into effect within 24 hours of the state government's announcement, and "[T]he announcement was widely seen as a surprise. It was not preceded by public comment or debate." This element of surprise may account for why Balakrishna and Wilbur found no evidence of anticipatory buying before the announcement and why, following the ban, firearm sales decreased to well below the pre-ban rate. Iwama and McDevitt find that Massachusetts' MGVR led to an uptick in firearm sales ahead of the law; however, the authors find, "As anticipated, [the association between the legislation passing and handgun sales] dissipated quickly over time." We find no such quick dissipation. Since its passage in November 2022, Measure 114 has been stuck in Oregon's court system, challenged as a violation of the Second Amendment. As a result, Measure 114 inspired anticipatory firearm purchases but is, as of this writing, still unable to increase firearm controls in the long run.

As a thought exercise, we estimate how long it would take for Measure 114's implementation to reverse the increase in firearms that it inadvertently induced. To do this, we first calculate the weekly rate of background checks for Oregon before the "Measure 114" effect, excluding the surge in background checks in March 2020 due to the onset of the COVID-19 pandemic. We calculate the mean number of weekly background checks in Oregon from February 2018 through September 2022 to be roughly 6,100 statewide.
We then assume that the cumulative increase in firearm sales resulting from Measure 114 ends in March 2023, which is unlikely given the trend in the data. As indicated in Table 19, the cumulative increase is approximately 147,000 background checks. We then assume a level of reduction in firearm sales in the counterfactual case that Measure 114 were implemented; for the sake of argument, we choose an ambitious 20% reduction. In this scenario, it would take approximately 120 weeks (over two years) to achieve a cumulative reduction in firearms equivalent to the increase induced by Measure 114. Of course, choosing a more realistic reduction in firearm sales lengthens the timeline considerably (this arithmetic is reproduced in the sketch at the end of this section).

After documenting the large surge in background checks in Oregon, we turn to the task of disentangling this increase by examining heterogeneity across Oregon counties. Measure 114 was a highly contested bill, winning less than 51% of the popular vote. Most counties individually opposed the bill, which passed statewide only because of support from the counties containing the large urban centers. We explore the relationship between support for the bill and changes in background checks. In general, smaller, rural, right-leaning counties were responsible for about half of the increase in background checks, despite making up only a small share of the state's population. This dynamic is consistent with work exploring the relationship between political ideology and firearm ownership (e.g., Burton et al., 2021; García-Montoya, Arjona, & Lacombe, 2022; Joslyn, Haider-Markel, Baggs, & Bilbo, 2017; Tatalovich & Haider-Markel, 2022; Warner & Ratcliff, 2021). Joslyn et al. (2017) argue that firearm ownership is becoming one of the strongest correlates of Republican ideology.

Table 20 indicates that the greatest per-capita increases in county-level background checks occur in more-rural counties that opposed Measure 114 (e.g., Harney, Douglas, Union, and Tillamook Counties). However, there are several exceptions. For example, despite broadly supporting Measure 114 relative to other counties, Hood River County saw one of the largest increases in background checks. Lake County voted strongly against the law but saw only a relatively modest increase in background checks. Part of the explanation for these contradictions could be the very small populations of some Oregon counties: using entire states other than Oregon to synthetically control for such small counties could lead to perplexing results. It is also possible that the relationship among firearm ownership, support for firearm control, and responses to imminent control is simply nuanced. While it is reasonably straightforward to determine who owns a firearm, the psychology and social attitudes of firearm owners are complex (Burton et al., 2021; Schleimer et al., 2020). The bottom panels of Figures 15 and 17 demonstrate that background checks increased the most in counties with historically higher shares of background checks per capita. This result suggests that, to some degree, a larger share of the increase in firearm sales can be attributed to people who, or whose neighbors, already owned firearms before Measure 114.

It is unclear what the long-term impact of Measure 114 and the related Senate Bill 348 will be on firearm prevalence and overall welfare in Oregon. Measure 114 undoubtedly induced a substantial increase in firearm sales. Moreover, neither law has been successfully implemented.
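Returning to the thought exercise above, the payback arithmetic is simple enough to verify directly; the 20% reduction is the assumption stated there, not an estimate.

    # Weeks needed for an assumed 20% reduction in weekly sales to offset the
    # cumulative 147,000-check surge, given a roughly 6,100-check weekly baseline
    baseline_weekly     <- 6100
    cumulative_increase <- 147000
    assumed_reduction   <- 0.20

    cumulative_increase / (assumed_reduction * baseline_weekly)  # ~120 weeks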
A robust body of work demonstrates that firearm ownership rates are often positively correlated with suicide and homicide rates (e.g., Braga, Griffiths, Sheppard, & Douglas, 2021; Cook & Donohue, 2017; Depetris-Chauvin, 2015; Lang, 2013; Siegel & Rothman, 2016). Additionally, research finds examples of stricter laws reducing suicide and homicide rates.7 A complementary body of work suggests that laxer laws lead to increased rates of homicide and suicide.8 However, reviews of the extant research find that the impact of firearm control laws is far from conclusive (Santaella-Tenorio, Cerdá, Villaveces, & Galea, 2016; Wintemute, 2015, 2019). The apparently variable impacts of firearm laws could be attributed to the implementation and enforcement of these different laws (Wintemute, 2019). For instance, there is evidence that individuals in states with Universal Background Check (UBC) laws do not always get background checks (e.g., Hepburn et al., 2022; Kravitz-Wirtz et al., 2021; Miller, Hepburn, & Azrael, 2017). Comparing the results of Castillo-Carniglia et al. (2018, 2019) and Kagawa et al. (2023) provides a good example of how non-compliance can obscure measurement of the efficacy of laws. Castillo-Carniglia et al. found that while Colorado, Delaware, Oregon, and Washington all passed UBC laws, background checks increased only in Delaware and Oregon, suggesting non-compliance. Kagawa et al. found that UBC laws in Colorado, Delaware, Oregon, and Washington do not unambiguously reduce suicide rates. Considering the apparent non-compliance in Colorado and Washington, it is difficult to know how to interpret Kagawa et al.'s results. Balakrishna and Wilbur (2022) provide another example of non-compliance: they find that the sale of banned firearms decreased substantially after the ban but did not decrease to zero.

7 E.g., Andrés & Hempstead, 2011; Anestis et al., 2015; Conner & Zhong, 2003; Cook & Ludwig, 2006; Crifasi et al., 2018; G. Edwards, Nesson, Robinson, & Vars, 2018; Irvin, Rhodes, Cheney, & Wiebe, 2014; Klarevas, Conner, & Hemenway, 2019; Knopov et al., 2019; G. Liu & Wiebe, 2019; Luca, Malhotra, & Poliquin, 2017; McCourt et al., 2020; Raissian, 2016; Rudolph et al., 2015; Siegel, Pahn, Xuan, Fleegler, & Hemenway, 2019; J. Smith & Spiegler, 2020; Tashiro, Lane, Blass, Perez, & Sola, 2016; D. W. Webster, McCourt, Crifasi, Booty, & Stuart, 2020.
8 E.g., Alban et al., 2018; Donohue, 2017; Doucette, Crifasi, & Frattaroli, 2019; Fleegler, Lee, Monuteaux, Hemenway, & Mannix, 2013; Reeping et al., 2022; Siegel et al., 2017; M. C. Williams, n.d.

In addition to non-compliance, heterogeneity in state laws may undermine the efficacy of state-level firearm control laws. Evidence shows that firearms tend to flow from states with laxer laws into states with stricter laws (e.g., Andrade et al., 2020; Knight, 2013; Takada et al., 2021). Concerningly, but perhaps unsurprisingly, firearm homicides are more often connected to illegally acquired firearms (e.g., Braga, Brunson, Cook, Turchan, & Wade, 2021; Cook, 2018; Cook, Harris, Ludwig, & Pollack, 2015; Semenza, Stansfield, Steidley, & Mancik, 2023). Firearm violence is a "public health crisis" (Braga, 2022). Despite scattered, minimal increases in firearm controls, public support for firearm control continues to increase (Crifasi et al., 2020). However, an effective way to reduce firearm violence through state-level laws remains elusive.
Relaxing firearm controls leads to increased firearm sales, and tightening firearm controls, at least in the short run, also leads to increased firearm sales. Measure 114 is thus a cautionary tale for policymakers: a firearm control law that is not quickly and effectively implemented and meticulously enforced can lead to an increase in firearms.

4.7 Tables and Figures

Table 17. Firearm laws in 2023. State-level background check (BGC), permit-to-purchase, and high-capacity magazine laws in 2023. "All firearms" denotes handguns, rifles, and shotguns.

| State | Guns requiring a BGC | When the BGC is performed | High-capacity magazines prohibited? | Permit-to-purchase law? |
| Alabama | — | — | No | — |
| Alaska | — | — | No | — |
| Arizona | — | — | No | — |
| Arkansas | — | — | No | — |
| California | All firearms | Point-of-sale | Yes | — |
| Colorado | All firearms | Point-of-sale | Yes | — |
| Connecticut | All firearms | Permit to purchase and point-of-sale | Yes | Yes |
| Delaware | All firearms | Point-of-sale | Yes | — |
| Florida | — | — | No | — |
| Georgia | — | — | No | — |
| Hawaii | All firearms | Permit to purchase | Yes | Yes |
| Idaho | — | — | No | — |
| Illinois | All firearms | Permit to purchase and point-of-sale | Yes | Yes |
| Indiana | — | — | No | — |
| Iowa | — | — | No | — |
| Kansas | — | — | No | — |
| Kentucky | — | — | No | — |
| Louisiana | — | — | No | — |
| Maine | — | — | No | — |
| Maryland | All firearms | Permit to purchase (handguns) and point-of-sale (all guns) | Yes | Yes |
| Massachusetts | All firearms | Permit to purchase | Yes | Yes |
| Michigan | All firearms | Permit to purchase | No | Yes |
| Minnesota | Handguns and semiautomatic military-style assault weapons | Permit to purchase or point-of-sale | No | Optional |
| Mississippi | — | — | No | — |
| Missouri | — | — | No | Repealed |
| Montana | — | — | No | — |
| Nebraska | Handguns only | Permit to purchase | No | Repealed |
| Nevada | All firearms | Point-of-sale | No | — |
| New Hampshire | — | — | No | — |
| New Jersey | All firearms | Permit to purchase and point-of-sale | Yes | Yes |
| New Mexico | All firearms | Point-of-sale | No | — |
| New York | All firearms | Permit to purchase (handguns and semiautomatic rifles) and point-of-sale (all guns) | Yes | Yes |
| North Carolina | — | — | No | Repealed |
| North Dakota | — | — | No | — |
| Ohio | — | — | No | — |
| Oklahoma | — | — | No | — |
| Oregon | All firearms | Permit to purchase and point-of-sale | Yes | Yes |
| Pennsylvania | Handguns only | Point-of-sale | No | — |
| Rhode Island | All firearms | Point-of-sale | Yes | Yes |
| South Carolina | — | — | No | — |
| South Dakota | — | — | No | — |
| Tennessee | — | — | No | — |
| Texas | — | — | No | — |
| Utah | — | — | No | — |
| Vermont | All firearms | Point-of-sale | Yes | — |
| Virginia | All firearms | Point-of-sale | No | — |
| Washington | All firearms | Point-of-sale | Yes | — |
| West Virginia | — | — | No | — |
| Wisconsin | — | — | No | — |
| Wyoming | — | — | No | — |

Table 18. Change in Oregon Background Checks after Measure 114
| Type of check | Estimate | S.E. | CI lower | CI upper | p-value |
Raw count
| Handgun | 21,500 | 1,800 | 18,000 | 25,000 | <0.001 |
| Long gun | 7,700 | 1,600 | 4,600 | 10,800 | <0.001 |
| All non-private gun sales | 28,700 | 2,800 | 23,300 | 34,100 | <0.001 |
Per 100,000 residents
| Handgun | 516 | 25 | 466 | 566 | <0.001 |
| Long gun | 178 | 26 | 127 | 229 | <0.001 |
| Non-private gun sales | 679 | 45 | 591 | 767 | <0.001 |
Notes: Results from synthetic control difference-in-differences, estimated using state-level FBI background checks from January 2000 through March 2023. Treatment spans Nov. 2022 through Feb. 2023.

Table 19. Cumulative change in Oregon Background Checks after Measure 114

| Type of check | Months after election | Estimate | S.E. | CI lower | CI upper | p-value |
Raw count
| Handgun | 0 (Oct. 2022) | 3,500 | 1,700 | 300 | 7,900 | 0.048 |
| | 1 (Nov. 2022) | 45,300 | 3,300 | 39,400 | 52,500 | <0.001 |
| | 2 (Dec. 2022) | 72,800 | 5,400 | 61,400 | 84,600 | <0.001 |
| | 3 (Jan. 2023) | 89,400 | 6,800 | 75,600 | 104,100 | <0.001 |
| | 4 (Feb. 2023) | 97,800 | 8,600 | 79,300 | 115,700 | <0.001 |
| | 5 (Mar. 2023) | 111,100 | 10,200 | 92,500 | 134,900 | <0.001 |
| Long gun | 0 | 300 | 1,600 | -3,100 | 3,000 | 0.628 |
| | 1 | 16,000 | 3,100 | 8,700 | 20,900 | 0.012 |
| | 2 | 24,700 | 5,100 | 12,500 | 33,000 | 0.012 |
| | 3 | 30,400 | 6,500 | 13,800 | 41,300 | 0.014 |
| | 4 | 32,900 | 7,700 | 13,900 | 45,500 | 0.016 |
| | 5 | 38,700 | 9,100 | 14,700 | 54,300 | 0.016 |
| Non-private gun sales | 0 | 3,700 | 2,200 | -1,300 | 8,300 | 0.098 |
| | 1 | 60,900 | 4,300 | 51,300 | 68,800 | <0.001 |
| | 2 | 96,600 | 7,100 | 81,100 | 110,900 | <0.001 |
| | 3 | 118,400 | 9,200 | 99,500 | 138,000 | <0.001 |
| | 4 | 129,200 | 11,200 | 107,100 | 154,000 | <0.001 |
| | 5 | 147,300 | 14,600 | 113,200 | 177,400 | <0.001 |
Per 100,000 residents
| Handgun | 0 | 93 | 21 | 55 | 140 | <0.001 |
| | 1 | 1,079 | 48 | 990 | 1,192 | <0.001 |
| | 2 | 1,731 | 81 | 1,575 | 1,912 | <0.001 |
| | 3 | 2,143 | 101 | 1,941 | 2,376 | <0.001 |
| | 4 | 2,349 | 124 | 2,095 | 2,635 | <0.001 |
| | 5 | 2,673 | 145 | 2,392 | 3,013 | <0.001 |
| Long gun | 0 | -5 | 23 | -51 | 46 | 0.906 |
| | 1 | 362 | 49 | 253 | 468 | <0.001 |
| | 2 | 555 | 83 | 389 | 764 | <0.001 |
| | 3 | 695 | 106 | 488 | 961 | <0.001 |
| | 4 | 749 | 129 | 485 | 1,054 | <0.001 |
| | 5 | 883 | 149 | 580 | 1,206 | <0.001 |
| Non-private gun sales | 0 | 84 | 38 | 14 | 166 | 0.020 |
| | 1 | 1,427 | 81 | 1,259 | 1,610 | <0.001 |
| | 2 | 2,258 | 135 | 1,972 | 2,561 | <0.001 |
| | 3 | 2,792 | 172 | 2,437 | 3,180 | <0.001 |
| | 4 | 3,039 | 213 | 2,591 | 3,515 | <0.001 |
| | 5 | 3,479 | 252 | 2,960 | 4,036 | <0.001 |
Notes: Results from synthetic control difference-in-differences, estimated using state-level FBI background checks from January 2000 through March 2023. Treatment spans Nov. 2022 through Feb. 2023.

Table 20. County-level changes in Oregon Background Checks after Measure 114
| County | Estimate | S.E. | CI lower | CI upper | p-value |
| Baker | 604.11 | 77.85 | 451.54 | 756.69 | <0.001 |
| Benton | 293.05 | 46.07 | 202.76 | 383.34 | <0.001 |
| Clackamas | 492.04 | 53.13 | 387.91 | 596.17 | <0.001 |
| Clatsop | 290.70 | 45.87 | 200.79 | 380.60 | <0.001 |
| Columbia | 644.16 | 53.13 | 540.04 | 748.29 | <0.001 |
| Coos | 542.44 | 49.20 | 446.01 | 638.88 | <0.001 |
| Crook | 567.42 | 56.92 | 455.86 | 678.98 | <0.001 |
| Curry | 164.10 | 55.49 | 55.35 | 272.85 | 0.003 |
| Deschutes | 516.09 | 55.10 | 408.09 | 624.09 | <0.001 |
| Douglas | 872.43 | 57.85 | 759.05 | 985.81 | <0.001 |
| Gilliam | 286.34 | 56.27 | 176.05 | 396.63 | <0.001 |
| Grant | 583.12 | 53.13 | 478.99 | 687.25 | <0.001 |
| Harney | 1194.93 | 74.14 | 1049.61 | 1340.24 | <0.001 |
| Hood River | 901.86 | 47.56 | 808.64 | 995.08 | <0.001 |
| Jackson | 519.09 | 55.84 | 409.65 | 628.53 | <0.001 |
| Jefferson | 238.37 | 53.19 | 134.13 | 342.62 | <0.001 |
| Josephine | 464.66 | 54.97 | 356.91 | 572.40 | <0.001 |
| Klamath | 539.01 | 50.77 | 439.50 | 638.52 | <0.001 |
| Lake | 152.73 | 48.21 | 58.24 | 247.21 | 0.002 |
| Lane | 473.86 | 53.13 | 369.73 | 577.99 | <0.001 |
| Lincoln | 578.81 | 48.00 | 484.73 | 672.88 | <0.001 |
| Linn | 725.99 | 50.73 | 626.57 | 825.41 | <0.001 |
| Malheur | -4.99 | 64.11 | -130.65 | 120.66 | 0.938 |
| Marion | 620.44 | 54.65 | 513.33 | 727.56 | <0.001 |
| Morrow | 174.87 | 52.16 | 72.65 | 277.09 | 0.001 |
| Multnomah | 164.59 | 55.31 | 56.19 | 273.00 | 0.003 |
| Polk | 264.62 | 42.28 | 181.75 | 347.49 | <0.001 |
| Sherman | 581.36 | 55.78 | 472.04 | 690.69 | <0.001 |
| Tillamook | 754.47 | 50.41 | 655.66 | 853.27 | <0.001 |
| Umatilla | 545.84 | 49.04 | 449.71 | 641.96 | <0.001 |
| Union | 759.19 | 56.22 | 649.00 | 869.38 | <0.001 |
| Wallowa | 655.54 | 68.11 | 522.05 | 789.03 | <0.001 |
| Wasco | 397.37 | 57.28 | 285.09 | 509.64 | <0.001 |
| Washington | 368.40 | 53.13 | 264.27 | 472.53 | <0.001 |
| Wheeler | -58.01 | 45.64 | -147.46 | 31.43 | 0.204 |
| Yamhill | 735.33 | 54.49 | 628.53 | 842.12 | <0.001 |
Notes: Results from synthetic control difference-in-differences. Treatment effects are for individual Oregon counties from Feb. 2018 through Feb. 2023. Controls are estimated using state-level FBI background checks.

Figure 7. FBI vs. OSP data. Comparing the OSP data to different subsets of the FBI data suggests that excluding permit-related background checks from the FBI data gives the closest match between datasets. Non-private gun sales is composed of the handgun, long gun, other, and multiple categories in the FBI data. Non-permit excludes the permit and permit-recheck categories from the FBI data.

Figure 8. Google Trends searches for "Measure 114". We use Google Trends to examine Oregonians' awareness of Measure 114. The observation period runs from January 2022 until April 2023. Google Trends observations are at the week level. The search index is normalized so that the week with the highest number of searches in the observation period equals 100.

Figure 9. Synthetic control difference-in-differences for Oregon background checks (per capita). Background-check data are the Non-private gun sales subset of the FBI data. The outcome variable, background checks, is calculated per capita (per 100,000 state residents). The top panel confirms previous findings that mass-shooting events and fears of gun control lead to changes in gun-purchasing behavior (e.g., Obama's election in 2008).

Figure 10. Weekly time-series (2018-2022). Background checks provided by the Oregon State Police from Feb. 2018 through Feb. 2023. Data have been aggregated from the daily level to weekly checks. There are two significant spikes in checks: when the COVID-19 lockdown begins and when Measure 114 passes.

Figure 11. Weekly time-series (2022). Background checks provided by the Oregon State Police from Jan. 2022 through Feb. 2023. Data have been aggregated from the daily level to weekly checks. Anticipatory behavior appears to begin in October (marked in red). Increased sales persist through mid-January (marked in blue).
Figure 12. Daily time-series. Background checks provided by the Oregon State Police from Aug. 2022 through Feb. 2023. Sunday and Monday of each week were dropped to improve interpretability. Anticipatory behavior appears to begin in October (marked in red). Increased sales persist through mid-January (marked in blue). The highest number of background checks is recorded on "Black Friday." Roughly 2,000 background checks recorded on 12/13 and 12/14 were attributed to an "Unknown" county and were dropped. Background checks decreased significantly near and on Christmas.

Figure 13. Event study for OSP background checks (11/08). Background checks aggregated to the week. The event date is the week of Election Day (11/08); the point of reference is 11-08-22. We see consistently statistically significant negative estimates preceding our event date, suggesting that background checks increased before the event date.

Figure 14. Event study for OSP background checks (10/02). Background checks aggregated to the week. The event date is the week of 10/10; the point of reference is 10-02-22. We see consistently statistically significant negative estimates preceding our event date, suggesting that background checks increased before the event date.

Figure 15. Time-series by majority vote share (2018-2022). Background checks aggregated to the week and separated into two groups: counties that voted in the majority for Measure 114 (blue) and counties that voted in the majority against Measure 114 (red). The top panel measures total checks; the bottom panel measures checks per capita (per 100,000 residents).

Figure 16. Time-series by majority vote share (2022). Background checks aggregated to the week and separated into two groups: counties that voted in the majority for Measure 114 (blue) and counties that voted in the majority against Measure 114 (red). The top panel measures total checks; the bottom panel measures checks per capita (per 100,000 residents).

Figure 17. Time-series by quartile vote share (2018-2022). Background checks aggregated to the week and separated into quartiles (nine counties per quartile), determined by each county's vote share for Measure 114: the first quartile of support is [11%, 20.8%], the second is (20.8%, 31%], the third is (31%, 43.5%], and the fourth is (43.5%, 74%]. The top panel measures total checks; the bottom panel measures checks per capita (per 100,000 residents).

Figure 18. Time-series by quartile vote share (2022). Background checks aggregated to the week and separated into quartiles (nine counties per quartile), determined by each county's vote share for Measure 114: the first quartile of support is [11%, 20.8%], the second is (20.8%, 31%], the third is (31%, 43.5%], and the fourth is (43.5%, 74%]. The top panel measures total checks; the bottom panel measures checks per capita (per 100,000 residents).

CHAPTER V

CONCLUSION

In this dissertation, I examine public goods in three contexts: local law enforcement behavior, climate change policy, and firearms control laws. I find that while public goods are designed to improve social welfare, the presence of heterogeneous agents can lead to inequitable and unintended outcomes. In each context, I use a different methodological approach to identify some of the critical information needed to facilitate optimal provision of these public goods.
In Chapter 2, I describe an audit study of local law enforcement agencies designed to test for racial/ethnic and gender bias. I find causal evidence that local law enforcement agencies are substantially less likely to respond to requests from Black and Hispanic citizens who ask for help making a complaint against an officer. Interacting the race/ethnicity and gender of the requester provides even more insight into the disparate treatment individuals receive, on average, from law enforcement: White males are the most likely to receive assistance, and Black and Hispanic males are the least likely. The results from this study may help explain the great disparity in relations between law enforcement and the different types of communities they serve. The results also indicate that accountability, in general, may be an essential area of improvement for many law enforcement agencies in the United States.

Chapter 3 uses a choice experiment to identify individuals' preferences for a state-level carbon cap-and-trade program. Our survey design allows us to identify different dimensions of heterogeneity in preferences. We find that sociodemographic and ideological characteristics influence attitudes toward cap-and-trade programs and their many possible configurations. Among the various characteristics we examine, an individual's attitude toward climate change is the strongest indicator of support for a program. Successful adoption of effective climate change policy may therefore hinge on improving society's understanding of climate change. We also undertake an exercise in benefit-function transfer, exploiting heterogeneity in preferences across our representative sample of Oregonians. We explore the implications of our study for cap-and-trade program preferences at the national level, as a function of heterogeneous local community characteristics across the rest of the United States.

Finally, Chapter 4 examines the impact of Oregon's recently passed firearms control law. We find a record increase in firearms purchased by Oregonians in response to the law. The intent of the law was to reduce firearms in the community, but the combination of forward-looking agents and the halted implementation of the law may have led to the opposite outcome. We christen this unintended effect the "Steel Paradox," after the well-known "Green Paradox" in environmental economics. The "Steel Paradox" is a cautionary tale for policymakers seeking to address the growing firearms crisis in the United States.

APPENDIX A

CHAPTER 2 APPENDIX

A.1 Appendix: Police Department Selection

The selection process for police departments to be included in the study is as follows:

1. From the universe of governments provided by the U.S. Census, I create a list of possible jurisdictions that may have their own police departments. This list excludes state governments, counties, special districts, and places with populations less than 7,500.

2. From the "possible department" list, I randomly draw 1,000 jurisdiction names. The target number of departments is 2,000, but to streamline the process, I select possible department cities in 3 batches of 1,000. For each batch of jurisdictions, roughly 60% have a viable police department with a department, police-chief, or alternate email address.

3. Email addresses are then collected from police department websites in these jurisdictions.
– Governments without local police departments are dropped.
– Police departments without publicly available email addresses are documented and dropped.
– In cases where there are multiple email addresses, prioritization is given first to (1) the general email address for the department, then to (2) the email address specifically for the police chief, and finally to (3) any possible contact (e.g., a community-affairs officer). I document the type of email address ultimately recorded in my database.

4. I repeat Steps 2 and 3 until 2,000 email addresses have been collected. Given that jurisdictions are selected in batches of 1,000, the final number of police department emails collected is 2,135.

Randomization of the department selection process increases the external validity of the study. Requiring that the populations served by these police departments be greater than 7,500 increases the plausibility that the purported email sender exists as a resident of the jurisdiction.

A.1.1 Type of email collected. During the collection of police department email addresses, the "type" of email address publicly available varied from department to department. Here, "type" refers to who is associated with the email address. For example, for Department X the only publicly available email address is for the chief of that department, while for Department Y the only publicly available email address is for the shift commander. In this example, I collect the email address for each department and record that the email address type for Department X is chief and that for Department Y is shift-commander. During the actual collection, departments frequently had multiple email addresses publicly available.1 Where multiple publicly available email addresses existed, I used a consistent priority list to decide which email address to collect. Prioritization is as follows:

1. Top priority is given to a general department email address. This is done to get the most accurate representation of a department's general behavior.
2. In the absence of a general department email address, priority is given to the chief of police.
3. In the absence of a general department email address or a chief email address, priority is given to the next-highest-in-command officer.
4. In the absence of (1), (2), and (3), the email address for the records department is collected.
5. If none of the above email addresses are publicly available, any email address available on the police department website is collected.
6. If there are no email addresses available on the department website, a cursory search is performed to find email contacts on other related websites (e.g., the website of the city in which a department is located or the department's official Facebook page).

1 In other instances, there were no publicly available email addresses associated with a police department of interest.

A.2 Appendix: Identity Construction

Six different "identities" are used in this study:

1. Black Female
2. Black Male
3. White Female
4. White Male
5. Hispanic Female
6. Hispanic Male

Consistent with standard practice in the correspondence-study literature, the identity (gender and race/ethnicity) of the email sender is implied by the name (first name and last name). Ten unique first names and six unique last names are chosen for each identity (60 unique name combinations per identity). Using multiple names for each identity minimizes the importance of any specific name.

– First names are selected from research done by Gaddis (2017a, 2017b).
The top ten most racially identifiable first names (when coupled with last names) are chosen.
– Last names are selected from the 2010 Census. Three criteria are used to select last names:

1. The percent of persons with that name having a specific race/ethnicity (e.g., White)
2. The percent of persons with that name having the other relevant race/ethnicity (e.g., Black or Hispanic)
3. The rank of the name (i.e., how common the last name is in the United States)

Name Search Equation: I selected surnames for this experiment that were both (1) racially distinctive and (2) commonly found. Priority was given to racial distinctiveness because of the importance of race in the design of the experiment. However, I also wanted to avoid the scenario where police departments act differently upon seeing an exceedingly uncommon last name. In other words, I want race, and only race, to be communicated by the name of the identity. The three equations below reflect the priorities I used to select the names. I decided it was unnecessary to difference the Hispanic surnames against the other two groups because of how uncommon it was for Black and White people to have a surname commonly used by Hispanic people.

– For Black names: $(\text{percent race}_{\text{Black}} - \text{percent race}_{\text{White}}) - 0.05 \times \text{rank}_{\text{Black name}}$
– For White names: $(\text{percent race}_{\text{White}} - \text{percent race}_{\text{Black}}) - 0.05 \times \text{rank}_{\text{White name}}$
– For Hispanic names: $\text{percent race}_{\text{Hispanic}} - 0.05 \times \text{rank}_{\text{Hispanic name}}$

The full list of names can be inferred from the following two tables (there are 360 unique name combinations). Six high-profile, recognizable celebrity names were omitted: Denzel Washington, Tyra Banks, DaShawn Jackson, Seth Meyer(s), Katelyn Olson, and Pedro Martinez. These names have widespread recognition, and during the testing process respondents noted that they strongly associate these names with the celebrities of the same name.

Last names
| White | Black | Hispanic |
| Olson | Washington | Hernandez |
| Schmidt | Jefferson | Gonzalez |
| Meyer | Jackson | Rodriguez |
| Snyder | Joseph | Ramirez |
| Hansen | Williams | Martinez |
| Larson | Banks | Lopez |

First names
| White Male | White Female | Black Male | Black Female | Hispanic Male | Hispanic Female |
| Hunter | Katelyn | DaShawn | Tanisha | Alejandro | Mariana |
| Jake | Claire | Tremayne | Lakisha | Pedro | Guadalupe |
| Seth | Laurie | Jamal | Janae | Santiago | Isabella |
| Zachary | Stephanie | DaQuan | Tamika | Luis | Esmeralda |
| Todd | Abigail | DeAndre | Latoya | Esteban | Jimena |
| Matthew | Megan | Tyrone | Tyra | Pablo | Alejandra |
| Logan | Kristen | Keyshawn | Ebony | Rodrigo | Valeria |
| Ryan | Emily | Denzel | Denisha | Felipe | Lucia |
| Dustin | Sarah | Latrell | Taniya | Juan | Florencia |
| Brett | Molly | Jayvon | Heaven | Fernando | Juanita |

As mentioned, the first names were selected from Gaddis (2017a) and Gaddis (2017b). In these studies, Gaddis analyzes the correlation between the average level of the mother's education for a given first name and the accuracy of the perceived race and ethnicity of that name. For instance, Black names associated with lower maternal education levels are more often perceived as Black than Black names associated with higher average maternal education levels. While creating the identities for my study, I record in my database the associated maternal education levels documented by Gaddis.

A.3 Appendix: Email Account Creation

To implement this study, sender email addresses had to be created for each putative identity. Ideally, each of the 360 identities would have a unique email address. During the pre-testing process, respondents suggested that "firstname.lastname.birthyear@mail.com" was the most realistic email address template.
However, due to constraints imposed by popular email servers (e.g., Yahoo), this was not feasible. Instead, a unique account was made for each last name (18 accounts in total). Because of the prevalence of people with the last names chosen for the study, it was often difficult to find an available address containing the specific last name, so I had to make creative decisions to construct plausible, name-relevant addresses. All of the addresses include some version of the relevant last name, for example, "h3rnandez.1973@mail.com", and addresses often include a birth year (e.g., Banksss.1991@mail.com).

A.4 Appendix: Example Email

A.4.1 Email Text. The body of text for the email was developed in consultation with other economists and a legal expert. The primary criterion in creating the right text for these emails was plausibility: I needed to create an email that sounded like a genuine request from a real citizen. Drafts of the email were sent to colleagues and to police departments not selected for the correspondence study to assess the plausibility of the email. The body of the email message template reads as follows:

Police Department Name,

My name is first name and I am interested in filing a complaint against an officer in your department. I am not sure what to do, and would like to request information on how to make a complaint. Can you please send me this information?

sign off
full name

Here, full name includes a first and last name, and sign off is randomly assigned as "Thank you!" or "Sincerely,". The decision to exclude a "Hi" or "Hello" was based on the increased likelihood of the email being filtered as spam during the preliminary testing process mentioned above.2

2 There is a small concern about this email being rejected as implausible. For example, a very small police department might know everyone with whom it has recently interacted and would be able to deduce, with little effort, that the email is fabricated. A small police department might also be more likely not to respond to an email because of staffing limitations. However, because assignment of treatment (see below) is balanced across departments, estimates should remain unbiased. In future research, an alternative email to departments with a more innocuous inquiry (e.g., "Do you have a lost-and-found?") could shed light on the matter.

Figure B1. Example email

Figure B2. Example email

A.5 Appendix: Treatment Assignment

Police departments are randomly assigned the sender identity they will see. The first step of treatment assignment is to balance the number of departments by state each week, so that every state receives roughly the same number of emails each week. Next, race and gender treatments are randomly assigned within state, with race and gender treatment levels balanced within each state. Given that the assignment of emails to departments by week within state is randomized, race and gender assignments are independent of week. Additionally, race and gender are roughly balanced across weeks, also as a result of the randomization of all treatment components. After week, gender, and race are assigned, day of week is randomly assigned. Next, the sign-off for each email is randomly assigned (either "Thank you" or "Sincerely," followed by the sender's name). The actual assignment of email sender first and last names to each department is randomized across all weeks and states; a sketch of this assignment logic follows.
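A minimal sketch of this assignment logic, assuming a hypothetical data frame `depts` with one row per department and a `state` column (the actual assignment code is not reproduced here):

    library(dplyr)

    # The six putative identities (race/ethnicity x gender)
    identities <- expand.grid(race   = c("Black", "White", "Hispanic"),
                              gender = c("Male", "Female"))

    set.seed(114)  # hypothetical seed, for reproducibility only
    assigned <- depts %>%
      group_by(state) %>%
      mutate(week = sample(rep(1:10, length.out = n())),      # weeks balanced within state
             id   = sample(rep(1:6,  length.out = n()))) %>%  # identities balanced within state
      ungroup() %>%
      mutate(race = identities$race[id], gender = identities$gender[id])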
Table B1. Distribution of race/ethnicity and gender identity assignment by state

| State | Black | White | Hispanic | Male | Female | Total responses | Mean response rate |
| AK | 3 | 2 | 2 | 4 | 3 | 5 | 0.714 |
| AL | 5 | 5 | 5 | 8 | 7 | 10 | 0.667 |
| AR | 6 | 4 | 6 | 7 | 9 | 8 | 0.500 |
| AZ | 6 | 5 | 4 | 9 | 6 | 12 | 0.800 |
| CA | 26 | 22 | 19 | 38 | 29 | 46 | 0.687 |
| CO | 7 | 7 | 5 | 10 | 9 | 14 | 0.737 |
| CT | 11 | 9 | 16 | 20 | 16 | 21 | 0.583 |
| DE | 2 | 1 | 1 | 1 | 3 | 2 | 0.500 |
| FL | 17 | 19 | 18 | 30 | 24 | 35 | 0.648 |
| GA | 9 | 10 | 6 | 10 | 15 | 14 | 0.560 |
| IA | 6 | 6 | 4 | 5 | 11 | 6 | 0.375 |
| ID | 3 | 3 | 5 | 6 | 5 | 6 | 0.545 |
| IL | 28 | 26 | 29 | 39 | 44 | 61 | 0.735 |
| IN | 10 | 10 | 11 | 16 | 15 | 19 | 0.613 |
| KS | 5 | 7 | 6 | 11 | 7 | 11 | 0.611 |
| KY | 3 | 5 | 4 | 7 | 5 | 6 | 0.500 |
| LA | 2 | 6 | 4 | 7 | 5 | 6 | 0.500 |
| MA | 16 | 17 | 19 | 28 | 24 | 31 | 0.596 |
| MD | 7 | 5 | 2 | 8 | 6 | 7 | 0.500 |
| ME | 6 | 5 | 6 | 8 | 9 | 10 | 0.588 |
| MI | 14 | 18 | 12 | 22 | 22 | 27 | 0.614 |
| MN | 12 | 8 | 11 | 13 | 18 | 24 | 0.774 |
| MO | 9 | 9 | 10 | 16 | 12 | 16 | 0.571 |
| MS | 6 | 5 | 5 | 7 | 9 | 4 | 0.250 |
| MT | 2 | 1 | 3 | 1 | 5 | 3 | 0.500 |
| NC | 10 | 6 | 11 | 12 | 15 | 17 | 0.630 |
| ND | 3 | 1 | 1 | 2 | 3 | 4 | 0.800 |
| NE | 4 | 6 | 2 | 5 | 7 | 10 | 0.833 |
| NH | 4 | 4 | 6 | 7 | 7 | 9 | 0.643 |
| NJ | 27 | 30 | 25 | 41 | 41 | 45 | 0.549 |
| NM | 5 | 3 | 4 | 7 | 5 | 3 | 0.250 |
| NV | 2 | 1 | 2 | 3 | 2 | 2 | 0.400 |
| NY | 14 | 18 | 19 | 26 | 25 | 31 | 0.608 |
| OH | 25 | 21 | 32 | 37 | 41 | 49 | 0.628 |
| OK | 6 | 7 | 3 | 8 | 8 | 10 | 0.625 |
| OR | 7 | 6 | 10 | 10 | 13 | 17 | 0.739 |
| PA | 25 | 22 | 21 | 33 | 35 | 44 | 0.647 |
| RI | 3 | 3 | 4 | 4 | 6 | 5 | 0.500 |
| SC | 5 | 6 | 4 | 10 | 5 | 7 | 0.467 |
| SD | 1 | 3 | 1 | 3 | 2 | 4 | 0.800 |
| TN | 6 | 6 | 6 | 10 | 8 | 12 | 0.667 |
| TX | 31 | 20 | 31 | 41 | 41 | 60 | 0.732 |
| UT | 6 | 4 | 2 | 8 | 4 | 9 | 0.750 |
| VA | 6 | 3 | 3 | 6 | 6 | 8 | 0.667 |
| VT | 2 | 3 | 3 | 3 | 5 | 6 | 0.750 |
| WA | 11 | 9 | 9 | 13 | 16 | 20 | 0.690 |
| WI | 10 | 10 | 11 | 17 | 14 | 26 | 0.839 |
| WV | 3 | 4 | 4 | 7 | 4 | 4 | 0.364 |
| WY | 1 | 2 | 3 | 4 | 2 | 4 | 0.667 |

Figure B3. Emails sent by week.

Figure B4. Departments by state included in the study.

Table B2. Distribution of race/ethnicity and gender identity assignment by week

| Week | Black | White | Hispanic | Male | Female | Total responses | Mean response rate |
| 1 | 75 | 73 | 68 | 107 | 109 | 139 | 0.644 |
| 2 | 74 | 79 | 63 | 103 | 113 | 141 | 0.653 |
| 3 | 59 | 68 | 77 | 103 | 101 | 132 | 0.647 |
| 4 | 74 | 60 | 76 | 119 | 91 | 123 | 0.586 |
| 5 | 77 | 62 | 78 | 104 | 113 | 144 | 0.664 |
| 6 | 79 | 71 | 68 | 112 | 106 | 131 | 0.601 |

A.6 Appendix: Experiment Implementation

I created eighteen email accounts, one for each last name. The accounts were then linked to Mozilla's Thunderbird mail application to help automate the emailing process.3 In Thunderbird, 20 identities were created for each email address (10 female and 10 male). Although the email address seen by police departments cannot be arbitrarily manipulated, the "name" of the sender can be changed from message to message. For instance, an email can be sent as Claire Olson or Hunter Olson. This helps increase the salience of the putative identity and decrease attention to the less-specific email address itself. Each department receives just one email. Emails are sent over a ten-week period. Spreading the randomized controlled trial (RCT) over 10 weeks insures against the possibility that unique, unanticipated current events could plausibly affect police department behavior (e.g., a high-profile regional or national incident involving the police). In the case of a high-profile policing incident, a weekly roll-out of the emails allows me to detect the possible effect of any such event on police departments' responses to the emails. The timing of the roll-out is randomly selected using the following procedure.

3 I had originally intended to use the mailR package from R, but due to increased security policies at many popular email servers, that option is no longer as user friendly. To use mailR with, for example, Google, one needs to change the Google account settings to allow "less secure apps". However, as of May 31st, this setting can no longer be adjusted. There are possible workarounds, but I decided to adopt an alternate strategy.
Police departments are randomly assigned to one of the ten weeks, stratified in proportion to the total number of departments in each state. Each state's police departments (in my data set) are split into 10 equal groups, and each group is assigned to a week. If, after this initial assignment, a state's number of departments is not divisible by 10, the remaining departments are randomly assigned across the weeks. If a state has fewer than 10 departments in total, its departments are randomly assigned to the ten different weeks (with a maximum of one department per week). Each putative sender identity (i.e., email address) has the same probability of being assigned to any one of the 10 weeks.

During each week, the emails are sent on Monday, Tuesday, and Wednesday. Assignment of weekday is randomized. The decision to spread the emails across different days is largely motivated by ease of implementation: each email must be sent individually, so it proved easier to monitor the emailing process by spreading the emails over a few days (with roughly 70 emails sent each day). All emails are sent at roughly 9 a.m. local time, according to the time zone of the police department in question. However, if on a given day of a given week the same sender address is used for more than one police department (as dictated by the random assignment of race), a five-minute delay is inserted between emails from the same address, independent of first name. This strategy ensures that a single putative email account never has to send more than one email at the exact same time (i.e., at exactly 9 a.m.).

A.7 Appendix: Response Time and Word Count

Not all responses are created equal: the current analysis of the data from this correspondence study designates the outcome variable to be a police department's timely, non-automated response to a request for help. Consequently, the results are a coarse reflection of the average department's willingness to respond to a citizen's request for help in making a complaint about an officer. However, the premise of biased policing concerns both the frequency of interactions between officers and citizens and the conduct during those interactions. Even in the specific context of an email request for a complaint form, detecting and understanding potential differences in department behavior across sender identities is worth exploring. For example, conditional on a department providing any response, do responses differ in their helpfulness and tone across identities and, if so, how? In some instances, scrutiny of verbatim department responses reveals that not all departments are willing to guide the citizen to the officer-complaint forms. In other instances, departments specifically advise against making a formal complaint. Responses also reflect a wide range of sentiment. Some departments include an apology on behalf of the department, while others simply send a phone number with no other information—the assumed implication being that the complainant should call that number for assistance. To begin to answer the question of differential treatment conditional on response, a cursory examination of response heterogeneity is performed.
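Such conditional-on-response comparisons can be run as fixed-effects regressions of response characteristics on identity indicators. A minimal R sketch using the fixest package follows; the data frame and most column names are hypothetical stand-ins (dept_address_state echoes the clustering label reported in Table B3), not the study's actual code.

```r
# Hypothetical sketch of the Table B3-style regressions: week and state
# fixed effects with two-way clustered standard errors.
library(fixest)

responded <- subset(emails, response == 1)  # conditional on any response

m_words <- feols(
  word_count ~ i(identity, ref = "White male") | week + dept_address_state,
  data    = responded,
  cluster = ~ week + dept_address_state
)

m_hours <- feols(
  response_hours ~ i(identity, ref = "White male") | week + dept_address_state,
  data    = responded,
  cluster = ~ week + dept_address_state
)

etable(m_words, m_hours)  # side-by-side table of the two models
```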
Table B3 reports the differentials, between White male identities and the other five identities, in (1) the word count of emails from the departments and (2) the time it takes for a department to respond. Table B3 suggests that, conditional on response, there does not seem to be any evidence of discrimination, at least on these two dimensions. There are a few reasons not to draw strong conclusions from these null results. Most importantly, the analysis is subject to selection bias.

Table B3. Response time and word count of response measured across identities

Dependent Variables:    Word Count    Response Time (hours)
Model:                     (1)               (2)

White × Female            0.2948           -1.653
                         (3.844)           (2.573)
Hispanic × Male           3.909             3.032
                         (2.533)           (3.103)
Hispanic × Female        -3.051            -4.157
                         (3.937)           (3.056)
Black × Male              4.518             8.778
                         (7.728)           (7.215)
Black × Female           -4.038             2.567
                         (2.849)           (3.919)

Fixed-effects
Week                       Yes               Yes
State                      Yes               Yes

Fit statistics
Observations              1,413             1,413
R2                        0.08298           0.04623
Within R2                 0.00291           0.00892

Clustered (week & dept_address_state) standard errors in parentheses.
Signif. codes: ***: 0.01, **: 0.05, *: 0.1

These results are based only on the departments that do respond, which differ from the departments that do not respond. Additionally, word count is a crude measure of helpfulness and sentiment: an email could be helpful, friendly, and to the point, yet have a word count similar to that of an unhelpful and/or unfriendly email. Time of response is a stronger indicator of helpfulness. However, a quick response could come from a department eager to help or from a department reacting defensively to an accusation against one of its officers. A strong understanding of differences in the helpfulness and sentiment of responses would require selection correction and a more rigorous sentiment analysis.

A.8 Appendix: Summary Statistics

Figures B5 and B6 depict the mean response rate by week for the putative identities. The overall mean response rate (66%) is depicted by the dotted black line. Reassuringly, no week in particular seems to drive the results.

Figure B5. Mean response rate by week. The mean response rate across all weeks (1 through 10) and identities (66%) is depicted by the dotted black line.

Figure B6. Mean response rate by week for all putative identities (panels: Black female, Black male, Hispanic female, Hispanic male, White female, White male). The mean response rate across all weeks and identities (66%) is depicted by the dotted black line.

Figure B7. Response rate differentials by local population size. Response rate differentials for Black and Hispanic identities relative to White identities, separated into bins determined by local population size. Population bins: 1 = [0 : 9,626], 2 = [9,632 : 14,188], 3 = [14,196 : 23,658], 4 = [23,663 : 47,408], 5 = [47,422 : 3,982,885].

Comparing Columns 1 and 2 in Table 7, it is apparent that weighting by the population of the police department's jurisdiction exacerbates the differences in response rates (with the exception of Hispanic male response rates).
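The weighted and unweighted comparisons referenced above can be reproduced by re-estimating the response regression with jurisdiction population as weights. A hedged R sketch, again with hypothetical variable names, is:

```r
# Sketch: unweighted vs. population-weighted linear probability models
# of response on identity (hypothetical names; fixest package).
library(fixest)

m_unweighted <- feols(
  response ~ i(identity, ref = "White male") | week + dept_address_state,
  data = emails, cluster = ~ week + dept_address_state
)

m_weighted <- feols(
  response ~ i(identity, ref = "White male") | week + dept_address_state,
  data = emails, weights = ~ population,   # jurisdiction population
  cluster = ~ week + dept_address_state
)

etable(m_unweighted, m_weighted)
```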
One possible explanation for these results is that departments serving larger populations are more likely to discriminate. To test this interpretation, departments are separated into five bins determined by the quintiles of the local populations of the departments included in the study. Model 1 from Table 6 is re-estimated, this time interacting both Hispanic and Black with the five population bins. Figure B7 shows the results of this exercise. Figure B7 reveals no clear pattern in the relationship between population size and response rate. Black identity response rates are lowest for the largest and smallest quintiles, but the pattern does not hold for Hispanic identity response rates. The relationship between a police department's local population and its response rate is therefore ambiguous. Even absent a clear relationship between local population and the propensity to discriminate, heterogeneous response rates across population sizes suggest that studies restricted to a limited number of local governments, or at least to similarly sized populations, may not be able to extend their results to smaller (or larger) populations. Furthermore, the larger differentials between White and non-White response rates when weighting by population (Table 7) crudely indicate that more people are discriminated against than is evident from the level of discrimination at the average police department.4

A variable strongly correlated with local population size is the number of employees working for a police department. Mechanically, as local populations increase, so do the sizes of departments. For the departments included in the present study, there are on average 3 employees for every 1,000 residents. Results from Table 8 suggest that bigger departments are more likely to discriminate against Black identities and White female identities, and less likely to discriminate against Hispanic identities. When the results are weighted by population in Columns 3 and 4 of Table 8, the discrepancies grow, which makes sense given that agency size is correlated with population size.5 To disentangle the effects of agency size and population size on response rates, given the correlation between the two, the same exercise from Figure B7 is repeated, this time with quintiles determined by the number of employees divided by local population size—a per capita measure. Figure B8 displays the results of this exercise.

4The design of this study sends only one email to each department, so a department responding to a White email does not necessarily mean the same department would not respond to a non-White email. Conversely, a department not responding to a non-White email does not necessarily mean the same department would respond to a White email. However, higher levels of discrimination among departments serving larger populations do suggest that a larger share of the population might be discriminated against than is indicated by the unweighted results.

5One curious result in Table 8 emerges from comparing the point estimates for Hispanic females in Columns 2 and 4.
Figure B8. Response rate differentials by local department size. Response rate differentials for Black and Hispanic identities relative to White identities, separated into bins determined by department employees per 1,000 residents. Bins: 1 = [0 : 1.66], 2 = [1.66 : 2.03], 3 = [2.03 : 2.41], 4 = [2.42 : 2.95], 5 = [2.95 : 15.7].

No clear pattern emerges for Hispanic or Black identity response rates. Compared to Figure B7, Figure B8 shows smaller differentials for the first bin. However, in both figures the biggest bin, bin 5, exhibits large differentials for Hispanic and Black response rates. It is conceivable that the low response rates for departments with large populations might be attributed to those departments being overextended and thus less capable of responding to requests for assistance. However, the fact that the departments with the most employees per capita exhibit the highest degree of discrimination runs counter to that argument. It is concerning that the departments most capable, in terms of employees per capita, of responding to requests are also the most likely to discriminate.

APPENDIX B

CHAPTER 3 APPENDIX

B.1 Appendix: Expanded Discussion of the Related Literature

B.1.1 Context. While the United States has yet to adopt a federal carbon cap-and-trade program, regional carbon cap-and-trade programs have been adopted (Schmalensee and Stavins, 2017). Federal inaction may necessitate that regional coalitions and states implement their own policies (Fullerton and Karney, 2018; Peterson and Rose, 2006). Oregon has recently made two attempts to adopt a carbon cap-and-trade program. In June of 2019, Oregon's eleven Republican senators fled the state, preventing the passage of HB-2020, Oregon's carbon "cap-and-trade" bill. In spring of 2020, legislators proposed SB-1530, a cap-and-trade program modified to be more palatable to rural Oregonians. Despite the modifications, SB-1530, like its predecessor, was boycotted by Republicans and defeated. Despite substantial support for a carbon tax amongst economists (e.g., Metcalf, 2009), Oregon has been unable to pass legislation for a carbon cap-and-trade program. Oregon's attempts, and (current) failure, to create such a program highlight the complicated and contentious political, environmental, and social tensions surrounding environmental regulation. Successful passage of a carbon cap-and-trade program in Oregon relies on understanding preferences for a number of key program attributes.

B.1.2 Cap-and-Trade Attributes. Careful consideration went into selecting the attributes of the carbon cap-and-trade programs we featured in our survey.1 We ultimately chose to include nine total attributes for each program, which is clearly not an exhaustive list. Choice experiments typically must avoid making unreasonable demands on the cognitive capacity of respondents, so our featured attributes were selected based on their importance to policy makers as well as to the public. We briefly discuss our motivation for including each feature, with the exception of the emissions reduction achieved (which seems self-explanatory).

1The attributes included are: the level of carbon emission reduction, the program's impact on carbon-industry and green-industry jobs, the financial cost of the program to households, the program's permit allocation system, the use of permit auction revenue, and the program's inclusion of additional regulations. For more details about our survey instrument, see Appendix B.2 and Appendix B.3.

B.1.2.1 Jobs.
Perhaps the biggest political impediment for carbon cap-and-trade programs, as for other environmental regulations, has been their potential adverse effects on the economy—namely, eliminating jobs in certain industries (Coglianese et al., 2013). Paramount to the successful and equitable implementation of regulatory policy is ensuring that workers and communities are protected in the transition to a greener economy (Look, Raimi, Robertson, Higdon, and Propp, 2021). However, despite the political clamor about job losses, there has been relatively little systematic evidence that significant job losses accompany carbon pricing.2 Several studies have found a significant negative impact of environmental regulation on jobs, but in these cases the authors still argue that the benefits of the regulations (e.g., the Clean Air Act) have greatly outweighed the costs.3 A common finding among studies that identify job losses (e.g., Yamazaki, 2017; Hafstead and Williams, 2018) is that the losses occur in regulated sectors and are often accompanied by new jobs in clean industries. However, it should be stressed that it is difficult to assess substitutability among different types of jobs. Even where job losses in some sectors are offset by job gains in other sectors, there can still be significant welfare impacts (Reed Walker, 2013; Sovacool et al., 2021). Whether warranted or not, concerns about displaced workers remain a key obstacle to public acceptance of carbon emission regulations.4

2See Berman and Bui (2001), Deschênes (2012), Gray, Shadbegian, Wang, and Meral (2014), M. Liu, Tan, and Zhang (2021), Morgenstern, Pizer, and Shih (2002), Sheriff, Ferris, and Shadbegian (2019), and Yamazaki (2017).

3See Bartik (2013), Greenstone (2002), and Reed Walker (2013).

4Some authors have questioned the validity of findings that show an impact of regulations on employment (Belova et al., 2015; Hafstead and Williams, 2019). For example, inappropriate modeling assumptions common to most earlier research could lead to biased results (Hafstead, Williams III, and Chen, 2018; Hafstead and Williams, 2018).

B.1.2.2 Costs. Cap-and-trade programs are largely popular because of their promise of efficiency. However, the system's inattention to distributional impacts has been a source of considerable concern.5 Fullerton (2011) points out that there are a number of ways in which distributional inequities can result from cap-and-trade programs. The most obvious is the upward pressure that cap-and-trade programs put on prices for carbon-intensive products. If the burden of these higher prices falls disproportionately on lower-income households, then the policy is considered regressive. This burden could arise because lower-income households consume relatively more carbon-intensive products than higher-income households (e.g., electric cars remain expensive). Likewise, lower-income households may spend a larger portion of their income on carbon-intensive products (e.g., electricity bills). There is substantial evidence that carbon-pricing policies are regressive.6 However, there is also evidence of carbon-pricing policies being progressive.7, 8 In any case, properly addressing the regressivity of a carbon pricing scheme is challenging because the distributional effects vary widely across communities and contexts.9 Conditional on the need for climate policy, it is possible that market mechanisms, while regressive, could be less regressive than other approaches to carbon management (e.g., see Borenstein and Davis, 2016).
5See Buchs et al. (2011); Deryugina et al. (2019); Dorband, Jakob, Kalkuhl, and Steckel (2019); Farber (2012); Feger and Radulescu (2020); Fullerton and Muehlegger (2019); Goulder et al. (2019); W. A. Pizer and Sexton (2019); Shammin and Bullard (2009); Wang, Hubacek, Feng, Wei, and Liang (2016); R. C. Williams, Gordon, Burtraw, Carbone, and Morgenstern (2014); and Williams et al. (2015).

6See Bento (2013); Buchs, Barsely, and Duwe (2013); Burtraw et al. (2009); da Silva Freitas, de Santana Ribeiro, de Souza, and Hewings (2016); C. A. Grainger and Kolstad (2010); Kolstad (2014); Jorgenson, Goettle, Ho, Slesnick, and Wilcoxen (2013); Mathur and Morris (2014); Moz-Christofoletti and Pereda (2021); and Wier, Birr-Pedersen, Jacobsen, and Klok (2005).

7Ohlendorf, Jakob, Minx, Schröder, and Steckel (2021) point out that, even in the case of a progressive policy, higher consumer prices still increase the risk of poverty for low-income households.

8See Beck, Rivers, Wigle, and Yonezawa (2015); Cronin, Fullerton, and Sexton (2019); Dorband et al. (2019); and Devarajan (2013).

9See Rausch, Metcalf, and Reilly (2011); Ohlendorf et al. (2021); Fullerton, Heutel, and Metcalf (2012); Pashardes, Pashourtidou, and Zachariadis (2014); Fischer and Pizer (2019); Dorband et al. (2019); W. Pizer, Sanchirico, and Batz (2010); Jorgenson et al. (2013); W. A. Pizer and Sexton (2019); and Burtraw et al. (2009).

B.1.2.3 Permit Allocation. Permit allocation has proven to be one of the key design features of cap-and-trade programs.10 Emission permits are allocated based on historical output, through a government-coordinated auction, or through a combination of the two (Fischer and Fox, 2007). The benefit of basing permit allocation on historical output is primarily attributed to improved political feasibility and a reduced economic burden on firms. However, this allocation system has drawn considerable criticism (Huber, 2013; Vesterdal and Svendsen, 2004; Mackenzie, Hanley, and Kornienko, 2008). While less politically palatable, permit auctioning is argued to be the welfare-improving option (Cramton and Kerr, 2002; Betz, Seifert, Cramton, and Kerr, 2010; Belfiori, 2017; Farber, 2012).

B.1.2.4 Permit Auction Revenue Use. A government decision to auction emission permits creates a new source of government revenue. This revenue could be recycled to ameliorate the distributional issues discussed above.11 Three particular uses are modeled in this study: subsidizing emission-reducing equipment for firms and households, financing the transition of adversely affected workers and communities, and reducing state taxes. Research has found that revenue recycling and providing the public with tangible public benefits could significantly improve support for carbon pricing (Amdur et al., 2014; Beiser-McGrath and Bernauer, 2019; Raymond, 2019). One caveat to this work is noted by Sallee (2019), who argues that revenue recycling cannot achieve a Pareto improvement.

10Price collars and permit banking are also important considerations in the permit system (Fell, Burtraw, Morgenstern, and Palmer, 2012; Fuss et al., 2018; Metcalf, 2009; Murray, Newell, and Pizer, 2009; Hasegawa and Salant, 2015; Burtraw, Evans, Krupnick, Palmer, and Toth, 2005).

11See Boyce (2018); Buchs, Barsely, and Duwe (2013); Metcalf (2008); Dinan and Rogers (2002); Wang et al. (2016); Bento (2013); Goulder et al. (2019); Farber (2012); Williams et al. (2015); Parry and Williams (2013); Feger and Radulescu (2020); C. A. Grainger and Kolstad (2010); Moz-Christofoletti and Pereda (2021); Aubert and Chiroleu-Assouline (2019); and W. A. Pizer and Sexton (2019).
B.1.2.5 Additional Regulations. Climate change's impacts epitomize the issues of the environmental justice movement.12 However, despite themselves being a climate change mitigation policy, carbon cap-and-trade programs raise environmental justice concerns of their own (Farber, 2012; Fowlie et al., 2020; Kaswan, 2008). The primary concern is that a market-based system will inevitably lead to a disproportionate accumulation of pollution (i.e., "hotspots") in marginalized communities, while the benefits of the program are enjoyed by higher-income and more-empowered communities (Fowlie and Muller, 2019). Research has turned up little evidence to substantiate this concern (C. M. Anderson et al., 2018; Corburn, 2001; Farber, 2012; Fowlie et al., 2012; Fowlie et al., 2020). Hernandez-Cortes and Meng (2020) use a dispersion model and find that, under California's carbon cap-and-trade program, environmental equity has actually improved.13 One hypothesis is that the dirtiest places are also the cheapest to clean up (Currie, Voorheis, and Walker, 2020). On the other hand, C. Grainger and Ruangmas (2018), contrary to Hernandez-Cortes and Meng (2020), use a dispersion model to demonstrate that low-income communities have been exposed to more pollution. Chan and Morrow (2019) find, however, that under the Regional Greenhouse Gas Initiative in the northeastern United States, electricity generation has shifted relatively towards areas with higher marginal damages from SO2 emissions. Some argue that carbon cap-and-trade programs are not the tool to address other pollutants and should be relied upon only to address carbon emissions (e.g., Fowlie et al., 2020; Dulaney, Greenbaum, Hunt, and Manya, 2017). As with other distributional issues, others contend that, despite the challenges, properly designed cap-and-trade programs can mitigate most of the threats they pose to environmental justice (e.g., Farber, 2012; Kaswan, 2008).

12Mohai, Pellow, and Roberts (2009) give an excellent overview of environmental justice, and Banzhaf, Ma, and Timmins (2019) provide another good overview prepared more specifically for economists.

13Shapiro and Walker (2021) similarly find that offsets do not seem to have created hotspots in California in relation to race or class.

B.1.3 Political Obstacles. Notwithstanding the considerable economic challenges of designing an optimal carbon cap-and-trade program, perhaps the biggest impediment to policy adoption in the United States is political (Goulder and Parry, 2008; Berrens et al., 2004; Klenert et al., 2018). Ultimately, a policy's economic implications are irrelevant if the policy is not passed by lawmakers. Indeed, economic analyses of environmental regulations frequently have little bearing on political popularity (Gillingham et al., 2018). In part, this is due to compromised government integrity (Baranzini and Carattini, 2017; Convery and Redmond, 2013).
However, the primary reason carbon cap-and-trade programs and other carbon pricing policies remain largely politically infeasible is the public's lack of support (Levi, Flachsland, and Jakob, 2020). Public support for a policy depends heavily on the narrative surrounding the policy, as well as on its framing (Alló and Loureiro, 2014; Carattini et al., 2018; Bilandzic, Kalch, and Soentgen, 2017; Bushell, Buisson, Workman, and Colley, 2017; Dickinson, Crain, Yalowitz, and Cherry, 2013; Terzi, 2020). A considerable amount of effort on the part of energy-intensive corporations has gone toward fostering opposition to climate-change-related policies through misinformation campaigns (Egan and Mullin, 2017; Farrell, 2016; Westervelt, 2018). Additionally, conservative, regulation-opposing politicians have used public concerns about the economy to establish an anti-environmental-regulation narrative (Coglianese et al., 2013; Egan and Mullin, 2017). The effects of these campaigns can be seen in research revealing that the more people understand the impacts of climate change, the more willing they are to participate in (or bear the costs of) climate-change mitigation measures (Bord, O'Connor, and Fisher, 2000; Videras, Owen, Conover, and Wu, 2012; Spence et al., 2011; Scannell and Gifford, 2013; Bain et al., 2012).

In the United States, political party affiliation is often the primary determinant of attitudes about climate change and mitigation policies. A review of climate change opinion surveys in the United States finds not only that partisanship is the paramount driver of support for policy, but also that the gap between Republicans and Democrats has become even more pronounced in recent years (Egan and Mullin, 2017). This assessment has been corroborated by revealed-preference studies as well. For example, S. Anderson et al. (2019) use voting data from two failed carbon tax bills in Washington State and find that political party affiliation is by far the biggest indicator of support for, or opposition to, the policies. In their study, political ideology accounts for 91% of the variation in vote shares across precincts.

B.2 Appendix: Structure of the Survey

B.2.1 Demographic Questions for Screening. The first section of the survey collects five basic pieces of information about the respondent: place of residence (state), age, gender, race, and income. A sixth question asks the respondent to report their Oregon zip code, and their response is cross-referenced against an exhaustive list of Oregon's residential zip codes. What we call "Demographic Questions for Screening" serves multiple essential purposes. First, in order to measure heterogeneous willingness to pay (WTP) across demographic groups, it is essential that we actually know those demographics. Second, we would like our 1,000 observations to be as representative of Oregon as possible, at least along those five categories. If the respondent answers the six questions (e.g., a White male, age 25-34, with income of $20,000 - $24,999 a year, living in the zip code 97219) and we observe that we already have "enough" respondents fitting that description, then we excuse the respondent from the rest of the survey. Finally, the preliminary demographic section allows for sample selection correction.
Before any information alluding to the content of the survey is revealed (i.e., that the survey asks about WTP for a carbon cap-and-trade program), the respondent must answer those five basic questions. If the respondent chooses "Prefer not to say" for any of the categories, they are excused from the survey. If the respondent answers the demographic questions and we deem them eligible for the survey (see the second point above), we then introduce the topic of the survey. At this point, some respondents will drop out of the survey. However, because we have already gathered the demographic information about these dropouts, we are able to observe what relationship, if any, exists between willingness to take the survey and the aforementioned demographic characteristics. In other words, we are able to observe whether sample selection is occurring and to correct for it.14

B.2.2 Intro Questions. After completing the initial demographic information, the respondent is sent to the "Basic Information and Consent" page. It is at this point that the topic of the survey is revealed. The respondent is asked to consent to taking the survey. A follow-up question asks the respondent to confirm that they will provide "thoughtful and honest answers," as recommended by Johnston et al. (2017). A "No" to either of these questions results in termination of the survey.

B.2.3 Background Information. The next section of the survey provides the respondents with some basic information relevant to carbon cap-and-trade (CAT) programs. We begin with a brief explanation of climate change, carbon emissions, and the relationship between the two. We then explain the motivation for understanding Oregonians' preferences concerning a CAT program: namely, that Oregon legislators have attempted to pass a CAT in Oregon but have failed to do so in large part because the specifics of the program were not agreeable. At this point we briefly explain the mechanics of a CAT. The respondent is then prompted with "How familiar are you with carbon cap and trade programs?" The respondent receives more information about how a CAT works if they answer "I should probably review the basics" or "Not familiar at all." The respondent skips this further explanation if they answer "Quite familiar." After explaining the broad strokes of CAT programs, the survey elucidates which companies would likely be targeted in Oregon. While we cannot know what form policy will take in reality, it is important that the respondents have a similar idea of who is regulated while answering the survey. Because the hypothetical CAT programs our survey asks respondents to make choices about include specific rules about permits,15 we provide a cursory explanation of permits that complements the earlier explanation of carbon cap-and-trade programs. The respondents are able to pursue a more detailed explanation of permits if they feel inclined. The more detailed explanation does not include unique information, but rather provides a fuller treatment that might be more accessible should the cursory explanation leave respondents confused. The next two pages of the survey describe, respectively, potential benefits (global and local) and costs (to households, to businesses, and concerns about equity). The final page of this section asks the respondent which county they live in. Using that response, we are able to scale the values of the choice tasks to the county in which the individual respondent lives.

14See Appendix B.4 for an explanation of the selection correction procedure.

15See Choice Scenarios below.
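As a stylized illustration of the quota-based screening described above, the following R sketch checks an incoming respondent's demographic cell against running quota counts. The cell definitions, target counts, and function names are hypothetical, not the survey's actual Qualtrics logic.

```r
# Hypothetical sketch of quota screening: a respondent is excused
# once their demographic cell already has "enough" completions.
quota_full <- function(respondent, counts, targets) {
  cell <- paste(respondent$gender, respondent$race,
                respondent$age_bracket, respondent$income_bracket,
                sep = "|")
  n_have <- if (is.null(counts[[cell]])) 0 else counts[[cell]]
  n_want <- if (is.null(targets[[cell]])) Inf else targets[[cell]]
  n_have >= n_want  # TRUE -> cell is full: excuse the respondent
}

# Example usage (illustrative cell key):
# counts  <- list("Male|White|25-34|$20,000-$24,999" = 12)
# targets <- list("Male|White|25-34|$20,000-$24,999" = 12)
# quota_full(list(gender = "Male", race = "White",
#                 age_bracket = "25-34",
#                 income_bracket = "$20,000-$24,999"),
#            counts, targets)  # TRUE
```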
B.2.4 Tutorial on Program Attributes. The purpose of the next section of the survey is to prepare respondents for the choice tasks. Each choice task presents a hypothetical program in the form of a table. Due to the complexity of the table, it is appropriate to spend some time explaining all the moving pieces of the choice task.16 The table is organized into five "Feature Groups": Results; Carbon Permits, rules; Auction Revenue Uses; Additional Regulations; and Cost to your household. Each group has between one and three different features, for a total of nine program features for respondents to consider. The nine program features are discussed below. The tutorial walks respondents through each feature of the table. A brief explanation of each feature is provided, as well as a graphic that helps explain it and provides relief from too much text. In addition to the explanation in the tutorial, the respondents are instructed that throughout the choice tasks an abbreviated description of any specific feature can be called up by clicking on that feature in the table. The values used in the tutorial section are the same values used in the first choice task. This is done to minimize mental effort on the respondent's behalf and to help connect the tutorial with the choice task section. Before the feature-by-feature portion of the tutorial begins, respondents are instructed that every program displayed in the choice tasks would begin January 1, 2023. Consequently, the various effects of the program would also begin to accrue on January 1, 2023.

Results

The first feature group, Results, has three attributes: Carbon emission reduction, Carbon industry jobs lost, and Green industry jobs gained. The first feature, carbon emission reduction, refers to the percent reduction of total annual carbon emissions in Oregon by the year 2050 (relative to current emission levels). By clicking on a link, respondents are able to observe current emission levels in Oregon, which provides context for the reduction goal. The next feature is carbon industry jobs lost. The value presented in the table is not a percentage of jobs lost, but rather a count (e.g., 2,000 jobs). This count is based in part on the county that the respondent indicated they lived in earlier in the survey.17 The respondents are instructed to imagine this job loss occurring over the next 30 years, consistent with the carbon emission reduction by 2050. Respondents are able to see how many current18 carbon industry jobs are in their own county of residence. Respondents are also shown the total number of carbon industry jobs in Oregon. A link is included that clarifies what the survey means by "carbon industries." Green industry jobs gained follows the same format as carbon industry jobs lost, with the obvious exception that values are based on green industry jobs, as defined by the BLS.

16Pretesting found that a thorough explanation was often necessary to convey the parameters of the hypothetical situation we were asking respondents to consider.

17We take the current level of carbon industry jobs in a county (e.g., 40,000), scale that number by a randomly generated percent (e.g., 5%), and present the result (e.g., 2,000) as the number of carbon industry jobs lost.

18Calculations use data from 2019 from the Bureau of Labor Statistics.
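The county-level job-loss figures described in footnote 17 can be generated along the following lines; this is a hedged sketch under the assumption of a uniform draw over whole percentage points, and the function name and draw range are hypothetical.

```r
# Hypothetical sketch of the county-scaled job-loss attribute:
# current county carbon jobs times a randomly drawn percentage.
set.seed(1)

jobs_lost_shown <- function(county_carbon_jobs) {
  pct_lost <- sample(seq(0.01, 0.10, by = 0.01), 1)  # illustrative range
  round(county_carbon_jobs * pct_lost)
}

jobs_lost_shown(40000)  # e.g., 40,000 jobs * 5% -> 2,000 jobs lost
```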
Carbon Permits, rules

Carbon Permits, rules has only one feature: Share of permits auctioned. This value indicates the percent of the total cap set by the cap-and-trade program that is auctioned. The respondent is reminded that the rest of the permits are allocated for free. The survey does not specify the allocation process, but examples are provided earlier in the survey for respondents who are curious. Respondents have already been given information about permits earlier in the survey, but it is important that they clearly understand the costs and benefits of this feature. In pretesting, testers were often initially unclear about how this feature worked, so the survey includes a detailed visualization of the permit rule system to aid in the explanation.

Auction Revenue Uses

The next feature group is Auction revenue uses. One of the primary obstacles to the political feasibility of cap-and-trade programs is concern about some groups (e.g., coal miners or low-income households) being economically devastated by shrinking industries or higher costs of goods. A popular way, in theory, to address this heterogeneous burden is to use the funds from the auctioned permits to target groups or sectors in need of assistance. After consideration, this survey asks respondents to consider three possible ways to spend the auction funds: fund new equipment, support communities/workers, and Oregon tax relief. There are a multitude of other ways this money could be spent, but we found these categories to strike a good balance: encompassing most of those ways, while remaining specific enough for respondents to consider clearly. The values presented are in percent terms (summing to 100% across the three uses). Because no specific dollar value is given for the total money raised by the auction, respondents similarly do not see a specific amount of money being allocated to these three uses.

Fund new equipment refers to revenue spent to partially or entirely subsidize the purchase of emission-reducing equipment for firms or households. In practice, the more general label for spending of this nature might be "funding green projects." However, that label is fairly vague, and we believe that being more specific results in more thoughtful responses.

Support communities/workers refers to revenue spent on communities or workers in certain industries that bear a relatively heavier burden of the costs of the cap-and-trade program. A safety plan for those hurt by the program, often referred to as a Just Transition, is an essential component of a politically feasible program. Some examples are given, for instance "communities with a lot of carbon-intensive jobs." However, no specific group is explicitly stated as receiving these funds. Furthermore, the vehicle by which the funds are delivered is also left open-ended.

Oregon tax relief refers to revenue spent on reducing Oregon state taxes. A considerable literature explores the intricacies of using taxes to counteract the costs of carbon pricing. However, in the survey we keep the idea simple to reduce mental effort on the behalf of the respondent. In short, the higher the value this feature takes on, the lower "taxes" will be for Oregonians.

Additional Regulations

Additional Regulations has only one feature: limit other pollutants. This feature is included to address environmental justice concerns about unintended hot spots of carbon-emission co-pollutants. Under a standard cap-and-trade program, it is possible that certain firms will actually increase emissions and thus increase co-pollutants (e.g., NOx or PM2.5).
The limit other pollutants feature takes on two values, "YES" and "NO." Respondents are informed of the potential issue of unintended co-pollutant hot spots under a carbon cap-and-trade program, and this feature indicates whether some form of "additional regulation" would be part of the program. The details of the regulation are left vague; in practice, there are many ways to address the co-pollutant issue (e.g., a pollution standard, trading ratios, or zonal trading). However, we believe this approach is a good compromise: an idea accessible to respondents that still addresses a key issue in carbon cap-and-trade programs.

Cost to your household

The final feature of the table is the cost to households, in Dollars per month. Dollars per month is explained as the "average monthly costs your household would bear if the program is adopted." We use the per-month unit of time because energy bills, as well as consumption budgets, are frequently considered on a monthly scale. For many households, these will be the primary sources of program costs. The respondents are instructed that these costs are unavoidable: a change in energy use, or not working in a carbon-intensive industry, does not absolve the respondent's household from incurring the cost.

B.2.5 Choice Scenarios. Each respondent is asked to perform six choice tasks (Program A through Program F). In each choice task they are presented with a hypothetical program in the form of a table (with the features mentioned above) and asked whether they would prefer the program or no program. Each program is identical in display and content, with the exception of the values that each feature takes on. To reduce mental effort on behalf of the respondent, Program A uses the values displayed in the program attribute tutorial. The choice is presented as a "vote": it is quite conceivable that a cap-and-trade program could be on a future ballot in Oregon, so the survey does its best to replicate that scenario. This is one way in which the survey addresses the cheap-talk issue that is common in contingent valuation methodology. In a preamble to the first program (Program A), the respondents receive a few additional instructions. First, they are told that the labels of the programs (A through F) are arbitrary and have no relation to the quality of the program. The preamble also clearly states that, for any given program, the respondent should consider only that program versus no program at all. In other words, we do not want respondents voting against, for instance, Program C because they prefer Program B or some other program that they have conjured up in their heads. This point is reinforced by making the voting choices for each program "Program X to begin January 1, 2023" and "No program at all." The respondents are instructed that voting against a program is a valid choice and that they should act freely, since the researchers will not learn their identity. In an additional effort to address cheap talk, respondents are instructed that, "In hypothetical choices such as these, people sometimes do not think carefully enough about what they would have to give up to be able to pay the monthly cost of the program. Please consider what your household would have to sacrifice, if the proposed cap-and-trade program were adopted." Finally, respondents are reminded that they are able to review explanations for a program feature by clicking on it in the table.
The survey is designed such that the entire program table is visible on a single screen, including cell-phone screens. We presume that many respondents will use cell phones to take the survey, and in order to consider all the features of a program, the respondent should be able to see them all at the same time. Below each program table the respondent is prompted with, "If Program X were the only program to be put to a vote, I would vote for:" followed by the two aforementioned choices. After the respondent votes on Program A, they are taken to different pages depending on their choice. If the respondent votes for Program A, they are sent to the next choice task. If the respondent votes for no program, they are asked to indicate, from a menu of options, all the reasons they voted against the program:

– Too much emission reduction
– Too little emission reduction
– The economic impacts were too costly
– Did not approve of the auction revenue use
– Too many permits were auctioned
– Too few permits were auctioned
– Did not approve of the Additional Regulations on other pollutants
– The benefits to Oregon or the World do not justify ANY cost
– Program A did not seem believable
– Some other reason

We are able to deduce from the respondents' choices here whether they have valid economic reasons for voting no, or whether their no vote signals scenario rejection.19 From this page, people who voted no on Program A are then asked if they would vote for any carbon cap-and-trade program. More specifically, they are asked to choose from three options: "I did not like Program A, but there might be some type of program, at some cost low enough for me, for which I could possibly vote 'Yes'"; "Carbon cap-and-trade programs are a BAD idea. The government should not interfere with the free market. I would vote 'No' for ANY carbon cap-and-trade program!"; and "Something needs to be done about carbon emissions, but a carbon cap-and-trade program is not the solution." If the respondent indicates that there is a program they could conceivably vote for, they are sent to the Program B choice task. If at this point the respondent indicates they would not vote for any carbon cap-and-trade program, they are skipped through the choice tasks to the next section of the survey; this saves the respondent unnecessary effort. If the respondent indicates that something needs to be done about carbon emissions but the solution is not cap-and-trade, they are sent to a further clarifying page.

19An example of a valid reason would be "The economic impacts were too costly." An example of an invalid reason would be "Too little emission reduction." We consider this an invalid reason because the alternative to Program A is no emission reduction at all, so this answer indicates that the respondent is not operating within the framework of the hypothetical scenario.

If a respondent votes against Program A and indicates that carbon cap-and-trade programs are not the appropriate policy response, they are asked if they would prefer a carbon cap-and-trade program to no policy at all. If at this point they indicate that they would prefer a cap-and-trade program to nothing at all, they are sent to the Program B choice task. If they are staunchly opposed to cap-and-trade programs, they are skipped through the choice tasks. Respondents who end up at this page are asked what policy they would prefer to a cap-and-trade program. This is asked after completing the choice tasks, or after being skipped through the choice tasks, as determined by their answers.
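The routing after a "No" vote on Program A is easier to see condensed as code. The sketch below is a hypothetical summary of the branching just described, with invented stance labels and page names, not the survey's actual implementation.

```r
# Hypothetical condensation of the post-"No" routing for Program A.
route_after_no <- function(stance) {
  switch(stance,
    "might_vote_yes_sometime" = "program_B_choice_task",  # continue tasks
    "oppose_any_cat_program"  = "skip_to_next_section",   # skip all tasks
    "prefer_other_policy"     = "clarifying_page"         # ask follow-up
  )
}

route_after_no("prefer_other_policy")
# "clarifying_page": there, preferring a CAT to no policy at all sends the
# respondent to Program B; staunch opposition skips the remaining tasks.
```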
After completing the choice tasks, respondents are asked to indicate which of the program features were most important to them. Respondents who made it through all six choice tasks but voted no on every program are asked to explain the reasons they did so. They choose from a menu of options:

– I am not convinced that climate change is actually happening
– Even if climate change is actually happening, I don't believe that anything we do (or don't do) will make any real difference
– I don't think Oregon produces enough carbon emissions to matter. Instead, states and countries with more heavy industries should be required to cut back
– I would be hurt by the effect of the program on my livelihood or the cost of things I buy
– I would be hurt by the effect of the program on the cost of transportation
– These choice tasks were just too difficult for me to process
– Some other reason (Please specify)

In this way, we are able to gain a better understanding of which features are important to these respondents, despite their votes providing no affirmative "choices."

B.2.6 Follow-up Questions. In the final section of the survey, the respondent is asked a series of additional socio-demographic questions. Some of these questions would have been quite informative if asked in the initial demographic screening section. For instance, political ideology and political party affiliation are likely determinants of a respondent's propensity to drop out of the survey. However, due to IRB restrictions this is not possible.

Years in Oregon

Two questions are asked concerning respondents' residence in Oregon. First we ask respondents "How many years have you lived in Oregon" (with a sliding scale from 0 - 100), and then we follow up with "How many more years do you expect to keep living in Oregon?" (with a sliding scale from 0 - 100). These questions are included for two separate purposes. The first is to test the hypothesis: Are people who feel more connected to a geographic area (in this case, Oregon) more likely to support long-term environmental policy (in this case, carbon cap-and-trade) in that area? The second purpose of this particular pair of questions is to screen out inattentive or careless respondents and bots. Both can potentially be detected by illogical uses of the sliders, especially when cross-referenced against the age question asked at the beginning of the survey. For instance, bots are often programmed to max out sliders, so answering 100 years to both years lived so far and future years expected in Oregon will raise a flag.20

20A very detailed assessment of invalid responses in an online survey panel was provided by Robert Johnston in a session entitled "Contemporary Guidance for Stated Preference Studies: An Update (roundtable)" at the 2021 Annual Conference of the Society for Benefit-Cost Analysis.

General Socio-demographic Questions

We include a series of additional basic socio-demographic questions. The questions are multiple/single choice, with the inclusion of "Prefer not to say" for each question, unless stated otherwise. We force a response to the questions, but a respondent's choice of "Prefer not to say" does not end the survey for them at this point. The questions, in the order that they appear, are:

– What is your ethnicity? Choices include: "Hispanic," "Non-Hispanic," or "Other."
– Which industries provide a significant amount of your household's income? We provide a menu of standard NAICS categories, with specifically relevant subsectors broken out separately (e.g., Wood Product Manufacturing and Forestry/Logging are separate categories).
– Politically, do you consider yourself to be: Choices range from "Strongly conservative" to "Strongly liberal."
– What political party do you most strongly identify with? Choices include: "Republican," "Democrat," and "Independent."
– What is your highest level of education? Choices include: "Less than high school," "High school graduate," "Some college," "Bachelor's degree," "Master's degree," "Doctoral degree," and "Trade or technical school."
– Which best describes your current employment status? Choices include: "Self-employed or small business owner," "Employee, working full-time," "Employee, working part-time," "Not employed, looking for work," "Not employed, NOT looking for work," "Retired," "Disabled, not able to work," "Full-time student," "Student with part-time work," and "Other."

Attitudes about Climate Change

Respondents are asked a series of questions pertaining to their attitudes about climate change. Recall that climate change was briefly discussed in the background information section of the survey. The survey takes the stance that climate change is real, which can, unfortunately, be read as a political stance by some Americans. It is likely that people willing to participate in the survey will need to have some basic acceptance that climate change is real. However, it is informative to have a more nuanced understanding of respondents' attitudes. Respondents are prompted with, Climate change is real, and is a serious threat to humanity. Response options include: "Strongly agree," "Agree," "Neutral," "Disagree," and "Strongly disagree." If the respondent selects "Strongly disagree," they are skipped past the remainder of the climate change questions. If they choose any other option, respondents are prompted with, Climate change is the result of human activity. Response options include: "Strongly agree," "Agree," "Neutral," "Disagree," and "Strongly disagree." If the respondent selects "Strongly disagree" or "Disagree," they are skipped past the remainder of the climate change questions. Finally, respondents are asked, Who is most responsible for slowing or preventing climate change? (Select all that apply). Response options include: "Local governments," "The Federal government," "Households," "Companies," "People who are wealthier," "People who are responsible for more emissions," "Other," and "Everyone equally."

Generational Questions

Following the climate change attitude portion of the survey, respondents are asked two generational questions. First they are asked, Do you have any of the following? (Check all that apply), with response options of: "Children," "Grandchildren," "Great-grandchildren," "Other descendants (please specify)," "None of the above," and "Don't know / not sure." They are then asked, How many generations back can you trace at least some of your ancestors? (Check the greatest number), with response options ranging from "1 generation (i.e. just your parent(s))" to "7 or more generations." The hypothesis is that people who expect to be remembered by their descendants, much as they remember their own ancestors, might be more concerned about climate change and about what they do now. While the questions may appear to be a non sequitur, testing this hypothesis too directly would potentially lead to social desirability bias.21

Energy Use Questions

Respondents are asked two questions concerning fuel type.
First, respondents are asked, What is the primary fuel you use to heat your dwelling? Response options include: "Natural gas," "Electricity from a conventional power plant," "Electricity from solar panels or wind power," "Electricity (unsure about source)," "Wood or wood pellets," "Passive solar (heated water)," "Other (please specify)," "I don't heat my dwelling," and "Don't know / not sure." Respondents are also asked to indicate What are your most common forms of transportation? (Check as many as apply). Response options include: "Personal vehicle (gasoline or diesel)," "Personal vehicle (hybrid)," "Personal vehicle (electric)," "Public transportation (bus or train)," "Taxi or ride-sharing (e.g., Uber or Lyft)," "Bicycle," "Walking," and "Other (please specify)."

It is likely that a large portion of the average costs to households resulting from a carbon cap-and-trade program will come in the form of higher energy prices. While we have already managed to glean information about respondents' attitudes concerning higher energy costs, it is informative to be able to connect a respondent's attitude about higher costs with their actual energy use. While in the choice tasks we instruct respondents to assume costs are unavoidable, it is likely that higher energy costs would be more salient for those who use natural gas to heat their homes or always commute via a personal vehicle.22 Energy-type use also helps identify which individuals are willing (and able) to make personal efforts to mitigate climate change. Identifying this preference is informative in our WTP analysis.

21Respondents answer how they would like to be seen rather than how they actually feel. While an online survey format helps mitigate this, a loaded question like, "Do you care about future generations?" could lead to untruthful responses.

22It should also be noted that higher energy costs are more salient for low-income households, because a larger portion of their income goes towards energy use. Recall, income level is one of the demographic pieces of information that is required in order to be eligible for the survey.

Feedback Questions

We give respondents three separate prompts to provide feedback on the survey. The feedback, in addition to informing future studies, also aids in our data collection. The first feedback prompt is concerned with research team bias. There are two versions (version A and version B) of the prompt page, randomly assigned to individual respondents. Both versions read, It seemed like the research team wanted me to:. The difference is that the order of the menu of response choices is inverted for version B. The response choices, in the order that they appear in version A, are: "definitely vote AGAINST a carbon cap-and-trade program," "probably vote AGAINST a carbon cap-and-trade program," "vote according to my own beliefs," "probably vote FOR a carbon cap-and-trade program," and "definitely vote FOR a carbon cap-and-trade program." To reiterate, version B has the same choices in inverted order. The randomized inversion of the response choices tests for response-order bias. An ideal survey would result in all respondents choosing "vote according to my own beliefs." However, a non-systematic, even distribution across "probably vote AGAINST a carbon cap-and-trade program," "vote according to my own beliefs," and "probably vote FOR a carbon cap-and-trade program" would also be acceptable. It is possible that, due to the politically charged nature of climate change and related policies, simply asking about attitudes toward a carbon cap-and-trade program will lead the respondent to assume the research team is biased in favor of a program.
The second feedback prompt asks the respondent to rate the survey, using a five-star rating system, on four categories: "Understandable," "Relevant to you," "Interesting," and "Informative." In addition to providing useful feedback for future studies, this prompt serves two additional purposes. First, low scores, especially for "Understandable," give us an idea about the quality of a particular respondent's submission. If a respondent gives the survey only 1 out of 5 stars for "Understandable," then we know that the choice tasks were likely not performed with full comprehension. Second, this star system provides another chance to draw attention to respondents (or bots) who are speeding through the survey. Finally, respondents are given an open-ended prompt for feedback. The primary goal of this prompt is to detect respondents who are speeding through the survey, or bots. Bots in particular will often enter gibberish or nonsensical answers. Of course, it is always encouraging to read thoughtful comments from respondents.

B.3 Appendix: One Instance of the Survey (Screenshots)

B.3.1 State of residence. The screening questions on the first 8 pages of this survey of Qualtrics panelists are completed before the page that contains the Consent to Participate in the actual survey. The Consent page is where the respondent first learns about the topic of the survey. This ordering is crucial to any ability to model systematic response/non-response (attrition from the random sample of Qualtrics panelists). Qualtrics prefers that respondents who are ineligible because quotas have already been met be apprised of this fact before they get too far into a survey for which their answers are not needed. We require that potential respondents, still naïve about the topic of the survey, at least be willing to supply their state of residence (always Oregon for this study), their age, gender, race, and household income bracket. These are the quota criteria for inclusion. However, we also require that they enter their ZIP code at the end of the screening section. Our overall target sample (1,000) is not large enough to warrant quotas by ZIP code within Oregon, but we need this information to permit us to link every one of these screened and eligible respondents to external auxiliary data that can be geocoded to ZIP codes. For this climate-related study, these external data sources include Census ZCTA data, NOAA climate division data, 2020 Presidential election data by county, and state legislative district voting data for Oregon for 2016, along with spatial data on the recent history of wildfires and drought levels. These neighborhood/county characteristics can be used as proxies for the salience of climate change problems to people who live in the same area as the eligible respondent. The partisan reactions to Oregon's actual proposed carbon cap-and-trade programs in recent years suggest that political ideologies in the respondent's neighborhood may make programs to reduce carbon emissions either very attractive or readily dismissed. For the latter group, we expect a lower likelihood of continuing with the survey to completion after the topic of the survey is revealed. Qualtrics can target Oregon in issuing survey invitations.
However, to prevent non-Oregonians from pretending to be from Oregon, we first check whether respondents choose "Oregon" when given an opportunity to choose their state. (If they don't choose Oregon, they are given one opportunity to choose again, and are terminated if they do not choose "Oregon" on the second try. The follow-up double-check page is not shown here.)

B.3.2 Age.
B.3.3 Gender.
B.3.4 Race.
B.3.5 Household income.
B.3.6 ZIP code. We validate the respondent's ZIP code entry by checking it against a list of "standard" Oregon ZIP codes—namely, ZIP codes that are not for post-office boxes and are not "unique" (typically for government agencies or other large institutions that have their own internal mail systems). Unique ZIP codes are more likely to be workplaces.
B.3.7 Check ZIP code.
B.3.8 Confirm standard ZIP code.
B.3.9 Consent to Participate. This page is where respondents first learn the topic of the survey. We expect that, upon learning that the survey will be about carbon cap-and-trade programs to deal with climate change, some share of respondents will lose interest, while others will find the topic especially salient. The questions asked prior to this page allow us to assemble ZIP-code-level, county-level, or other geocoded variables that can help us identify the respondent's community and how its characteristics may differ systematically from the neighborhoods of other potential respondents who either do, or do not, continue to complete the entire survey.
B.3.10 Oath.
B.3.11 Introduction to climate change.
B.3.12 Introduction to carbon emissions.
B.3.13 Introduction to controversy of cap-and-trade in Oregon.
B.3.14 Introduction to cap-and-trade programs.
B.3.15 Introduction to program coverage.
B.3.16 Introduction to program coverage, continued.
B.3.17 Introduction to permit auctions.
B.3.18 Introduction to grandfathering process.
B.3.19 Introduction to revenue distribution.
B.3.20 Introduction to benefits of carbon emissions reductions.
B.3.21 Introduction to possible distributional concerns.
B.3.22 Oregon county of residence. The respondent selects an Oregon county from the drop-down list provided.
B.3.23 Confirm county of residence. If this confirmation page shows that the respondent checked the wrong county, they get to choose again, after which we have to assume that they have the right county. This page runs some JavaScript in the background to associate a variety of county variables with the respondent's own county, and these variables are quoted later in the survey as the effects of each proposed cap-and-trade program are described in quantitative terms.
B.3.24 Introduce program summary tables.
B.3.25 Explain Feature Group 1.
B.3.26 Explain Feature Group 2.
B.3.27 Explain Feature Group 3.
B.3.28 Explain Feature Group 4.
B.3.29 Explain Feature Group 5.
B.3.30 Program A choice.
B.3.31 Follow-up to "No" vote: Reasons for vote.
B.3.32 Follow-up to "No" vote: Will you always vote no?
B.3.33 Program B choice.
B.3.34 Program C choice.
B.3.35 Program D choice.
B.3.36 Program E choice.
B.3.37 Program F choice.
B.3.38 Follow-up to "No" vote. For any of Program A through Program F, if the respondent votes "No" on that program, the survey follows up with the list shown on this page of potential reasons for voting against these programs.
We opted to show this follow-up question only in association with Program F, to avoid repeating the identical screenshot every time.

B.3.39 Preferences for policies other than cap-and-trade.
B.3.40 Most important attributes.
B.3.41 Attachment to Oregon.
B.3.42 Ethnicity.
B.3.43 Sectors providing household income.
B.3.44 Political ideology.
B.3.45 Political party identification.
B.3.46 Educational Attainment.
B.3.47 Employment status.
B.3.48 Attitude: climate change real and serious.
B.3.49 Attitude: climate change human-caused.
B.3.50 Attitude: responsibility to fix climate.
B.3.51 Inter-generational concern: descendants.
B.3.52 Inter-generational concern: ancestors.
B.3.53 Primary heating fuel used.
B.3.54 Usual forms of transportation.
B.3.55 Perception of researcher bias. This question about researcher bias was randomly presented with one of two orderings of the answers. Half of respondents saw a list that put "definitely vote FOR a carbon cap-and-trade program" at the top. The other half saw the same list in reverse order. This is intended to minimize any systematic order effects if the order is not included as an explanatory variable.
B.3.56 Attitude: Respondent's experience with survey.
B.3.57 Feedback text message.

B.4 Appendix: Selection Model and Selection Correction

For this paper, we correct for sample selection in terms of the willingness of individuals to complete a carbon cap-and-trade survey. However, another potential type of sample selection remains unavoidable: people who are willing to take an (internet) survey upon an invitation from Qualtrics may not be representative of the general population. While this other form of sample selection is perhaps of lesser concern, it exists nonetheless for every consumer research panel and is difficult to address.

B.4.1 Variable Selection.

B.4.1.1 Inventory of candidate explanatory variables. To correct for sample selection, we use a probit model to estimate an invited participant's propensity to complete the survey. We have relatively little information for each invited participant beyond their age, sex, race, and income, but we also elicited the ZIP code where they live (before these potential respondents learned the topic of our survey). Thus, for a large share of these invited participants, we can merge in ZIP-code-level neighborhood characteristics as potential determinants of their interest in participating in a survey about climate policy.

When creating the ZIP-code-level profiles, we cast a wide net to find candidate explanatory variables. The external data sets we employed to create profiles of each respondent's neighborhood include: the American Community Survey 5-year ZCTA-level data (2014-2019), the MIT Election Data and Science Lab's County Presidential Election Returns (2020), Oregon State Office Returns for 2016 state legislative district votes (by major party), drought data from the National Drought Mitigation Center, and wildfire data from the Wildland Fire Decision Support System. We create population-proportion sociodemographics (e.g., the share of the population in each ZIP code with access to the internet) as well as a number of climate-related statistics for each ZIP code (e.g., Drought Monitor rating, or distance to the nearest wildfire in 2020).
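To make the linkage step concrete, the following is a minimal R sketch of how screening records might be joined to ZCTA-level profiles by ZIP code. All file and column names here are hypothetical placeholders, not the study's actual variable names.

```r
## Minimal sketch: link each invited panelist to neighborhood characteristics
## via the ZIP code supplied during screening. File/column names are hypothetical.
library(dplyr)

invitees      <- read.csv("invitees.csv")       # one row per invitee: id, zip, age, gender, race, income
zcta_profiles <- read.csv("zcta_profiles.csv")  # one row per ZCTA: zip, pr_internet, pr_rural, dist_wildfire_2020, ...

# Keep every invitee, even when no ZCTA profile matches their ZIP;
# unmatched invitees simply carry NA neighborhood variables.
linked <- left_join(invitees, zcta_profiles, by = "zip")

table(is.na(linked$pr_internet))  # how many invitees lack a matched ZCTA profile
```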
The ZIP-code-level profiles are constructed to capture political ideologies, the salience of climate change, and other sociodemographics that we hypothesize may affect each potential respondent's propensity to continue with the survey, to completion, after learning its topic.

It is not feasible to include simultaneously the full set of variables (self-reported screening sociodemographic variables from the survey, all the available ZIP-code variables, and their interactions) in a single probit model. This is far too large a list of variables, given that we have only 1,630 invited participants and 1,050 completed survey questionnaires. Thus we resort to LASSO methods, in R, to pare down the list of potentially useful explanatory variables. We use all of our available data from panelist profiles (gender, age, race, and income bracket), as well as all of our assembled geocoded data linked to each invited participant by their ZIP code (where the ZIP code is available), as potential regressors for our selection model.23

If a respondent drops out of the survey at any point after the consent page, they are considered a non-respondent and receive the value 0 for the indicator variable Got to end of survey. In addition to being zero in cases of attrition, Got to end of survey also takes the value 0 if the survey response is deemed to be of insufficient quality (e.g., where the respondent gave nonsense answers to questions that required a typed response, or where the total time to complete the survey was too short to have permitted anyone to read the questions with sufficient attention to provide informed responses, i.e., less than 7.5 minutes). Complete, the dependent variable for the LASSO equation, is thus a binary indicator that takes the value 0 if the respondent does not complete the survey or completes it in less than 7.5 minutes. Our response rate, complete surveys / total surveys, is 64%.25

23 We conduct our LASSO estimation using the "glmnet" package as implemented in R.24

25 The average completion time in the soft launch of the survey was 15 minutes. Anyone who completed the survey in less than half of the average completion time (i.e., 7.5 minutes) was deemed a "speeder," and their response was considered invalid on the grounds that they were not providing thoughtful responses.
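As a concrete illustration of this screening step, here is a minimal glmnet sketch with hypothetical object names; the study's actual code, candidate-variable list, and tuning details are not reproduced in this appendix.

```r
## Minimal sketch of LASSO variable screening with glmnet (binomial family).
## `screen_vars` (candidate regressors) and `complete` (0/1 outcome) are hypothetical.
library(glmnet)

x <- model.matrix(~ .^2, data = screen_vars)[, -1]  # main effects plus pairwise interactions
y <- complete                                        # 1 = finished survey in >= 7.5 minutes

cv_fit <- cv.glmnet(x, y, family = "binomial", alpha = 1)  # alpha = 1 => LASSO

# Regressors retained at lambda.min (the "lambda = min" rule noted in Table D2)
b <- coef(cv_fit, s = "lambda.min")
retained <- rownames(b)[as.vector(b != 0)]
head(retained)
```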
Table D1 shows the descriptive statistics for the available explanatory variables that LASSO methods determine to be relevant in predicting people's propensities to respond to the survey. The variables shown were retained by LASSO estimation either individually or as part of a pairwise interaction. The first row, Outcome: 1=Got to end of survey, is the response rate for this survey. This relatively high response rate, about 64%, may reflect Oregonians' strong feelings (both positive and negative) toward climate policy. Rows 1=Gender:Not male through 1=Used a mobile device represent information collected by the survey for all individuals who make it through the screening process. The variables in the subsequent rows describe the respondent's ZIP code (Census ZCTA), county, or other geographic proximity.

Table D1. Descriptive statistics: response/non-response models
Individual explanatory variables used in response/non-response models (retained by LASSO estimation either individually or as part of a pairwise interaction term)

Variable                            mean        sd
Outcome: 1=Got to end of survey     0.644       0.479
1=Gender:Not male                   0.547       0.498
1=Gender:Male                       0.453       0.498
1=Own age:18 to 24                  0.129       0.336
1=Own age:25 to 34                  0.247       0.431
1=Own age:35 to 44                  0.174       0.379
1=Own age:45 to 54                  0.112       0.315
1=Own age:55 to 64                  0.132       0.338
1=Own age:65 and up                 0.206       0.405
1=Own age:75 and up                 0.058       0.234
1=Own hhld inc:100-125K             0.121       0.327
1=Own hhld inc:150-175K             0.029       0.167
1=Own hhld inc:175-200K             0.020       0.139
1=Own hhld inc:220K up              0.040       0.197
1=Own hhld inc:30-50K               0.127       0.333
1=Own hhld inc:50-75K               0.204       0.403
1=Own hhld inc:75-100K              0.133       0.339
1=Own hhld inc:lt 20K               0.142       0.349
1=Race:Black                        0.041       0.199
1=Race:White                        0.834       0.372
1=Started survey on Fri             0.119       0.324
1=Started survey on Mon             0.187       0.390
1=Started survey on Sat             0.183       0.387
1=Started survey on Sun             0.177       0.382
1=Started survey on Thu             0.102       0.303
1=Started survey on Wed             0.121       0.327
1=Used a mobile device              0.678       0.467
CDC SVI cnty-Hsg type, transp.      0.747       0.213
County pr: Dial-up internet         0.003       0.002
County pr: Internet w/o subsc.      0.025       0.012
County pr: No internet              0.065       0.033
County pr: Other internet           0.010       0.011
County pr: Satellite internet       0.047       0.026
Dist. nearest wildfire 2010-19      93608.087   51393.059
Dist. nearest wildfire 2020         0.268       0.179
Sq. miles; nearest wildfire 2020    71.629      118.484
ZCTA pr:Asian                       4.554       4.938
ZCTA pr:Black                       1.978       2.547
ZCTA pr:Other race                  3.014       2.855
ZCTA pr:Two or more races           4.913       1.813
ZCTA pr:Below poverty line          0.127       0.093
ZCTA pr:Inc gt 1.5 pov. level       0.780       0.089
ZCTA pr:Income 35 to 50K            0.126       0.021
ZCTA pr:Indus:Arts/ent.             9.926       3.490
ZCTA pr:Indus:Profsci.              11.099      4.197
ZCTA pr:Indus:Publ. adm.            13.902      5.188
ZCTA pr:Indus:Transp.               4.392       1.848
ZCTA pr:Indus:Wholes.               2.663       1.245
ZCTA pr:Internet access             86.081      5.328
ZCTA pr:Live in rural area          0.155       0.238
Observations                        1630

B.4.1.2 Probit selection model. LASSO methods are used to identify a set of regressors with good out-of-sample predictive ability. We then use the explanatory variables (levels and interactions) retained by our LASSO models, summarized in Table D1, in a probit model to estimate response propensities.

B.4.2 Selection Correction Strategy. Our sample-selection correction method is an ad hoc two-step model that does not attempt to account for any additional unobserved heterogeneity that may drive non-random selection into the respondent sample. We use an ordinary binary probit specification, with our full sample of 1,630 invited participants, to explain whether each invited participant submits a completed survey. The fitted probit "index" (the linear combination of estimated probit parameters and explanatory variables) can be interpreted as a latent respondent characteristic: the "propensity to complete this survey." We calculate the fitted completion propensity for every invited participant, calculate the mean fitted response propensity across all of these invitees, and then subtract this mean from each fitted individual propensity. The resulting "de-meaned response propensity," dmR̂P_i, will therefore average zero across the full set of 1,630 invited participants.
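A minimal sketch of this probit-and-de-meaning step follows; the formula and object names are illustrative stand-ins (the actual retained regressors are those in Table D2).

```r
## Probit selection model on the LASSO-retained regressors, then the
## de-meaned response propensity for all 1,630 invitees. Names are illustrative.
probit_fit <- glm(complete ~ not_male:race_black + started_fri + zcta_pr_internet,
                  family = binomial(link = "probit"), data = sel_data)

rp_hat <- predict(probit_fit, type = "link")  # fitted probit index = completion propensity
dm_rp  <- rp_hat - mean(rp_hat)               # de-mean over the full pool of invitees

mean(dm_rp)                          # zero by construction across all invitees
mean(dm_rp[sel_data$complete == 1])  # positive among completers (about 0.073 in Figure D1)
```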
Suppose the set of respondents who complete the survey were just as likely to provide a complete and valid response as the overall pool of invited participants. Then we would expect the mean response propensity among those who complete the survey to also be zero. However, in this case, the average person contributing a completed survey has a greater response propensity than the average in the pool of invited participants.

It is possible that people who are more likely to respond to a survey about cap-and-trade programs for carbon emissions are simultaneously likely to be willing to pay more for these programs. If this is the case, our models will tend to overestimate average WTP in the general population. If we assume that preferences across cap-and-trade programs can differ systematically according to response propensities, we can simply allow the fitted response propensity to shift each marginal utility parameter in the program choice model.26 We can then simulate what the preference parameters would have been had every person among the invited participants been equally likely to complete a survey, with a response propensity equal to the average response propensity among all invited participants. Specifically, we simulate the preference parameters that would obtain if the de-meaned response propensity for each person in the sample of completed responses had been zero.

Unlike a conventional Heckman two-step correction method, this ad hoc procedure relies upon a wide variety of observable characteristics of the set of invited participants (or their neighborhoods) to explain response propensities. Also unlike the Heckman correction method, there is no assumption of a truncated bivariate normal joint distribution for the errors in the response/non-response model and the outcome model.

The variables selected by LASSO are thus used to estimate the probit model shown in Table D2, and the fitted model is used to calculate a response propensity for each person in the sample of eligible respondents (recall that those who cleared the screening process to the point of encountering the subject of the survey are eligible). We subtract from each individual's response propensity our estimate of the mean response propensity in the eligible group. In Figure D1, we plot the distribution of these response propensities for the entire set of invited participants and for the subset of invitees who completed the survey with responses that passed our basic quality assessment. In the kernel density for each group, we include vertical lines at zero for the "eligible" group and at the mean (0.073) of the de-meaned response propensity when the sample is limited to just the respondent group.

Figure D1. Demeaned response propensities

26 As in two-stage least squares estimation using least squares methods, it is important that there be at least some "instruments" for response propensity that do not also enter directly into the utility function being estimated at the core of the analysis.
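The following is a minimal sketch of how such a propensity-shifted choice model can be set up in a simple binary-response form; the study's actual program-choice model is richer (e.g., allowing for preference classes), and the variable names here are illustrative.

```r
## Each program attribute enters both directly and interacted with the de-meaned
## response propensity dm_rp. Evaluating at dm_rp = 0 simulates preferences for a
## sample with the average propensity of all invitees, so the main-effect
## coefficients serve as the selection-corrected marginal utilities.
vote_fit <- glm(vote_yes ~ (cost + pct_reduction + green_jobs_gained) * dm_rp,
                family = binomial(link = "probit"), data = choice_data)

coef(vote_fit)[c("cost", "pct_reduction", "green_jobs_gained")]  # corrected at dm_rp = 0
```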
Table D2. Binary probit parameter estimates for selection model
Regressors selected by LASSO methods from a much larger inventory of potential explanatory variables at the individual, ZCTA, and county level; lambda = min

Outcome: 1=Got to end of survey                                          Estimate
(1=Gender:Not male) × (1=Race:Black)                                     -0.502∗∗ (0.211)
(1=Gender:Not male) × (1=Started survey on Fri)                          -0.354∗∗ (0.149)
(1=Own age:18 to 24) × (1=Gender:Male)                                   -0.318∗∗ (0.154)
(1=Own age:18 to 24) × (ZCTA pr:Asian)                                   -0.0380∗∗∗ (0.0131)
(1=Own age:25 to 34) × (1=Gender:Not male)                               -0.0280 (0.115)
(1=Own age:25 to 34) × (1=Started survey on Sun)                         -0.133 (0.181)
(1=Own age:25 to 34) × (1=Started survey on Thu)                         -0.189 (0.213)
(1=Own age:25 to 34) × (ZCTA pr:Black)                                   -0.0135 (0.0252)
(1=Own age:35 to 44) × (1=Started survey on Fri)                         0.0902 (0.229)
(1=Own age:55 to 64) × (1=Started survey on Thu)                         0.113 (0.306)
(1=Own age:55 to 64) × (1=Started survey on Wed)                         -1.010∗∗∗ (0.361)
(1=Own age:55 to 64) × (Dist. nearest wildfire 2020)                     -0.524 (0.388)
(1=Own age:55 to 64) × (ZCTA pr:Other race)                              0.0440 (0.0361)
(1=Own age:65 and up) × (1=Started survey on Wed)                        -0.631∗∗∗ (0.242)
(1=Own age:75 and up) × (Sq. miles; nearest wildfire 2020)               -0.00164 (0.00110)
(1=Own hhld inc:100-125K) × (1=Own age:25 to 34)                         -0.342∗ (0.207)
(1=Own hhld inc:150-175K) × (1=Started survey on Fri)                    0.0138 (0.449)
(1=Own hhld inc:150-175K) × (1=Started survey on Mon)                    0.422 (0.747)
(1=Own hhld inc:150-175K) × (1=Started survey on Sat)                    -0.596 (0.909)
(1=Own hhld inc:175-200K) × (1=Started survey on Wed)                    -0.286 (0.780)
(1=Own hhld inc:220K up) × (1=Own age:65 and up)                         -0.204 (0.361)
(1=Own hhld inc:30-50K) × (1=Own age:35 to 44)                           -0.0979 (0.246)
(1=Own hhld inc:30-50K) × (1=Started survey on Sat)                      0.0588 (0.211)
(1=Own hhld inc:30-50K) × (1=Started survey on Thu)                      0.0933 (0.289)
(1=Own hhld inc:30-50K) × (ZCTA pr:Indus:Profsci.)                       -0.0118 (0.0103)
(1=Own hhld inc:50-75K) × (1=Started survey on Wed)                      0.443 (0.280)
(1=Own hhld inc:75-100K) × (1=Own age:45 to 54)                          0.149 (0.240)
(1=Own hhld inc:75-100K) × (ZCTA pr:Below poverty line)                  -1.129∗ (0.603)
(1=Own hhld inc:lt 20K) × (1=Started survey on Wed)                      -0.412∗ (0.214)
(1=Race:White) × (1=Used a mobile device)                                -0.0247 (0.0868)
(1=Started survey on Mon) × (ZCTA pr:Indus:Arts/ent.)                    -0.00482 (0.0154)
(1=Started survey on Mon) × (ZCTA pr:Indus:Publ. adm.)                   -0.00749 (0.0110)
(County pr: Internet w/o subsc.) × (CDC SVI cnty-Hsg type, transp.)      -3.052 (4.030)
(Dist. nearest wildfire 2010-19) × (County pr: No internet)              0.0000111 (0.0000265)
(Dist. nearest wildfire 2010-19) × (County pr: Satellite internet)       0.0000324 (0.0000335)
(Dist. nearest wildfire 2020) × (County pr: Dial-up internet)            51.65 (74.24)
(ZCTA pr:Black) × (1=Started survey on Fri)                              0.0207 (0.0351)
(ZCTA pr:Black) × (ZCTA pr:Live in rural area)                           0.683∗∗∗ (0.238)
(ZCTA pr:Inc gt 1.5 pov. level) × (ZCTA pr:Income 35 to 50K)             1.861 (4.859)
(ZCTA pr:Income 35 to 50K) × (ZCTA pr:Internet access)                   -0.0205 (0.0514)
(ZCTA pr:Income 35 to 50K) × (ZCTA pr:Two or more races)                 0.0268 (0.159)
(ZCTA pr:Indus:Transp.) × (County pr: Other internet)                    0.708 (0.891)
(ZCTA pr:Indus:Wholes.) × (1=Used a mobile device)                       -0.106∗∗∗ (0.0268)
Constant                                                                 0.563∗∗ (0.221)
Max. log-likelihood                                                      -1002.23
No. respondents                                                          1630
Note: Standard errors in parentheses; ∗ p < 0.05, ∗∗ p < 0.01, ∗∗∗ p < 0.001

B.5 Appendix: Choice Experiment Randomizations

Unique Choice Tasks
Each choice task table (e.g., Program A) is populated with values randomly generated according to a set of parameters. There is no correlation in values between any of the individual surveys or across any of the six choice tasks within each survey. With 1,000 respondents and six choice tasks per respondent, there are a total of 6,000 independently generated choice task tables.

Value Generation Parameters

We populate the choice task tables (hypothetical carbon cap-and-trade programs) with values from a structured, randomized data generation process. This process emphasizes both realistic hypothetical carbon cap-and-trade programs and a distribution of values with enough granularity and orthogonality to allow for precise estimation. All 6,000 programs in the survey (6 programs per respondent times 1,000 respondents) are independently generated. There are nine program features, and each of these nine features is included in every program. Individual features are determined according to their own specific process. Some of the processes are interdependent, but a degree of random noise is present in each process to avoid too much collinearity across variables. The only attribute values that are specific to Oregon in the choice tasks are the carbon jobs lost and green jobs gained, and these values can be fairly easily modified for other geographic locations. The processes can be seen below.

1. Carbon Reduction Values: Carbon reduction values are independently drawn with replacement from a uniform distribution over {10, 20, 30, 40, 50, 60, 70, 80}. We believe that this distribution gives us enough granularity for estimation while also allowing respondents to easily understand the amount of carbon reduction a program will accomplish. These values are presented as percent reductions achieved by 2050 relative to current emission levels.

2. Jobs Values: Carbon Jobs Lost and Green Jobs Gained are independently drawn, with replacement, from the following distribution:

Value       ∈ {0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.15, 0.25, 0.30}
Probability ∈ {0.02, 0.10, 0.11, 0.13, 0.20, 0.15, 0.11, 0.09, 0.05, 0.03, 0.01}

where Value indicates the value drawn from the distribution and Probability indicates the probability of drawing the corresponding value. For example, there is a 13% chance of drawing 0.04. These values are generated to reflect the percent of jobs lost (gained) as the result of a cap-and-trade program. These values are then uploaded into the survey. However, the values that populate the choice task tables that respondents see are modified: they are the product of these randomly generated values and the current level of jobs in the county in which the respondent resides. For instance, if for respondent i Program A's green-jobs-gained draw took on the value 0.06, and respondent i indicated they lived in Multnomah County (327,000 current green jobs), then the value respondent i would see in the Program A table for green jobs gained would be 0.06 × 327,000 = 19,620 jobs. If for respondent i Program B's green-jobs-gained draw took on the value 0.01, then the value respondent i would see in the Program B table for green jobs gained would be 0.01 × 327,000 = 3,270 jobs. Figure E1 (Distribution of job loss randomizations) shows the distribution of generated values used in the survey, with N = 10,000. (A code sketch of these first two draws appears below.)
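The sketch below reproduces the first two draws in R; the exact tooling used to generate and upload the values is not documented in this appendix, so treat this as an illustration.

```r
## Minimal sketch of the attribute draws for items 1 and 2 above.
set.seed(1)   # illustrative seed; the study's seed/tooling is not specified
n <- 6000     # 1,000 respondents x 6 choice tasks

# 1. Carbon reduction: uniform over {10, 20, ..., 80} percent (by 2050)
carbon_reduction <- sample(seq(10, 80, by = 10), n, replace = TRUE)

# 2. Jobs shares: common discrete distribution for carbon jobs lost and green jobs gained
vals  <- c(0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.15, 0.25, 0.30)
probs <- c(0.02, 0.10, 0.11, 0.13, 0.20, 0.15, 0.11, 0.09, 0.05, 0.03, 0.01)
carbon_jobs_lost_share  <- sample(vals, n, replace = TRUE, prob = probs)
green_jobs_gained_share <- sample(vals, n, replace = TRUE, prob = probs)

# Displayed values scale each share by the respondent's county job count,
# e.g., a 0.06 draw for a Multnomah County respondent (327,000 green jobs):
0.06 * 327000  # 19,620 green jobs gained shown in the program table
```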
3. Share of Permits Auctioned Values: The share of permits auctioned is independently drawn with replacement from a uniform distribution over {10, 20, 30, 40, 50, 60, 70, 80}. These values are presented as the percent of total permits in the cap-and-trade program that would be auctioned, with the remainder being given away to firms at no cost. In some cap-and-trade programs, the percent of permits auctioned increases over time. However, including this dynamic process in the survey would likely overwhelm respondents, so we simplified the process by stipulating that the percent of permits auctioned is static throughout the existence of the program. The lower bound of 10% was chosen so that the auction revenue uses are always relevant. The upper bound of 80% was chosen to reflect the reality of how cap-and-trade programs have played out so far. For instance, there is precedent in the Small Business Regulatory Enforcement Fairness Act (SBREFA) that some of the permits must be allocated free of charge, so as not to cause excessive financial burden.

4. Permit Auction Revenue Uses: There are three potential uses for auction revenues: purchasing or subsidizing cleaner equipment for households and firms, supporting workers and communities that are disproportionately burdened by program costs, and reducing Oregon taxes.

(a) Clean equipment: values are drawn independently from a uniform distribution: equipment_i ∈ {0, 10, 20, 30, 40, 50, 60, 70}.
(b) Support for workers and communities: values are drawn independently from a uniform distribution: support_i ∈ {0, 10, 20, 30, 40, 50, 60, 70}.
(c) Tax reduction: the values for tax reduction are determined by the two preceding revenue-use values. The tax value for instance i is calculated as 100 − equipment_i − support_i. If this value is negative for a generated program, that program is filtered out of the randomizations pool.

This process guarantees that the three auction revenue uses sum to 100, as well as non-negative values for the tax feature. Because the equipment values and support values both have a mean of 35, the tax values have a mean of 30. We are willing to accept this slight imbalance to guarantee that respondents see clean, round numbers in the program tables.

5. Additional Regulations on Other Pollutants Values: Additional regulation on other pollutants can take on two values: "YES" and "NO." These values are drawn independently, with a 50% chance of either value being realized. The values are completely orthogonal to all other values in a program.

6. Cost Values: Cost values are generated according to the following formula:

cost_i = K × [55 + β_i · benefit_i + α_i · auctioned_i − γ1_i · equipment_i − γ2_i · support_i − γ3_i · taxes_i + H_i]

β ∈ {0.25, 0.5, 1, 2, 3}
α ∈ {0, 1, 2}
γ1 ∈ {0, 0.25, 0.5, 0.75}
γ2 ∈ {0, 0.25, 0.5, 0.85}
γ3 ∈ {0, 0.25, 0.5, 0.85}
H ∈ {−50, 150, 20}
K = 1.08

Values for the cost of the program are expressed in $ per month. As indicated by the subscripts, cost_i is generated from the other relevant values for program i, and these values are scaled by values randomly generated for each program i; in other words, all 11 variables are regenerated for each program randomization. Logically, the higher the target emission reduction (benefit), the higher the cost of the program, so β takes on a positive value. α also takes on a positive value, under the rationale that if permits are auctioned, it will increase production costs for firms.
This will raise consumer prices as well as lead to more job loss in carbon-intensive industries. All of the permit auction revenue-use variables (buying new equipment, support for communities and workers, and reducing taxes) would likely lead, directly or indirectly, to lower costs for individuals. The term H enters the equation as a random shock to cost, as a way to increase orthogonality across programs. K was used to scale the cost variable up so that respondents voted affirmatively on programs approximately half of the time. We set a price floor of $20 for every program, so any program for which the data generation process produced a cost below $20 was filtered from the randomizations pool. Figure E2 (Distribution of program cost randomizations) shows the distribution of generated values used in the survey, with N = 10,000.

B.6 Appendix: Descriptive Statistics for Some Basic Variable Relationships

Given the randomizations of the attributes of the cap-and-trade programs offered to different respondents, it is informative to look at the joint distributions among a variety of respondent characteristics, and at some key relationships between these characteristics and the way respondents interact with the survey.

B.6.1 Relationships among respondent characteristics and attitudes.
Figure F1. Descriptive Statistics: Bias by ideology

B.6.2 Share of "YES" votes for program by category of respondent.
Figure F2. Descriptive Statistics: Votes by age
Figure F3. Descriptive Statistics: Votes by income
Figure F4. Descriptive Statistics: Votes by ideology
Figure F5. Descriptive Statistics: Votes by education
Figure F6. Descriptive Statistics: Votes by employment
Figure F7. Descriptive Statistics: Votes by climate 1
Figure F8. Descriptive Statistics: Votes by climate 2
Figure F9. Descriptive Statistics: Votes by ancestors
Figure F10. Descriptive Statistics: Votes by cost decile
Figure F11. Descriptive Statistics: Votes by benefits

B.6.3 Share of "YES" votes by aspect of choice task. Recall that each respondent is asked to consider six different carbon cap-and-trade programs, Program A through Program F. All choice experiments that offer respondents more than a single choice must contend with what happens as the respondent works through a sequence of choices. The attributes of Program A, unique to each individual, are incorporated into that individual's tutorial material as we explain the different groups of program features. Despite this, respondents may require a "burn-in" period as they get their bearings with these choice tasks (and potentially develop some personal choice heuristics). As the respondent works through the six choice tasks, they may also begin to experience fatigue, which may deplete their cognitive resources. Sometimes, respondents may rally as they realize they have reached the final choice task. In this section, all of the figures show the progression in the specified variable as the individual works through Programs A through F. All of the attributes are randomized, as explained in Appendix B.5. Thus it is the respondent's changing engagement as they work through the six choices that leads to changes in voting patterns, not any systematic difference in the attributes of the cap-and-trade programs they are asked to consider. Attitudes toward climate change policy are known to differ substantially along partisan lines, so we also break out these voting patterns across choice tasks by party affiliation (a sketch of this tabulation follows).
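A minimal sketch of the tabulation behind the next two figures, with illustrative column names:

```r
## Share of "YES" votes by choice-task position (A through F), overall and by party.
library(dplyr)

choice_data %>%
  group_by(task_letter) %>%                       # task_letter in "A".."F"
  summarise(share_yes = mean(vote_yes), .groups = "drop")

choice_data %>%
  group_by(party, task_letter) %>%
  summarise(share_yes = mean(vote_yes), .groups = "drop")
```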
Figure F12. Descriptive Statistics: Votes by task
Figure F13. Descriptive Statistics: Party votes by task
Figure F14. Descriptive Statistics: Time on task
Figure F15. Descriptive Statistics: Touches per task
Figure F16. Descriptive Statistics: Clicks per task
Figure F17. Descriptive Statistics: Cost per task

B.6.4 Identical distributions of program attributes across tasks?
Figure F18. Descriptive Statistics: Benefits per task
Figure F19. Descriptive Statistics: Carbon jobs per task
Figure F20. Descriptive Statistics: Green jobs per task
Figure F21. Descriptive Statistics: Auction per task
Figure F22. Descriptive Statistics: Equipment per task
Figure F23. Descriptive Statistics: Workers per task
Figure F24. Descriptive Statistics: Relief per task
Figure F25. Descriptive Statistics: Regulations per task

B.6.5 Votes as a function of non-mutually exclusive categories.
Figure F26. Descriptive Statistics: Votes by sector
Figure F27. Descriptive Statistics: Votes by responsibility
Figure F28. Descriptive Statistics: Votes by descendants
Figure F29. Descriptive Statistics: Votes by transportation

APPENDIX C

CHAPTER 4 APPENDIX

C.1 Online Appendix: Sensitivity Analysis

Figure A1. Synthetic control difference-in-differences for Oregon background checks (raw count). The background check data are the non-private gun sales subset of the FBI data. Events that led to changes in gun purchasing behavior (e.g., Obama's election in 2008) are indicated.
Retrieved 2023-04-24, from https://linkinghub.elsevier.com/retrieve/pii/S074937972100489X doi: 10.1016/j.amepre.2021.08.013 Hernandez-Cortes, D., & Meng, K. (2020). Do Environmental Markets Cause Environmental Injustice? Evidence from California’s Carbon Market. National Bureau of Economic Research(May), 1–32. Retrieved from http://www.nber.org/papers/w27205 Hole, A. R. (2007). A comparison of approaches to estimating confidence intervals for willingness to pay measures [Journal Article]. Health Economics, 16(8), 827-840. Retrieved from https://onlinelibrary.wiley.com/doi/abs/10.1002/hec.1197 doi: https://doi.org/10.1002/hec.1197 Holian, M. J., & Kahn, M. E. (2015, jun). Household demand for low carbon policies: Evidence from california. Journal of the Association of Environmental and Resource Economists, 2(2), 205–234. doi: 10.1086/680663 Ifatunji, M. A., & Harnois, C. E. (2016, July). An Explanation for the Gender Gap in Perceptions of Discrimination among African Americans: Considering the Role of Gender Bias in Measurement. Sociology of Race and Ethnicity, 2(3), 263–288. (Publisher: SAGE Publications Inc.) doi: 10.1177/2332649215613532 Intergovernmental Panel on Climate Change (IPCC). (2018). Global warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change,. Retrieved from www.ipcc.ch 293 Intergovernmental Panel on Climate Change (IPCC). (2021). Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Retrieved from www.ipcc.ch Irvin, N., Rhodes, K., Cheney, R., & Wiebe, D. (2014, August). Evaluating the Effect of State Regulation of Federally Licensed Firearm Dealers on Firearm Homicide. American Journal of Public Health, 104(8), 1384–1386. Retrieved 2023-04-13, from https://ajph.aphapublications.org/doi/full/10.2105/ AJPH.2014.301999 doi: 10.2105/AJPH.2014.301999 Iwama, J., & McDevitt, J. (2021, February). Rising Gun Sales in the Wake of Mass Shootings and Gun Legislation. Journal of Primary Prevention, 42(1), 27–42. Retrieved 2023-04-13, from https://www.proquest.com/docview/ 2490900327/citation/77965D268DB40C2PQ/1 (Num Pages: 27-42 Place: New York, Netherlands Publisher: Springer Nature B.V.) doi: 10.1007/s10935-021-00622-7 Jorgenson, D. W., Goettle, R., Ho, M. S., Slesnick, D. T., & Wilcoxen, P. J. (2013). The distributional impact of climate policy. In Distributional aspects of energy and climate policies (pp. 238–265). Retrieved from http://www.epa.gov/ climatechange/economics/modeling.html{#}intertemporal doi: 10.4337/9781783470273.00018 Joslyn, M. R., Haider-Markel, D. P., Baggs, M., & Bilbo, A. (2017). Emerging Political Identities? Gun Ownership and Voting in Presidential Elections*. Social Science Quarterly, 98(2), 382–396. Retrieved 2023-04-13, from https://onlinelibrary.wiley.com/doi/abs/10.1111/ssqu.12421 (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/ssqu.12421) doi: 10.1111/ssqu.12421 Kagawa, R., Charbonneau, A., McCort, C., McCourt, A., Vernick, J., Webster, D., & Wintemute, G. (2023, April). Effects of Comprehensive Background-Check Policies on Firearm Fatalities in 4 States. American Journal of Epidemiology, 192(4), 539–548. Retrieved 2023-04-13, from https://academic.oup.com/aje/article/192/4/539/6969413 doi: 10.1093/aje/kwac222 Kaplan, J. (2021). 
Jacob Kaplan’s Concatenated Files: Uniform Crime Reporting Program Data: Law Enforcement Officers Killed and Assaulted (LEOKA) 1960-2020. Kaswan, A. (2008). Environmental Justice and Domestic Climate Change Policy. Environmental Law Reporter, 5(February), 10287–10315. Retrieved from http://www.ipcc.ch/pdf/assessment-report/ar4/syr/ar4{_}syr{_} 294 Kim, J. J., & Wilbur, K. C. (2022, September). Proxies for legal firearm prevalence. Quantitative Marketing and Economics, 20(3), 239–273. Retrieved 2023-02-06, from https://link.springer.com/10.1007/s11129-022-09251-8 doi: 10.1007/s11129-022-09251-8 Klarevas, L., Conner, A., & Hemenway, D. (2019, December). The Effect of Large-Capacity Magazine Bans on High-Fatality Mass Shootings, 1990–2017. American Journal of Public Health, 109(12), 1754–1761. Retrieved 2023-04-13, from https://ajph.aphapublications.org/doi/full/10.2105/ AJPH.2019.305311 doi: 10.2105/AJPH.2019.305311 Kleck, G. (2004, February). Measures of Gun Ownership Levels for Macro-Level Crime and Violence Research. Journal of Research in Crime and Delinquency, 41(1), 3–36. Retrieved 2023-04-13, from http://journals.sagepub.com/doi/10.1177/0022427803256229 doi: 10.1177/0022427803256229 Klenert, D., Mattauch, L., Combet, E., Edenhofer, O., Hepburn, C., Rafaty, R., & Stern, N. (2018, aug). Making carbon pricing work for citizens. Nature Climate Change, 8(8), 669–677. doi: 10.1038/s41558-018-0201-2 Knight, B. (2013, November). State Gun Policy and Cross-State Externalities: Evidence from Crime Gun Tracing. American Economic Journal: Economic Policy, 5(4), 200–229. Retrieved 2023-04-11, from http://libproxy.uoregon.edu/ login?url=https://search.ebscohost.com/login.aspx?direct= true&db=eoh&AN=1397269&site=ehost-live&scope=site Knopov, A., Siegel, M., Xuan, Z., Rothman, E. F., Cronin, S. W., & Hemenway, D. (2019, October). The Impact of State Firearm Laws on Homicide Rates among Black and White Populations in the United States, 1991–2016. Health & Social Work, 44(4), 232–240. Retrieved 2023-04-11, from https://academic.oup.com/hsw/article/44/4/232/5610107 doi: 10.1093/hsw/hlz024 Knox, D., Lowe, W., & Mummolo, J. (2020). Administrative Records Mask Racially Biased Policing. American Political Science Review, 114(3), 619–637. doi: 10.1017/S0003055420000039 Kolstad, C. D. (2014, jan). Who pays for climate regulation. SIEPR Policy Brief (January), 1–8. Retrieved from http://siepr.stanford.edu Kotchen, M. J., Boyle, K. J., & Leiserowitz, A. A. (2013). Willingness-to-pay and policy-instrument choice for climate-change policy in the United States. Energy Policy, 55, 617–625. Retrieved from http://dx.doi.org/10.1016/j.enpol.2012.12.058 doi: 10.1016/j.enpol.2012.12.058 295 Kotchen, M. J., Turk, Z. M., & Leiserowitz, A. A. (2017, sep). Public willingness to pay for a US carbon tax and preferences for spending the revenue. Environmental Research Letters, 12(9), 94012. Retrieved from https://doi.org/10.1088/1748-9326/aa822a doi: 10.1088/1748-9326/aa822a Kravitz-Wirtz, N., Pallin, R., Kagawa, R. M., Miller, M., Azrael, D., & Wintemute, G. J. (2021, April). Firearm purchases without background checks in California. Preventive Medicine, 145, 106414. Retrieved 2023-04-20, from https://linkinghub.elsevier.com/retrieve/pii/S009174352030445X doi: 10.1016/j.ypmed.2020.106414 Lang, M. (2013). Firearm background checks and suicide. The Economic Journal, 123(573), 1085–1099. Lee, C. Y., & Heo, H. (2016, jul). Estimating willingness to pay for renewable energy in South Korea using the contingent valuation method. 
Energy Policy, 94, 150–156. doi: 10.1016/j.enpol.2016.03.051 Lee, J. J., & Cameron, T. A. (2008, feb). Popular support for climate change mitigation: Evidence from a general population mail survey. Environmental and Resource Economics, 41(2), 223–248. Retrieved from https://link.springer.com/article/10.1007/s10640-007-9189-1 doi: 10.1007/s10640-007-9189-1 Lesko, M., Silverman, S., & Troup, C. (2021, April 23). Police Departments by State and County. Retrieved from https://www.openpolice.org/list-all-departments Levi, S., Flachsland, C., & Jakob, M. (2020). Political economy determinants of carbon pricing. Global Environmental Politics, 20(2), 128–156. doi: 10.1162/glep_a_00549 Levine, P. B., & McKnight, R. (2017). Firearms and accidental deaths: Evidence from the aftermath of the sandy hook school shooting. Science, 358(6368), 1324–1328. Li, W., Long, R., Chen, H., Yang, M., Chen, F., Zheng, X., & Li, C. (2019, oct). Would personal carbon trading enhance individual adopting intention of battery electric vehicles more effectively than a carbon tax? Resources, Conservation and Recycling, 149, 638–645. doi: 10.1016/j.resconrec.2019.06.035 Liu, G., & Wiebe, D. J. (2019, April). A Time-Series Analysis of Firearm Purchasing After Mass Shooting Events in the United States. JAMA Network Open, 2(4), e191736. Retrieved 2023-04-12, from http://jamanetworkopen.jamanetwork.com/article.aspx?doi=10.1001/ jamanetworkopen.2019.1736 doi: 10.1001/jamanetworkopen.2019.1736 296 Liu, M., Tan, R., & Zhang, B. (2021, feb). The costs of “blue sky”: Environmental regulation, technology upgrading, and labor demand in China. Journal of Development Economics, 150, 102610. doi: 10.1016/j.jdeveco.2020.102610 Look, W., Raimi, D., Robertson, M., Higdon, J., & Propp, D. (2021). Enabling Fairness for Energy Workers and Communities in Transition Enabling Fairness for Energy Workers and Communities in Transition A Review of Federal Policy Options and Principles for a Just Transition in the United States (Tech. Rep.). Retrieved from www.rff.org/fairness-for-workers-and-communities Luca, M., Malhotra, D., & Poliquin, C. (2017, November). Handgun waiting periods reduce gun deaths. Proceedings of the National Academy of Sciences of the United States of America, 114(46), 12162–12165. Retrieved 2023-04-24, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5699026/ doi: 10.1073/pnas.1619896114 Luh, E. (2019). Not So Black and White: Uncovering Racial Bias from Systematically Masked Police Reports. SSRN Electronic Journal. Retrieved 2022-07-25, from https://cdn1.sph.harvard.edu/wp-content/uploads/sites/94/2018/ 01/NPR-RWJF-HSPH- doi: 10.2139/ssrn.3357063 Mackenzie, I. A., Hanley, N., & Kornienko, T. (2008, mar). The optimal initial allocation of pollution permits: A relative performance approach. Environmental and Resource Economics, 39(3), 265–282. doi: 10.1007/s10640-007-9125-4 Makowsky, M. D., Stratmann, T., & Tabarrok, A. (2019, January). To serve and collect: The fiscal and racial determinants of law enforcement. Journal of Legal Studies, 48(1), 189–216. (Publisher: University of Chicago Press) doi: 10.1086/700589 Mathur, A., & Morris, A. C. (2014, mar). Distributional effects of a carbon tax in broader U.S. fiscal reform. Energy Policy, 66, 326–334. doi: 10.1016/j.enpol.2013.11.047 McCourt, A. D., Crifasi, C. K., Stuart, E. A., Vernick, J. S., Kagawa, R. M., Wintemute, G. J., & Webster, D. W. (2020, October). Purchaser Licensing, Point-of-Sale Background Check Laws, and Firearm Homicide and Suicide in 4 US States, 1985–2017. 
American Journal of Public Health, 110(10), 1546–1552. Retrieved 2023-04-11, from https://ajph.aphapublications.org/doi/full/ 10.2105/AJPH.2020.305822 doi: 10.2105/AJPH.2020.305822 Mello, S. (2019, April). More COPS, less crime. Journal of Public Economics, 172, 174–200. (Publisher: Elsevier B.V.) doi: 10.1016/j.jpubeco.2018.12.003 297 Metcalf, G. E. (2009, oct). Designing a carbon tax to reduce U.S. greenhouse gas emissions. In Review of environmental economics and policy (Vol. 3, pp. 63–83). Cambridge, MA. Retrieved from http://www.nber.org/papers/w14375.pdf doi: 10.1093/reep/ren015 Miller, M., Hepburn, L., & Azrael, D. (2017, February). Firearm Acquisition Without Background Checks: Results of a National Survey. Annals of Internal Medicine, 166(4), 233. Retrieved 2023-04-22, from http://annals.org/article.aspx?doi=10.7326/M16-1590 doi: 10.7326/M16-1590 Mohai, P., Pellow, D., & Roberts, J. T. (2009, oct). Environmental justice. Annual Review of Environment and Resources, 34, 405–430. Retrieved from www.annualreviews.org doi: 10.1146/annurev-environ-082508-094348 Morgenstern, R. D., Pizer, W. A., & Shih, J. S. (2002). Jobs versus the environment: An industry-level perspective. Journal of Environmental Economics and Management, 43(3), 412–436. doi: 10.1006/jeem.2001.1191 Moz-Christofoletti, M. A., & Pereda, P. C. (2021, may). Winners and losers: the distributional impacts of a carbon tax in Brazil. Ecological Economics, 183. doi: 10.1016/j.ecolecon.2021.106945 Murray, B. C., Newell, R. G., & Pizer, W. A. (2009). Balancing cost and emissions certainty: An allowance reserve for cap-and-trade. In Review of environmental economics and policy (Vol. 3, pp. 84–103). doi: 10.1093/reep/ren016 Newell, R. G., Pizer, W. A., & Raimi, D. (2014). Carbon Markets: Past, Present, and Future (Vol. 6; Tech. Rep.). Retrieved from https://about.jstor.org/terms Nix, J., Campbell, B. A., Byers, E. H., & Alpert, G. P. (2017). A Bird’s Eye View of Civilians Killed by Police in 2015: Further Evidence of Implicit Bias. Criminology and Public Policy, 16(1), 309–340. doi: 10.1111/1745-9133.12269 Oberfield, Z. W., & Incantalupo, M. B. (2021, November). Racial Discrimination and Street-Level Managers: Performance, Publicness, and Group Bias. Public Administration Review, 81(6), 1055–1070. (Publisher: John Wiley and Sons Inc) doi: 10.1111/puar.13376 Ohlendorf, N., Jakob, M., Minx, J. C., Schröder, C., & Steckel, J. C. (2021, jan). Distributional Impacts of Carbon Pricing: A Meta-Analysis. Environmental and Resource Economics, 78(1), 1–42. Retrieved from https://doi.org/10.1007/s10640-020-00521-1 doi: 10.1007/s10640-020-00521-1 298 Parry, I. W., & Williams, R. C. (2013). What are the costs of meeting distributional objectives for climate policy? In Distributional aspects of energy and climate policies (pp. 149–183). Retrieved from http://www.nber.org/papers/w16486 doi: 10.4337/9781783470273.00014 Pashardes, P., Pashourtidou, N., & Zachariadis, T. (2014, mar). Estimating welfare aspects of changes in energy prices from preference heterogeneity. Energy Economics, 42, 58–66. doi: 10.1016/j.eneco.2013.12.002 Peterson, T. D., & Rose, A. Z. (2006, mar). Reducing conflicts between climate policy and energy policy in the US: The important role of the states. Energy Policy, 34(5), 619–631. doi: 10.1016/j.enpol.2005.11.014 Pierson, E., Simoiu, C., Overgoor, J., Corbett-Davies, S., Jenson, D., Shoemaker, A., . . . Goel, S. (2020, May). A large-scale analysis of racial disparities in police stops across the United States. 
Nature Human Behaviour, 4(7), 736–745. Retrieved 2022-02-02, from https://www.nature.com/articles/s41562-020-0858-1 (arXiv: 1706.05678 Publisher: Nature Publishing Group) doi: 10.1038/s41562-020-0858-1 Pindyck, R. S. (2019, mar). The social cost of carbon revisited. Journal of Environmental Economics and Management, 94, 140–160. doi: 10.1016/j.jeem.2019.02.003 Pizer, W., Sanchirico, J. N., & Batz, M. (2010, feb). Regional patterns of U.S. household carbon emissions. Climatic Change, 99(1), 47–63. Retrieved from www.eia.doe.gov doi: 10.1007/s10584-009-9637-8 Pizer, W. A., & Sexton, S. (2019). The Distributional Impacts of Energy Taxes. Review of Environmental Economics and Policy, 13(1), 104–123. doi: 10.1093/reep/rey021 Raissian, K. M. (2016). Hold Your Fire: Did the 1996 Federal Gun Control Act Expansion Reduce Domestic Homicides? Journal of Policy Analysis and Management, 35(1), 67–93. Retrieved 2023-02-21, from https://onlinelibrary.wiley.com/doi/abs/10.1002/pam.21857 (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/pam.21857) doi: 10.1002/pam.21857 Rausch, S., Metcalf, G. E., & Reilly, J. M. (2011, dec). Distributional impacts of carbon pricing: A general equilibrium approach with micro-data for households. Energy Economics, 33(SUPPL. 1). doi: 10.1016/j.eneco.2011.07.023 Raux, C., Croissant, Y., & Pons, D. (2015, mar). Would personal carbon trading reduce travel emissions more effectively than a carbon tax? Transportation Research Part D: Transport and Environment, 35, 72–83. doi: 10.1016/j.trd.2014.11.008 299 Raymond, L. (2019, nov). Policy perspective:Building political support for carbon pricing—Lessons from cap-and-trade policies. Energy Policy, 134. doi: 10.1016/j.enpol.2019.110986 Reed Walker, W. (2013). The transitional costs of sectoral reallocation: Evidence from the clean air act and the workforce. Quarterly Journal of Economics, 128(4), 1787–1835. doi: 10.1093/qje/qjt022 Reeping, P. M., Klarevas, L., Rajan, S., Rowhani-Rahbar, A., Heinze, J., Zeoli, A. M., . . . Branas, C. C. (2022, April). State Firearm Laws, Gun Ownership, and K-12 School Shootings: Implications for School Safety. Journal of School Violence, 21(2), 132–146. Retrieved 2023-04-11, from https://www.tandfonline.com/ doi/full/10.1080/15388220.2021.2018332 (Place: Abingdon Publisher: Routledge Journals, Taylor & Francis Ltd WOS:000740563500001) doi: 10.1080/15388220.2021.2018332 Ross, C. T. (2015, November). A multi-level Bayesian analysis of racial Bias in police shootings at the county-level in the United States, 2011-2014. PLoS ONE, 10(11), e0141854. Retrieved 2022-07-25, from https://journals.plos.org/ plosone/article?id=10.1371/journal.pone.0141854 (Publisher: Public Library of Science) doi: 10.1371/journal.pone.0141854 Ross, C. T., Winterhalder, B., & McElreath, R. (2018, June). Resolution of apparent paradoxes in the race-specific frequency of use-of-force by police. Palgrave Communications, 4(1), 1–9. Retrieved 2022-07-25, from https://www.nature.com/articles/s41599-018-0110-z (Publisher: Palgrave) doi: 10.1057/s41599-018-0110-z Rotaris, L., & Danielis, R. (2019, feb). The willingness to pay for a carbon tax in Italy. Transportation Research Part D: Transport and Environment, 67, 659–673. doi: 10.1016/j.trd.2019.01.001 Rudolph, K. E., Stuart, E. A., Vernick, J. S., & Webster, D. W. (2015). Association between connecticut’s permit-to-purchase handgun law and homicides. American journal of public health, 105(8), e49–e54. Sances, M. W., & You, H. Y. (2017). Who pays for government? 
descriptive representationb and exploitative revenue sources. Journal of Politics, 79(3), 1090–1094. doi: 10.1086/691354 Santaella-Tenorio, J., Cerdá, M., Villaveces, A., & Galea, S. (2016, January). What Do We Know About the Association Between Firearm Legislation and Firearm-Related Injuries? Epidemiologic Reviews, 38(1), 140–157. Retrieved 2023-04-11, from https://academic.oup.com/epirev/article/38/1/140/2754868 doi: 10.1093/epirev/mxv012 300 Scannell, L., & Gifford, R. (2013, jan). Personally Relevant Climate Change: The Role of Place Attachment and Local Versus Global Message Framing in Engagement. Environment and Behavior, 45(1), 60–85. doi: 10.1177/0013916511421196 Schleimer, J. P., Kravitz-Wirtz, N., Pallin, R., Charbonneau, A. K., Buggs, S. A., & Wintemute, G. J. (2020, October). Firearm ownership in California: A latent class analysis. Injury Prevention, 26(5), 456–462. Retrieved 2023-04-13, from https://injuryprevention.bmj.com/lookup/doi/10.1136/ injuryprev-2019-043412 doi: 10.1136/injuryprev-2019-043412 Schmalensee, R., & Stavins, R. N. (2017). Lessons learned from three decades of experience with cap and trade. Review of Environmental Economics and Policy, 11(1), 59–79. doi: 10.1093/reep/rew017 Semenza, D. C., Stansfield, R., Steidley, T., & Mancik, A. M. (2023, May). Firearm Availability, Homicide, and the Context of Structural Disadvantage. Homicide Studies, 27(2), 208–228. Retrieved 2023-04-12, from http://journals.sagepub.com/doi/10.1177/10887679211043806 doi: 10.1177/10887679211043806 Shammin, M. R., & Bullard, C. W. (2009, jun). Impact of cap-and-trade policies for reducing greenhouse gas emissions on U.S. households. Ecological Economics, 68(8-9), 2432–2438. doi: 10.1016/j.ecolecon.2009.03.024 Shapiro, J. S., & Walker, R. (2021). Where is pollution moving? Environmental markets and environmental justice. NBER WORKING PAPER SERIES. Retrieved from http://www.nber.org/papers/w28389 Sheriff, G., Ferris, A. E., & Shadbegian, R. J. (2019, jan). How did air quality standards affect employment at US power plants? The importance of timing, geography, and stringency. Journal of the Association of Environmental and Resource Economists, 6(1), 111–149. Retrieved from https://www.journals.uchicago.edu/doi/10.1086/700929 doi: 10.1086/700929 Siegel, M., Pahn, M., Xuan, Z., Fleegler, E., & Hemenway, D. (2019, October). The Impact of State Firearm Laws on Homicide and Suicide Deaths in the USA, 1991–2016: a Panel Study. Journal of General Internal Medicine, 34(10), 2021–2028. Retrieved 2023-04-11, from http://link.springer.com/10.1007/s11606-019-04922-x doi: 10.1007/s11606-019-04922-x 301 Siegel, M., & Rothman, E. F. (2016, July). Firearm Ownership and Suicide Rates Among US Men and Women, 1981–2013. American Journal of Public Health, 106(7), 1316–1322. Retrieved 2023-04-13, from https:// ajph.aphapublications.org/doi/full/10.2105/AJPH.2016.303182 doi: 10.2105/AJPH.2016.303182 Siegel, M., Xuan, Z., Ross, C. S., Galea, S., Kalesan, B., Fleegler, E., & Goss, K. A. (2017, December). Easiness of Legal Access to Concealed Firearm Permits and Homicide Rates in the United States. American Journal of Public Health, 107(12), 1923–1929. Retrieved 2023-04-11, from https:// ajph.aphapublications.org/doi/full/10.2105/AJPH.2017.304057 doi: 10.2105/AJPH.2017.304057 Sinn, H.-W. (2015). Introductory comment–the green paradox: a supply-side view of the climate problem. Review of Environmental Economics and Policy. Smith, J., & Spiegler, J. (2020). 
Explaining Gun Deaths: Gun Control, Mental Illness, and Policymaking in the American States. Policy Studies Journal, 48(1), 235–256. Retrieved 2023-04-12, from https://onlinelibrary.wiley.com/doi/abs/10.1111/psj.12242 (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/psj.12242) doi: 10.1111/psj.12242 Smith, M. R., Rojek, J. J., Petrocelli, M., & Withrow, B. (2017). Measuring disparities in police activities: a state of the art review. Policing, 40(2), 166–183. Retrieved from https://heinonline.org/HOL/License doi: 10.1108/PIJPSM-06-2016-0074 Spence, A., Poortinga, W., Butler, C., & Pidgeon, N. F. (2011, apr). Perceptions of climate change and willingness to save energy related to flood experience. Nature Climate Change, 1(1), 46–49. doi: 10.1038/nclimate1059 Steffensmeier, D., Painter-Davis, N., & Ulmer, J. (2017, August). Intersectionality of Race, Ethnicity, Gender, and Age on Criminal Punishment. Sociological Perspectives, 60(4), 810–833. (Publisher: SAGE Publications Inc.) doi: 10.1177/0731121416679371 Steffensmeier, D., Ulmer, J., & Kramer, J. (1998). The interaction of race, gender, and age in criminal sentencing: The punishment cost of being young, black, and male. Criminology, 36(4), 763–798. (Publisher: American Society of Criminology) doi: 10.1111/J.1745-9125.1998.TB01265.X 302 Steidley, T., & Yamane, D. (2022, February). Special Issue Editors’ Introduction: A Sociology of Firearms for the Twenty-First Century. Sociological Perspectives, 65(1), 5–11. Retrieved 2023-04-12, from https://doi.org/10.1177/07311214211040933 (Publisher: SAGE Publications Inc) doi: 10.1177/07311214211040933 Stroube, B. K. (2021, September). Using allegations to understand selection bias in organizations: Misconduct in the Chicago Police Department. Organizational Behavior and Human Decision Processes, 166, 149–165. Retrieved 2022-12-09, from https://www.sciencedirect.com/science/article/pii/ S074959781830339X doi: 10.1016/j.obhdp.2020.03.003 Takada, S., Choi, K. R., Natsui, S., Saadi, A., Buchbinder, L., Easterlin, M., & Zimmerman, F. J. (2021, December). Firearm laws and the network of firearm movement among US states. BMC Public Health, 21(1), 1803. Retrieved 2023-04-13, from https://bmcpublichealth.biomedcentral.com/ articles/10.1186/s12889-021-11772-y doi: 10.1186/s12889-021-11772-y Tashiro, J., Lane, R. S., Blass, L. W., Perez, E. A., & Sola, J. E. (2016, October). The effect of gun control laws on hospital admissions for children in the United States. Journal of Trauma and Acute Care Surgery, 81(4), S54–S60. Retrieved 2023-04-17, from https://journals.lww.com/01586154-201610001-00011 doi: 10.1097/TA.0000000000001177 Tatalovich, R., & Haider-Markel, D. P. (2022). Voting on gun rights: Mapping the electoral scope of the pro-gun constituency in America. Social Science Quarterly, 103(6), 1359–1370. Retrieved 2023-04-13, from https://onlinelibrary.wiley.com/doi/abs/10.1111/ssqu.13192 (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/ssqu.13192) doi: 10.1111/ssqu.13192 Terzi, A. (2020, dec). Crafting an effective narrative on the green transition. Energy Policy, 147, 111883. doi: 10.1016/j.enpol.2020.111883 Tilcsik, A. (2021). Statistical discrimination and the rationalization of stereotypes. American Sociological Review, 86(1), 93-122. Retrieved from https://doi.org/10.1177/0003122420969399 doi: 10.1177/0003122420969399 Tvinnereim, E., Fløttum, K., Gjerstad, Ø., Johannesson, M. P., & Nordø, Å. D. (2017, sep). Citizens’ preferences for tackling climate change. 
Quantitative and qualitative analyses of their freely formulated solutions. Global Environmental Change, 46, 34–41. doi: 10.1016/j.gloenvcha.2017.06.005 303 United States, & Bureau of Justice Statistics. (2012). Law Enforcement Agency Identifiers Crosswalk, United States. Retrieved from https://doi.org/10.3886/ICPSR35158.v2 Urbatsch, R. (2019, June). Gun-shy: Refusal to answer questions about firearm ownership. Social Science Journal, 56(2), 189–195. Retrieved 2023-04-11, from https://www.tandfonline.com/doi/full/10.1016/ j.soscij.2018.04.003 (Place: Amsterdam Publisher: Elsevier Science Bv WOS:000468189900005) doi: 10.1016/j.soscij.2018.04.003 U.S. Census Bureau. (2019). Selected Housing Characteristics 20105-2019 American Community Survey 5-year estimates. Retrieved from https://www.census.gov/ data/developers/data-sets/acs-5year/2019.html U.S. Census Bureau. (2021). U.S. Population Estimates 2010-2020. Retrieved from https://www2.census.gov/programs-surveys/popest/datasets/ 2010-2020/cities/ Vesterdal, M., & Svendsen, G. T. (2004). How should greenhouse gas permits be allocated in the EU? Energy Policy, 32(8), 961–968. doi: 10.1016/S0301-4215(03)00019-3 Videras, J., Owen, A. L., Conover, E., & Wu, S. (2012, jan). The influence of social relationships on pro-environment behaviors. Journal of Environmental Economics and Management, 63(1), 35–50. doi: 10.1016/j.jeem.2011.07.006 Vizzard, W. J. (2015). The Current and Future State of Gun Policy in the United States. Journal of Criminal Law & Criminology, 104(4), 879–904. Retrieved 2023-04-12, from http://libproxy.uoregon.edu/login?url=https:// search.ebscohost.com/login.aspx?direct=true&db=aph&AN= 110668173&site=ehost-live&scope=site (Publisher: Northwestern University School of Law) Wagner, G., Anthoff, D., Cropper, M., Dietz, S., Gillingham, K. T., Groom, B., . . . Stock, J. H. (2021, feb). Eight priorities for calculating the social cost of carbon. Nature, 590(7847), 548–550. Retrieved from http://www.nature.com/articles/d41586-021-00441-0 doi: 10.1038/d41586-021-00441-0 Wang, Q., Hubacek, K., Feng, K., Wei, Y. M., & Liang, Q. M. (2016, dec). Distributional effects of carbon taxation. Applied Energy, 184, 1123–1131. doi: 10.1016/j.apenergy.2016.06.083 304 Warner, T. D., & Ratcliff, S. (2021). What Guns Mean: Who Sees Guns as Important, Essential, and Empowering (and Why)? Sociological Inquiry, 91(2), 313–346. Retrieved 2023-04-12, from https://onlinelibrary.wiley.com/doi/abs/10.1111/soin.12408 (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/soin.12408) doi: 10.1111/soin.12408 Webster, D., Crifasi, C. K., & Vernick, J. S. (2014). Effects of the repeal of missouri’s handgun purchaser licensing law on homicides. Journal of Urban Health, 91, 293–302. Webster, D. W., McCourt, A. D., Crifasi, C. K., Booty, M. D., & Stuart, E. A. (2020). Evidence concerning the regulation of firearms design, sale, and carrying on fatal mass shootings in the United States. Criminology & Public Policy, 19(1), 171–212. Retrieved 2023-02-21, from https://onlinelibrary.wiley.com/doi/abs/10.1111/1745-9133.12487 (_eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/1745-9133.12487) doi: 10.1111/1745-9133.12487 Weisburd, S. (2021, May). Police presence, rapid response rates, and crime prevention. Review of Economics and Statistics, 103(2), 280–293. (Publisher: MIT Press Journals) doi: 10.1162/rest_a_00889 Weisburst, E. K. (2019a). Police Use of Force as an Extension of Arrests: Examining Disparities across Civilian and Officer Race. 
AEA Papers and Proceedings, 109, 152–156. doi: 10.1257/pandp.20191028 Weisburst, E. K. (2019b, May). Safety in police numbers: Evidence of police effectiveness from federal cops grant applications. American Law and Economics Review, 21(1), 81–109. (Publisher: Oxford University Press) doi: 10.1093/aler/ahy010 Weitzer, R. (2014, September). The puzzling neglect of Hispanic Americans in research on police–citizen relations. Ethnic and Racial Studies, 37(11), 1995–2013. Retrieved 2022-07-26, from /record/2014-36832-003 (Publisher: Informa UK Limited) doi: 10.1080/01419870.2013.790984 West, J. (2018). Racial Bias in Police Investigations. (October), 1–36. Westervelt, A. (2018). Drilled. Retrieved from https://drillednews.com/podcast-2/ 305 White, A. R., Nathan, N. L., & Faller, J. K. (2015). What Do I Need to Vote? Bureaucratic Discretion and Discrimination by Local Election Officials. American Political Science Review, 109(1), 129–142. Retrieved 2022-07-22, from https://doi.org/10.1017/S0003055414000562 doi: 10.1017/S0003055414000562 Wier, M., Birr-Pedersen, K., Jacobsen, H. K., & Klok, J. (2005, jan). Are CO2 taxes regressive? Evidence from the Danish experience. Ecological Economics, 52(2), 239–251. doi: 10.1016/j.ecolecon.2004.08.005 Williams, M. C. (n.d.). Gun Violence in Black and White: Evidence from Policy Reform in Missouri. Williams, R. C., Gordon, H., Burtraw, D., Carbone, J. C., & Morgenstern, R. D. (2014). The initial incidence of a carbon tax across U.S. States. National Tax Journal, 67(4), 807–829. doi: 10.17310/ntj.2014.4.03 Williams Jr, M. C. (2020). Gun violence in black and white: Evidence from policy reform in missouri. Unpublished Manuscript, NYU. Wintemute, G. J. (2015, March). The Epidemiology of Firearm Violence in the Twenty-First Century United States. Annual Review of Public Health, 36(1), 5–19. Retrieved 2023-04-11, from https://www.annualreviews.org/doi/ 10.1146/annurev-publhealth-031914-122535 doi: 10.1146/annurev-publhealth-031914-122535 Wintemute, G. J. (2019, October). Background Checks For Firearm Purchases: Problem Areas And Recommendations To Improve Effectiveness. Health Affairs, 38(10), 1702–1710. Retrieved 2023-04-13, from http://www.healthaffairs.org/doi/10.1377/hlthaff.2019.00671 doi: 10.1377/hlthaff.2019.00671 Yamazaki, A. (2017). Jobs and climate policy: Evidence from British Columbia’s revenue-neutral carbon tax. Journal of Environmental Economics and Management, 83, 197–216. doi: 10.1016/j.jeem.2017.03.003 Yang, J., Zou, L., Lin, T., Wu, Y., & Wang, H. (2014, dec). Public willingness to pay for CO2 mitigation and the determinants under climate change: A case study of Suzhou, China. Journal of Environmental Management, 146, 1–8. Retrieved from https://pubmed.ncbi.nlm.nih.gov/25151109/ doi: 10.1016/j.jenvman.2014.07.015 306