Techniques that Combine Random Sampling with Random Assignment
Paul J. Lavrakas, PhD, is Senior Fellow at NORC at the University of Chicago, Adjunct Professor at the University of Illinois-Chicago, and Senior Methodologist at the Social Research Centre of the Australian National University and at the Office for Survey Research at Michigan State University.

Michael W. Traugott, PhD, is Research Professor in the Institute for Social Research at the University of Michigan.

Courtney Kennedy, PhD, is Director of Survey Research at Pew Research Center in Washington, DC.

Allyson L. Holbrook, PhD, is Professor of Public Administration and Psychology at the University of Illinois-Chicago.

Edith D. de Leeuw, PhD, is Professor of Survey Methodology in the Department of Methodology and Statistics at Utrecht University.

Brady T. West, PhD, is Research Associate Professor in the Survey Research Center at the University of Michigan-Ann Arbor.
List of Contributors xix
Preface by Dr. Judith Tanur xxv
About the Companion Website xxix

1 Probability Survey-Based Experimentation and the Balancing of Internal and External Validity Concerns 1
Paul J. Lavrakas, Courtney Kennedy, Edith D. de Leeuw, Brady T. West, Allyson L. Holbrook, and Michael W. Traugott
1.1 Validity Concerns in Survey Research 3
1.2 Survey Validity and Survey Error 5
1.3 Internal Validity 6
1.4 Threats to Internal Validity 8
1.5 External Validity 11
1.6 Pairing Experimental Designs with Probability Sampling 12
1.7 Some Thoughts on Conducting Experiments with Online Convenience Samples 12
1.8 The Contents of this Book 15
References 15

Part I Introduction to Section on Within-Unit Coverage 19
Paul J. Lavrakas and Edith D. de Leeuw

2 Within-Household Selection Methods: A Critical Review and Experimental Examination 23
Jolene D. Smyth, Kristen Olson, and Mathew Stange
2.1 Introduction 23
2.2 Within-Household Selection and Total Survey Error 24
2.3 Types of Within-Household Selection Techniques 24
2.4 Within-Household Selection in Telephone Surveys 25
2.5 Within-Household Selection in Self-Administered Surveys 26
2.6 Methodological Requirements of Experimentally Studying Within-Household Selection Methods 27
2.7 Empirical Example 30
2.8 Data and Methods 31
2.9 Analysis Plan 34
2.10 Results 35
2.11 Discussion and Conclusions 40
References 42

3 Measuring Within-Household Contamination: The Challenge of Interviewing More Than One Member of a Household 47
Colm O'Muircheartaigh, Stephen Smith, and Jaclyn S. Wong
3.1 Literature Review 47
3.2 Data and Methods 50
Investigators 53
Field/Project Directors 53
3.3 The Sequence of Analyses 55
3.4 Results 55
3.5 Effect on Standard Errors of the Estimates 57
3.6 Effect on Response Rates 58
3.7 Effect on Responses 61
3.8 Substantive Results 64
References 64

Part II Survey Experiments with Techniques to Reduce Nonresponse 67
Edith D. de Leeuw and Paul J. Lavrakas

4 Survey Experiments on Interactions and Nonresponse: A Case Study of Incentives and Modes 69
A. Bianchi and S. Biffignandi
4.1 Introduction 69
4.2 Literature Overview 70
4.3 Case Study: Examining the Interaction between Incentives and Mode 73
4.4 Concluding Remarks 83
Acknowledgments 85
References 86

5 Experiments on the Effects of Advance Letters in Surveys 89
Susanne Vogl, Jennifer A. Parsons, Linda K. Owens, and Paul J. Lavrakas
5.1 Introduction 89
5.2 State of the Art on Experimentation on the Effect of Advance Letters 93
5.3 Case Studies: Experimental Research on the Effect of Advance Letters 95
5.4 Case Study I: Violence against Men in Intimate Relationships 96
5.5 Case Study II: The Neighborhood Crime and Justice Study 100
5.6 Discussion 106
5.7 Research Agenda for the Future 107
References 108

Part III Overview of the Section on the Questionnaire 111
Allyson Holbrook and Michael W. Traugott

6 Experiments on the Design and Evaluation of Complex Survey Questions 113
Paul Beatty, Carol Cosenza, and Floyd J. Fowler Jr.
6.1 Question Construction: Dangling Qualifiers 115
6.2 Overall Meanings of Question Can Be Obscured by Detailed Words 117
6.3 Are Two Questions Better than One? 119
6.4 The Use of Multiple Questions to Simplify Response Judgments 121
6.5 The Effect of Context or Framing on Answers 122
6.6 Do Questionnaire Effects Vary Across Sub-groups of Respondents? 124
6.7 Discussion 126
References 128

7 Impact of Response Scale Features on Survey Responses to Behavioral Questions 131
Florian Keusch and Ting Yan
7.1 Introduction 131
7.2 Previous Work on Scale Design Features 132
7.3 Methods 134
7.4 Results 136
7.5 Discussion 141
Acknowledgment 143
7.A Question Wording 143
7.A.1 Experimental Questions (One Question Per Screen) 143
7.A.2 Validation Questions (One Per Screen) 144
7.A.3 GfK Profile Questions (Not Part of the Questionnaire) 145
7.B Test of Intera