Total Survey Error in Practice

Improving Quality in the Era of Big Data
 
 
Standards Information Network (publisher)
  • published February 6, 2017
  • |
  • 624 pages
 
E-book | PDF with Adobe DRM | System requirements
978-1-119-04168-9 (ISBN)
 
An edited volume prepared for a conference on Total Survey Error (TSE), this book provides an overview of the TSE framework and of current TSE research on survey design, data collection, estimation, and analysis. Because survey data inform many public policy and business decisions, the book focuses on a framework for understanding and improving survey data quality.
The book also addresses data-quality issues in official statistics and in survey, opinion, and market research, where the changing statistical landscape has produced larger and messier data sets. These changes challenge survey organizations to collect data more efficiently without sacrificing quality.
This volume presents up-to-date research and reporting from more than 70 contributors, leading academics and researchers from a range of fields. The chapters are organized into five main sections: The TSE Framework; Implications for Survey Design; Data Collection and Data Processing Applications; Evaluation and Improvement; and Estimation and Analysis. Each chapter introduces and examines at least one error source, such as the widely recognized sampling, measurement, and nonresponse errors. The TSE framework presented also encourages readers not to lose sight of less commonly studied error sources, such as coverage error, processing error, and specification error. The book further notes the relationships between errors and the ways in which efforts to reduce one type can increase another, yielding an estimate with greater total error.
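The tradeoff described above can be sketched numerically. The simulation below is illustrative only and not taken from the book; the sample sizes and the measurement-bias value are hypothetical assumptions, chosen to show how a design change that reduces one error source (here, nonresponse, by raising the number of completed interviews) can still increase total error, summarized as mean squared error, if it introduces a measurement bias.

```python
import random

random.seed(42)
TRUE_MEAN = 50.0  # hypothetical population mean of the survey variable

def simulate(n_respondents, measurement_bias, reps=2000):
    """Return empirical (bias^2, variance, MSE) of the sample mean
    over repeated hypothetical surveys."""
    estimates = []
    for _ in range(reps):
        # Each respondent's answer is shifted by the measurement bias.
        sample = [random.gauss(TRUE_MEAN + measurement_bias, 10)
                  for _ in range(n_respondents)]
        estimates.append(sum(sample) / n_respondents)
    mean_est = sum(estimates) / reps
    bias_sq = (mean_est - TRUE_MEAN) ** 2
    var = sum((e - mean_est) ** 2 for e in estimates) / reps
    return bias_sq, var, bias_sq + var

# Design A: fewer respondents, but a mode with no measurement bias.
mse_a = simulate(n_respondents=200, measurement_bias=0.0)[2]

# Design B: aggressive follow-up doubles the respondent count but,
# in this hypothetical, nudges answers upward by 1.5 units.
mse_b = simulate(n_respondents=400, measurement_bias=1.5)[2]
```

Under these toy assumptions, design B's larger sample halves the variance of the estimate, yet its squared bias dominates, so its total MSE exceeds design A's: exactly the kind of accounting the TSE framework makes explicit.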
Examples are provided of recent scandals involving incorrect or misleading official statistics and survey estimates: the ongoing controversy over the number of civilians killed in the Iraq war; the incorrect predictions that many reputable polling firms made about the outcome of the 2012 US election; a faulty calculation of the Consumer Price Index in Sweden that led to overpayments of social security benefits; and accusations that some organizations collecting data for international surveys fabricated portions of their data sets. This practical perspective on survey data quality highlights both the errors that can afflict survey data and the methods and approaches needed to prevent or remove them.
1st edition
  • English
  • Somerset
  • |
  • USA
John Wiley & Sons Inc
  • For professional and research use
  • 16.86 MB
978-1-119-04168-9 (9781119041689)
Paul P. Biemer, PhD, is distinguished fellow at RTI International and associate director of Survey Research and Development at the Odum Institute, University of North Carolina, USA.
Edith de Leeuw, PhD, is professor of survey methodology in the Department of Methodology and Statistics at Utrecht University, the Netherlands.
Stephanie Eckman, PhD, is fellow at RTI International, USA.
Brad Edwards is vice president, director of Field Services, and deputy area director at Westat, USA.
Frauke Kreuter, PhD, is professor and director of the Joint Program in Survey Methodology, University of Maryland, USA; professor of statistics and methodology at the University of Mannheim, Germany; and head of the Statistical Methods Research Department at the Institute for Employment Research, Germany.
Lars E. Lyberg, PhD, is senior advisor at Inizio, Sweden.
N. Clyde Tucker, PhD, is principal survey methodologist at the American Institutes for Research, USA.
Brady T. West, PhD, is research associate professor in the Survey Research Center, located within the Institute for Social Research at the University of Michigan (U-M), and also serves as statistical consultant on the Consulting for Statistics, Computing and Analytics Research (CSCAR) team at U-M, USA.
1 - Title Page [Seite 5]
2 - Copyright Page [Seite 6]
3 - Contents [Seite 7]
4 - Notes on Contributors [Seite 21]
5 - Preface [Seite 27]
6 - Section 1 The Concept of TSE and the TSE Paradigm [Seite 31]
6.1 - Chapter 1 The Roots and Evolution of the Total Survey Error Concept [Seite 33]
6.1.1 - 1.1 Introduction and Historical Backdrop [Seite 33]
6.1.2 - 1.2 Specific Error Sources and Their Control or Evaluation [Seite 35]
6.1.3 - 1.3 Survey Models and Total Survey Design [Seite 40]
6.1.4 - 1.4 The Advent of More Systematic Approaches Toward Survey Quality [Seite 42]
6.1.5 - 1.5 What the Future Will Bring [Seite 46]
6.1.6 - References [Seite 48]
6.2 - Chapter 2 Total Twitter Error: Decomposing Public Opinion Measurement on Twitter from a Total Survey Error Perspective [Seite 53]
6.2.1 - 2.1 Introduction [Seite 53]
6.2.1.1 - 2.1.1 Social Media: A Potential Alternative to Surveys? [Seite 53]
6.2.1.2 - 2.1.2 TSE as a Launching Point for Evaluating Social Media Error [Seite 54]
6.2.2 - 2.2 Social Media: An Evolving Online Public Sphere [Seite 55]
6.2.2.1 - 2.2.1 Nature, Norms, and Usage Behaviors of Twitter [Seite 55]
6.2.2.2 - 2.2.2 Research on Public Opinion on Twitter [Seite 56]
6.2.3 - 2.3 Components of Twitter Error [Seite 57]
6.2.3.1 - 2.3.1 Coverage Error [Seite 58]
6.2.3.2 - 2.3.2 Query Error [Seite 58]
6.2.3.3 - 2.3.3 Interpretation Error [Seite 59]
6.2.3.4 - 2.3.4 The Deviation of Unstructured Data Errors from TSE [Seite 60]
6.2.4 - 2.4 Studying Public Opinion on the Twittersphere and the Potential Error Sources of Twitter Data: Two Case Studies [Seite 61]
6.2.4.1 - 2.4.1 Research Questions and Methodology of Twitter Data Analysis [Seite 62]
6.2.4.2 - 2.4.2 Potential Coverage Error in Twitter Examples [Seite 63]
6.2.4.3 - 2.4.3 Potential Query Error in Twitter Examples [Seite 66]
6.2.4.3.1 - 2.4.3.1 Implications of Including or Excluding RTs for Error [Seite 66]
6.2.4.3.2 - 2.4.3.2 Implications of Query Iterations for Error [Seite 67]
6.2.4.4 - 2.4.4 Potential Interpretation Error in Twitter Examples [Seite 69]
6.2.5 - 2.5 Discussion [Seite 70]
6.2.5.1 - 2.5.1 A Framework That Better Describes Twitter Data Errors [Seite 70]
6.2.5.2 - 2.5.2 Other Subclasses of Errors to Be Investigated [Seite 71]
6.2.6 - 2.6 Conclusion [Seite 72]
6.2.6.1 - 2.6.1 What Advice We Offer for Researchers and Research Consumers [Seite 72]
6.2.6.2 - 2.6.2 Directions for Future Research [Seite 72]
6.2.7 - References [Seite 73]
6.3 - Chapter 3 Big Data: A Survey Research Perspective [Seite 77]
6.3.1 - 3.1 Introduction [Seite 77]
6.3.2 - 3.2 Definitions [Seite 78]
6.3.2.1 - 3.2.1 Sources [Seite 79]
6.3.2.2 - 3.2.2 Attributes [Seite 79]
6.3.2.2.1 - 3.2.2.1 Volume [Seite 80]
6.3.2.2.2 - 3.2.2.2 Variety [Seite 80]
6.3.2.2.3 - 3.2.2.3 Velocity [Seite 80]
6.3.2.2.4 - 3.2.2.4 Veracity [Seite 80]
6.3.2.2.5 - 3.2.2.5 Variability [Seite 82]
6.3.2.2.6 - 3.2.2.6 Value [Seite 82]
6.3.2.2.7 - 3.2.2.7 Visualization [Seite 82]
6.3.2.3 - 3.2.3 The Making of Big Data [Seite 82]
6.3.3 - 3.3 The Analytic Challenge: From Database Marketing to Big Data and Data Science [Seite 86]
6.3.4 - 3.4 Assessing Data Quality [Seite 88]
6.3.4.1 - 3.4.1 Validity [Seite 88]
6.3.4.2 - 3.4.2 Missingness [Seite 89]
6.3.4.3 - 3.4.3 Representation [Seite 89]
6.3.5 - 3.5 Applications in Market, Opinion, and Social Research [Seite 89]
6.3.5.1 - 3.5.1 Adding Value through Linkage [Seite 90]
6.3.5.2 - 3.5.2 Combining Big Data and Surveys in Market Research [Seite 91]
6.3.6 - 3.6 The Ethics of Research Using Big Data [Seite 92]
6.3.7 - 3.7 The Future of Surveys in a Data-Rich Environment [Seite 92]
6.3.8 - References [Seite 95]
6.4 - Chapter 4 The Role of Statistical Disclosure Limitation in Total Survey Error [Seite 101]
6.4.1 - 4.1 Introduction [Seite 101]
6.4.2 - 4.2 Primer on SDL [Seite 102]
6.4.3 - 4.3 TSE-Aware SDL [Seite 105]
6.4.3.1 - 4.3.1 Additive Noise [Seite 105]
6.4.3.2 - 4.3.2 Data Swapping [Seite 108]
6.4.4 - 4.4 Edit-Respecting SDL [Seite 109]
6.4.4.1 - 4.4.1 Simulation Experiment [Seite 110]
6.4.4.2 - 4.4.2 A Deeper Issue [Seite 112]
6.4.5 - 4.5 SDL-Aware TSE [Seite 113]
6.4.6 - 4.6 Full Unification of Edit, Imputation, and SDL [Seite 114]
6.4.7 - 4.7 "Big Data" Issues [Seite 117]
6.4.8 - 4.8 Conclusion [Seite 119]
6.4.9 - Acknowledgments [Seite 121]
6.4.10 - References [Seite 122]
7 - Section 2 Implications for Survey Design [Seite 125]
7.1 - Chapter 5 The Undercoverage-Nonresponse Tradeoff [Seite 127]
7.1.1 - 5.1 Introduction [Seite 127]
7.1.2 - 5.2 Examples of the Tradeoff [Seite 128]
7.1.3 - 5.3 Simple Demonstration of the Tradeoff [Seite 129]
7.1.4 - 5.4 Coverage and Response Propensities and Bias [Seite 130]
7.1.5 - 5.5 Simulation Study of Rates and Bias [Seite 132]
7.1.5.1 - 5.5.1 Simulation Setup [Seite 132]
7.1.5.2 - 5.5.2 Results for Coverage and Response Rates [Seite 135]
7.1.5.3 - 5.5.3 Results for Undercoverage and Nonresponse Bias [Seite 136]
7.1.5.3.1 - 5.5.3.1 Scenario 1 [Seite 137]
7.1.5.3.2 - 5.5.3.2 Scenario 2 [Seite 138]
7.1.5.3.3 - 5.5.3.3 Scenario 3 [Seite 138]
7.1.5.3.4 - 5.5.3.4 Scenario 4 [Seite 139]
7.1.5.3.5 - 5.5.3.5 Scenario 7 [Seite 139]
7.1.5.4 - 5.5.4 Summary of Simulation Results [Seite 140]
7.1.6 - 5.6 Costs [Seite 140]
7.1.7 - 5.7 Lessons for Survey Practice [Seite 141]
7.1.8 - References [Seite 142]
7.2 - Chapter 6 Mixing Modes: Tradeoffs Among Coverage, Nonresponse, and Measurement Error (Roger Tourangeau) [Seite 145]
7.2.1 - 6.1 Introduction [Seite 145]
7.2.2 - 6.2 The Effect of Offering a Choice of Modes [Seite 148]
7.2.3 - 6.3 Getting People to Respond Online [Seite 149]
7.2.4 - 6.4 Sequencing Different Modes of Data Collection [Seite 150]
7.2.5 - 6.5 Separating the Effects of Mode on Selection and Reporting [Seite 152]
7.2.5.1 - 6.5.1 Conceptualizing Mode Effects [Seite 152]
7.2.5.2 - 6.5.2 Separating Observation from Nonobservation Error [Seite 153]
7.2.5.2.1 - 6.5.2.1 Direct Assessment of Measurement Errors [Seite 153]
7.2.5.2.2 - 6.5.2.2 Statistical Adjustments [Seite 154]
7.2.5.2.3 - 6.5.2.3 Modeling Measurement Error [Seite 156]
7.2.6 - 6.6 Maximizing Comparability Versus Minimizing Error [Seite 157]
7.2.7 - 6.7 Conclusions [Seite 159]
7.2.8 - References [Seite 160]
7.3 - Chapter 7 Mobile Web Surveys: A Total Survey Error Perspective [Seite 163]
7.3.1 - 7.1 Introduction [Seite 163]
7.3.2 - 7.2 Coverage [Seite 165]
7.3.3 - 7.3 Nonresponse [Seite 167]
7.3.3.1 - 7.3.1 Unit Nonresponse [Seite 167]
7.3.3.2 - 7.3.2 Breakoffs [Seite 169]
7.3.3.3 - 7.3.3 Completion Times [Seite 170]
7.3.3.4 - 7.3.4 Compliance with Special Requests [Seite 171]
7.3.4 - 7.4 Measurement Error [Seite 172]
7.3.4.1 - 7.4.1 Grouping of Questions [Seite 173]
7.3.4.1.1 - 7.4.1.1 Question-Order Effects [Seite 173]
7.3.4.1.2 - 7.4.1.2 Number of Items on a Page [Seite 173]
7.3.4.1.3 - 7.4.1.3 Grids versus Item-By-Item [Seite 173]
7.3.4.2 - 7.4.2 Effects of Question Type [Seite 175]
7.3.4.2.1 - 7.4.2.1 Socially Undesirable Questions [Seite 175]
7.3.4.2.2 - 7.4.2.2 Open-Ended Questions [Seite 176]
7.3.4.3 - 7.4.3 Response and Scale Effects [Seite 176]
7.3.4.3.1 - 7.4.3.1 Primacy Effects [Seite 176]
7.3.4.3.2 - 7.4.3.2 Slider Bars and Drop-Down Questions [Seite 177]
7.3.4.3.3 - 7.4.3.3 Scale Orientation [Seite 177]
7.3.4.4 - 7.4.4 Item Missing Data [Seite 178]
7.3.5 - 7.5 Links Between Different Error Sources [Seite 178]
7.3.6 - 7.6 The Future of Mobile Web Surveys [Seite 179]
7.3.7 - References [Seite 180]
7.4 - Chapter 8 The Effects of a Mid-Data Collection Change in Financial Incentives on Total Survey Error in the National Survey of Famil... [Seite 185]
7.4.1 - 8.1 Introduction [Seite 185]
7.4.2 - 8.2 Literature Review: Incentives in Face-to-Face Surveys [Seite 186]
7.4.2.1 - 8.2.1 Nonresponse Rates [Seite 186]
7.4.2.2 - 8.2.2 Nonresponse Bias [Seite 187]
7.4.2.3 - 8.2.3 Measurement Error [Seite 188]
7.4.2.4 - 8.2.4 Survey Costs [Seite 189]
7.4.2.5 - 8.2.5 Summary [Seite 189]
7.4.3 - 8.3 Data and Methods [Seite 189]
7.4.3.1 - 8.3.1 NSFG Design: Overview [Seite 189]
7.4.3.2 - 8.3.2 Design of Incentive Experiment [Seite 191]
7.4.3.3 - 8.3.3 Variables [Seite 191]
7.4.3.4 - 8.3.4 Statistical Analysis [Seite 192]
7.4.4 - 8.4 Results [Seite 193]
7.4.4.1 - 8.4.1 Nonresponse Error [Seite 193]
7.4.4.2 - 8.4.2 Sampling Error and Costs [Seite 196]
7.4.4.3 - 8.4.3 Measurement Error [Seite 200]
7.4.5 - 8.5 Conclusion [Seite 203]
7.4.5.1 - 8.5.1 Summary [Seite 203]
7.4.5.2 - 8.5.2 Recommendations for Practice [Seite 204]
7.4.6 - References [Seite 205]
7.5 - Chapter 9 A Total Survey Error Perspective on Surveys in Multinational, Multiregional, and Multicultural Contexts [Seite 209]
7.5.1 - 9.1 Introduction [Seite 209]
7.5.2 - 9.2 TSE in Multinational, Multiregional, and Multicultural Surveys [Seite 210]
7.5.3 - 9.3 Challenges Related to Representation and Measurement Error Components in Comparative Surveys [Seite 214]
7.5.3.1 - 9.3.1 Representation Error [Seite 214]
7.5.3.1.1 - 9.3.1.1 Coverage Error [Seite 214]
7.5.3.1.2 - 9.3.1.2 Sampling Error [Seite 215]
7.5.3.1.3 - 9.3.1.3 Unit Nonresponse Error [Seite 216]
7.5.3.1.4 - 9.3.1.4 Adjustment Error [Seite 217]
7.5.3.2 - 9.3.2 Measurement Error [Seite 217]
7.5.3.2.1 - 9.3.2.1 Validity [Seite 218]
7.5.3.2.2 - 9.3.2.2 Measurement Error - The Response Process [Seite 218]
7.5.3.2.3 - 9.3.2.3 Processing Error [Seite 221]
7.5.4 - 9.4 QA and QC in 3MC Surveys [Seite 222]
7.5.4.1 - 9.4.1 The Importance of a Solid Infrastructure [Seite 222]
7.5.4.2 - 9.4.2 Examples of QA and QC Approaches Practiced by Some 3MC Surveys [Seite 223]
7.5.4.3 - 9.4.3 QA/QC Recommendations [Seite 225]
7.5.5 - References [Seite 226]
7.6 - Chapter 10 Smartphone Participation in Web Surveys: Choosing Between the Potential for Coverage, Nonresponse, and Measurement Error [Seite 233]
7.6.1 - 10.1 Introduction [Seite 233]
7.6.1.1 - 10.1.1 Focus on Smartphones [Seite 234]
7.6.1.2 - 10.1.2 Smartphone Participation: Web-Survey Design Decision Tree [Seite 234]
7.6.1.3 - 10.1.3 Chapter Outline [Seite 235]
7.6.2 - 10.2 Prevalence of Smartphone Participation in Web Surveys [Seite 236]
7.6.3 - 10.3 Smartphone Participation Choices [Seite 239]
7.6.3.1 - 10.3.1 Disallowing Smartphone Participation [Seite 239]
7.6.3.2 - 10.3.2 Discouraging Smartphone Participation [Seite 241]
7.6.4 - 10.4 Instrument Design Choices [Seite 242]
7.6.4.1 - 10.4.1 Doing Nothing [Seite 243]
7.6.4.2 - 10.4.2 Optimizing for Smartphones [Seite 243]
7.6.5 - 10.5 Device and Design Treatment Choices [Seite 246]
7.6.5.1 - 10.5.1 PC/Legacy versus Smartphone Designs [Seite 246]
7.6.5.2 - 10.5.2 PC/Legacy versus PC/New [Seite 246]
7.6.5.3 - 10.5.3 Smartphone/Legacy versus Smartphone/New [Seite 247]
7.6.5.4 - 10.5.4 Device and Design Treatment Options [Seite 247]
7.6.6 - 10.6 Conclusion [Seite 248]
7.6.7 - 10.7 Future Challenges and Research Needs [Seite 249]
7.6.8 - Appendix 10.A: Data Sources [Seite 250]
7.6.8.1 - A.1 Market Strategies (17 studies) [Seite 250]
7.6.8.2 - A.2 Experimental Data from Market Strategies International [Seite 250]
7.6.8.3 - A.3 Sustainability Cultural Indicators Program (SCIP) [Seite 250]
7.6.8.4 - A.4 Army Study to Assess Risk and Resilience in Service members (STARRS) [Seite 250]
7.6.8.5 - A.5 Panel Study of Income Dynamics Childhood Retrospective Circumstances Study (PSID-CRCS) [Seite 250]
7.6.9 - Appendix 10.B: Smartphone Prevalence in Web Surveys [Seite 251]
7.6.10 - Appendix 10.C: Screen Captures from Peterson et al. (2013) Experiment [Seite 255]
7.6.11 - Appendix 10.D: Survey Questions Used in the Analysis of the Peterson et al. (2013) Experiment [Seite 259]
7.6.12 - References [Seite 261]
7.7 - Chapter 11 Survey Research and the Quality of Survey Data Among Ethnic Minorities [Seite 265]
7.7.1 - 11.1 Introduction [Seite 265]
7.7.2 - 11.2 On the Use of the Terms Ethnicity and Ethnic Minorities [Seite 266]
7.7.3 - 11.3 On the Representation of Ethnic Minorities in Surveys [Seite 267]
7.7.3.1 - 11.3.1 Coverage of Ethnic Minorities [Seite 268]
7.7.3.2 - 11.3.2 Factors Affecting Nonresponse Among Ethnic Minorities [Seite 269]
7.7.3.3 - 11.3.3 Postsurvey Adjustment Issues Related to Surveys Among Ethnic Minorities [Seite 271]
7.7.4 - 11.4 Measurement Issues [Seite 272]
7.7.4.1 - 11.4.1 The Tradeoff When Using Response-Enhancing Measures [Seite 273]
7.7.5 - 11.5 Comparability, Timeliness, and Cost Concerns [Seite 274]
7.7.5.1 - 11.5.1 Comparability [Seite 275]
7.7.5.2 - 11.5.2 Timeliness and Cost Considerations [Seite 276]
7.7.6 - 11.6 Conclusion [Seite 277]
7.7.7 - References [Seite 278]
8 - Section 3 Data Collection and Data Processing Applications [Seite 283]
8.1 - Chapter 12 Measurement Error in Survey Operations Management: Detection, Quantification, Visualization, and Reduction [Seite 285]
8.1.1 - 12.1 TSE Background on Survey Operations [Seite 286]
8.1.2 - 12.2 Better and Better: Using Behavior Coding (CARIcode) and Paradata to Evaluate and Improve Question (Specification) Erro... [Seite 287]
8.1.2.1 - 12.2.1 CARI Coding at Westat [Seite 289]
8.1.2.2 - 12.2.2 CARI Experiments [Seite 290]
8.1.3 - 12.3 Field-Centered Design: Mobile App for Rapid Reporting and Management [Seite 291]
8.1.3.1 - 12.3.1 Mobile App Case Study [Seite 292]
8.1.3.2 - 12.3.2 Paradata Quality [Seite 294]
8.1.4 - 12.4 Faster and Cheaper: Detecting Falsification With GIS Tools [Seite 295]
8.1.5 - 12.5 Putting It All Together: Field Supervisor Dashboards [Seite 298]
8.1.5.1 - 12.5.1 Dashboards in Operations [Seite 298]
8.1.5.2 - 12.5.2 Survey Research Dashboards [Seite 299]
8.1.5.2.1 - 12.5.2.1 Dashboards and Paradata [Seite 299]
8.1.5.2.2 - 12.5.2.2 Relationship to TSE [Seite 299]
8.1.5.3 - 12.5.3 The Stovepipe Problem [Seite 300]
8.1.5.4 - 12.5.4 The Dashboard Solution [Seite 300]
8.1.5.5 - 12.5.5 Case Study [Seite 300]
8.1.5.5.1 - 12.5.5.1 Single Sign-On [Seite 300]
8.1.5.5.2 - 12.5.5.2 Alerts [Seite 301]
8.1.5.5.3 - 12.5.5.3 General Dashboard Design [Seite 301]
8.1.6 - 12.6 Discussion [Seite 303]
8.1.7 - References [Seite 305]
8.2 - Chapter 13 Total Survey Error for Longitudinal Surveys [Seite 309]
8.2.1 - 13.1 Introduction [Seite 309]
8.2.2 - 13.2 Distinctive Aspects of Longitudinal Surveys [Seite 310]
8.2.3 - 13.3 TSE Components in Longitudinal Surveys [Seite 311]
8.2.4 - 13.4 Design of Longitudinal Surveys from a TSE Perspective [Seite 315]
8.2.4.1 - 13.4.1 Is the Panel Study Fixed-Time or Open-Ended? [Seite 316]
8.2.4.2 - 13.4.2 Who To Follow Over Time? [Seite 316]
8.2.4.3 - 13.4.3 Should the Survey Use Interviewers or Be Self-Administered? [Seite 317]
8.2.4.4 - 13.4.4 How Long Should Between-Wave Intervals Be? [Seite 318]
8.2.4.5 - 13.4.5 How Should Longitudinal Instruments Be Designed? [Seite 319]
8.2.5 - 13.5 Examples of Tradeoffs in Three Longitudinal Surveys [Seite 320]
8.2.5.1 - 13.5.1 Tradeoff between Coverage, Sampling and Nonresponse Error in LISS Panel [Seite 320]
8.2.5.2 - 13.5.2 Tradeoff between Nonresponse and Measurement Error in BHPS [Seite 322]
8.2.5.3 - 13.5.3 Tradeoff between Specification and Measurement Error in SIPP [Seite 323]
8.2.6 - 13.6 Discussion [Seite 324]
8.2.7 - References [Seite 325]
8.3 - Chapter 14 Text Interviews on Mobile Devices [Seite 329]
8.3.1 - 14.1 Texting as a Way of Interacting [Seite 330]
8.3.1.1 - 14.1.1 Properties and Affordances [Seite 330]
8.3.1.1.1 - 14.1.1.1 Stable Properties [Seite 330]
8.3.1.1.2 - 14.1.1.2 Properties That Vary across Devices and Networks [Seite 331]
8.3.2 - 14.2 Contacting and Inviting Potential Respondents through Text [Seite 333]
8.3.3 - 14.3 Texting as an Interview Mode [Seite 333]
8.3.3.1 - 14.3.1 Coverage and Sampling Error [Seite 334]
8.3.3.2 - 14.3.2 Nonresponse Error [Seite 337]
8.3.3.3 - 14.3.3 Measurement Error: Conscientious Responding and Disclosure in Texting Interviews [Seite 338]
8.3.3.4 - 14.3.4 Measurement Error: Interface Design for Texting Interviews [Seite 340]
8.3.4 - 14.4 Costs and Efficiency of Text Interviewing [Seite 342]
8.3.5 - 14.5 Discussion [Seite 344]
8.3.6 - References [Seite 345]
8.4 - Chapter 15 Quantifying Measurement Errors in Partially Edited Business Survey Data [Seite 349]
8.4.1 - 15.1 Introduction [Seite 349]
8.4.2 - 15.2 Selective Editing [Seite 350]
8.4.2.1 - 15.2.1 Editing and Measurement Error [Seite 350]
8.4.2.2 - 15.2.2 Definition and the General Idea of Selective Editing [Seite 351]
8.4.2.3 - 15.2.3 Selekt [Seite 352]
8.4.2.4 - 15.2.4 Experiences from Implementations of Selekt [Seite 353]
8.4.3 - 15.3 Effects of Errors Remaining After SE [Seite 355]
8.4.3.1 - 15.3.1 Sampling Below the Threshold: The Two-Step Procedure [Seite 356]
8.4.3.2 - 15.3.2 Randomness of Measurement Errors [Seite 356]
8.4.3.3 - 15.3.3 Modeling and Estimation of Measurement Errors [Seite 357]
8.4.3.4 - 15.3.4 Output Editing [Seite 358]
8.4.4 - 15.4 Case Study: Foreign Trade in Goods Within the European Union [Seite 358]
8.4.4.1 - 15.4.1 Sampling Below the Cutoff Threshold for Editing [Seite 360]
8.4.4.2 - 15.4.2 Results [Seite 360]
8.4.4.3 - 15.4.3 Comments on Results [Seite 362]
8.4.5 - 15.5 Editing Big Data [Seite 364]
8.4.6 - 15.6 Conclusions [Seite 365]
8.4.7 - References [Seite 365]
9 - Section 4 Evaluation and Improvement [Seite 369]
9.1 - Chapter 16 Estimating Error Rates in an Administrative Register and Survey Questions Using a Latent Class Model [Seite 371]
9.1.1 - 16.1 Introduction [Seite 371]
9.1.2 - 16.2 Administrative and Survey Measures of Neighborhood [Seite 372]
9.1.3 - 16.3 A Latent Class Model for Neighborhood of Residence [Seite 375]
9.1.4 - 16.4 Results [Seite 378]
9.1.4.1 - 16.4.1 Model Fit [Seite 378]
9.1.4.2 - 16.4.2 Error Rate Estimates [Seite 380]
9.1.5 - 16.5 Discussion and Conclusion [Seite 384]
9.1.6 - Appendix 16.A: Program Input and Data [Seite 385]
9.1.7 - Acknowledgments [Seite 387]
9.1.8 - References [Seite 387]
9.2 - Chapter 17 ASPIRE: An Approach for Evaluating and Reducing the Total Error in Statistical Products with Application to Registers and the National Accounts [Seite 389]
9.2.1 - 17.1 Introduction and Background [Seite 389]
9.2.2 - 17.2 Overview of ASPIRE [Seite 390]
9.2.3 - 17.3 The ASPIRE Model [Seite 392]
9.2.3.1 - 17.3.1 Decomposition of the TSE into Component Error Sources [Seite 392]
9.2.3.2 - 17.3.2 Risk Classification [Seite 394]
9.2.3.3 - 17.3.3 Criteria for Assessing Quality [Seite 394]
9.2.3.4 - 17.3.4 Ratings System [Seite 395]
9.2.4 - 17.4 Evaluation of Registers [Seite 397]
9.2.4.1 - 17.4.1 Types of Registers [Seite 397]
9.2.4.2 - 17.4.2 Error Sources Associated with Registers [Seite 398]
9.2.4.3 - 17.4.3 Application of ASPIRE to the TPR [Seite 400]
9.2.5 - 17.5 National Accounts [Seite 401]
9.2.5.1 - 17.5.1 Error Sources Associated with the NA [Seite 402]
9.2.5.2 - 17.5.2 Application of ASPIRE to the Quarterly Swedish NA [Seite 404]
9.2.6 - 17.6 A Sensitivity Analysis of GDP Error Sources [Seite 406]
9.2.6.1 - 17.6.1 Analysis of Computer Programming, Consultancy, and Related Services [Seite 406]
9.2.6.2 - 17.6.2 Analysis of Product Motor Vehicles [Seite 408]
9.2.6.3 - 17.6.3 Limitations of the Sensitivity Analysis [Seite 409]
9.2.7 - 17.7 Concluding Remarks [Seite 409]
9.2.8 - Appendix 17.A: Accuracy Dimension Checklist [Seite 411]
9.2.9 - References [Seite 414]
9.3 - Chapter 18 Classification Error in Crime Victimization Surveys: A Markov Latent Class Analysis [Seite 417]
9.3.1 - 18.1 Introduction [Seite 417]
9.3.2 - 18.2 Background [Seite 419]
9.3.2.1 - 18.2.1 Surveys of Crime Victimization [Seite 419]
9.3.2.2 - 18.2.2 Error Evaluation Studies [Seite 420]
9.3.3 - 18.3 Analytic Approach [Seite 422]
9.3.3.1 - 18.3.1 The NCVS and Its Relevant Attributes [Seite 422]
9.3.3.2 - 18.3.2 Description of Analysis Data Set, Victimization Indicators, and Covariates [Seite 422]
9.3.3.3 - 18.3.3 Technical Description of the MLC Model and Its Assumptions [Seite 424]
9.3.4 - 18.4 Model Selection [Seite 426]
9.3.4.1 - 18.4.1 Model Selection Process [Seite 426]
9.3.4.2 - 18.4.2 Model Selection Results [Seite 428]
9.3.5 - 18.5 Results [Seite 429]
9.3.5.1 - 18.5.1 Estimates of Misclassification [Seite 429]
9.3.5.2 - 18.5.2 Estimates of Classification Error Among Demographic Groups [Seite 429]
9.3.6 - 18.6 Discussion and Summary of Findings [Seite 434]
9.3.6.1 - 18.6.1 High False-Negative Rates in the NCVS [Seite 434]
9.3.6.2 - 18.6.2 Decreasing Prevalence Rates Over Time [Seite 435]
9.3.6.3 - 18.6.3 Classification Error among Demographic Groups [Seite 435]
9.3.6.4 - 18.6.4 Recommendations for Analysts [Seite 436]
9.3.6.5 - 18.6.5 Limitations [Seite 436]
9.3.7 - 18.7 Conclusions [Seite 437]
9.3.8 - Appendix 18.A: Derivation of the Composite False-Negative Rate [Seite 437]
9.3.9 - Appendix 18.B: Derivation of the Lower Bound for False-Negative Rates from a Composite Measure [Seite 438]
9.3.10 - Appendix 18.C: Examples of Latent GOLD Syntax [Seite 438]
9.3.11 - References [Seite 440]
9.4 - Chapter 19 Using Doorstep Concerns Data to Evaluate and Correct for Nonresponse Error in a Longitudinal Survey [Seite 443]
9.4.1 - 19.1 Introduction [Seite 443]
9.4.2 - 19.2 Data and Methods [Seite 446]
9.4.2.1 - 19.2.1 Data [Seite 446]
9.4.2.2 - 19.2.2 Analytic Use of Doorstep Concerns Data [Seite 446]
9.4.3 - 19.3 Results [Seite 448]
9.4.3.1 - 19.3.1 Unit Response Rates in Later Waves and Average Number of Don't Know and Refused Answers [Seite 448]
9.4.3.2 - 19.3.2 Total Nonresponse Bias and Nonresponse Bias Components [Seite 451]
9.4.3.3 - 19.3.3 Adjusting for Nonresponse [Seite 451]
9.4.4 - 19.4 Discussion [Seite 458]
9.4.5 - Acknowledgment [Seite 460]
9.4.6 - References [Seite 460]
9.5 - Chapter 20 Total Survey Error Assessment for Sociodemographic Subgroups in the 2012 U.S. National Immunization Survey [Seite 463]
9.5.1 - 20.1 Introduction [Seite 463]
9.5.2 - 20.2 TSE Model Framework [Seite 464]
9.5.3 - 20.3 Overview of the National Immunization Survey [Seite 467]
9.5.4 - 20.4 National Immunization Survey: Inputs for TSE Model [Seite 470]
9.5.4.1 - 20.4.1 Stage 1: Sample-Frame Coverage Error [Seite 471]
9.5.4.2 - 20.4.2 Stage 2: Nonresponse Error [Seite 473]
9.5.4.3 - 20.4.3 Stage 3: Measurement Error [Seite 474]
9.5.5 - 20.5 National Immunization Survey TSE Analysis [Seite 475]
9.5.5.1 - 20.5.1 TSE Analysis for the Overall Age-Eligible Population [Seite 475]
9.5.5.2 - 20.5.2 TSE Analysis by Sociodemographic Subgroups [Seite 478]
9.5.6 - 20.6 Summary [Seite 482]
9.5.7 - References [Seite 483]
9.6 - Chapter 21 Establishing Infrastructure for the Use of Big Data to Understand Total Survey Error: Examples from Four Survey Research Organizations [Seite 487]
9.6.1 - Overview [Seite 487]
9.6.2 - Part 1 Big Data Infrastructure at the Institute for Employment Research (IAB) [Seite 488]
9.6.2.1 - 21.1.1 Dissemination of Big Data for Survey Research at the Institute for Employment Research [Seite 488]
9.6.2.2 - 21.1.2 Big Data Linkages at the IAB and Total Survey Error [Seite 489]
9.6.2.2.1 - 21.1.2.1 Individual-Level Data: Linked Panel "Labour Market and Social Security" Survey Data and Administrative Data (PAS... [Seite 489]
9.6.2.2.2 - 21.1.2.2 Establishment Data: The IAB Establishment Panel and Administrative Registers as Sampling Frames [Seite 491]
9.6.2.3 - 21.1.3 Outlook [Seite 493]
9.6.2.4 - Acknowledgments [Seite 494]
9.6.3 - References [Seite 494]
9.6.4 - Part 2 Using Administrative Records Data at the U.S. Census Bureau: Lessons Learned from Two Research Projects Evaluating Su... [Seite 497]
9.6.4.1 - 21.2.1 Census Bureau Research and Programs [Seite 497]
9.6.4.2 - 21.2.2 Using Administrative Data to Estimate Measurement Error in Survey Reports [Seite 498]
9.6.4.2.1 - 21.2.2.1 Address and Person Matching Challenges [Seite 499]
9.6.4.2.2 - 21.2.2.2 Event Matching Challenges [Seite 500]
9.6.4.2.3 - 21.2.2.3 Weighting Challenges [Seite 501]
9.6.4.2.4 - 21.2.2.4 Record Update Challenges [Seite 501]
9.6.4.2.5 - 21.2.2.5 Authority and Confidentiality Challenges [Seite 502]
9.6.4.3 - 21.2.3 Summary [Seite 502]
9.6.4.4 - Acknowledgments and Disclaimers [Seite 502]
9.6.5 - References [Seite 502]
9.6.6 - Part 3 Statistics New Zealand's Approach to Making Use of Alternative Data Sources in a New Era of Integrated Data [Seite 504]
9.6.6.1 - 21.3.1 Data Availability and Development of Data Infrastructure in New Zealand [Seite 505]
9.6.6.2 - 21.3.2 Quality Assessment and Different Types of Errors [Seite 506]
9.6.6.3 - 21.3.3 Integration of Infrastructure Components and Developmental Streams [Seite 507]
9.6.7 - References [Seite 508]
9.6.8 - Part 4 Big Data Serving Survey Research: Experiences at the University of Michigan Survey Research Center [Seite 508]
9.6.8.1 - 21.4.1 Introduction [Seite 508]
9.6.8.2 - 21.4.2 Marketing Systems Group (MSG) [Seite 509]
9.6.8.2.1 - 21.4.2.1 Using MSG Age Information to Increase Sampling Efficiency [Seite 510]
9.6.8.3 - 21.4.3 MCH Strategic Data (MCH) [Seite 511]
9.6.8.3.1 - 21.4.3.1 Assessing MCH's Teacher Frame with Manual Listing Procedures [Seite 512]
9.6.8.4 - 21.4.4 Conclusion [Seite 514]
9.6.8.5 - Acknowledgments and Disclaimers [Seite 514]
9.6.9 - References [Seite 514]
10 - Section 5 Estimation and Analysis [Seite 517]
10.1 - Chapter 22 Analytic Error as an Important Component of Total Survey Error: Results from a Meta-Analysis [Seite 519]
10.1.1 - 22.1 Overview [Seite 519]
10.1.2 - 22.2 Analytic Error as a Component of TSE [Seite 520]
10.1.3 - 22.3 Appropriate Analytic Methods for Survey Data [Seite 522]
10.1.4 - 22.4 Methods [Seite 525]
10.1.4.1 - 22.4.1 Coding of Published Articles [Seite 525]
10.1.4.2 - 22.4.2 Statistical Analyses [Seite 525]
10.1.5 - 22.5 Results [Seite 527]
10.1.5.1 - 22.5.1 Descriptive Statistics [Seite 527]
10.1.5.2 - 22.5.2 Bivariate Analyses [Seite 529]
10.1.5.3 - 22.5.3 Trends in Error Rates Over Time [Seite 532]
10.1.6 - 22.6 Discussion [Seite 535]
10.1.6.1 - 22.6.1 Summary of Findings [Seite 535]
10.1.6.2 - 22.6.2 Suggestions for Practice [Seite 536]
10.1.6.3 - 22.6.3 Limitations [Seite 536]
10.1.6.4 - 22.6.4 Directions for Future Research [Seite 537]
10.1.7 - Acknowledgments [Seite 538]
10.1.8 - References [Seite 538]
10.2 - Chapter 23 Mixed-Mode Research: Issues in Design and Analysis [Seite 541]
10.2.1 - 23.1 Introduction [Seite 541]
10.2.2 - 23.2 Designing Mixed-Mode Surveys [Seite 542]
10.2.3 - 23.3 Literature Overview [Seite 544]
10.2.4 - 23.4 Diagnosing Sources of Error in Mixed-Mode Surveys [Seite 546]
10.2.4.1 - 23.4.1 Distinguishing Between Selection and Measurement Effects: The Multigroup Approach [Seite 546]
10.2.4.1.1 - 23.4.1.1 Multigroup Latent Variable Approach [Seite 546]
10.2.4.1.2 - 23.4.1.2 Multigroup Observed Variable Approach [Seite 550]
10.2.4.2 - 23.4.2 Distinguishing Between Selection and Measurement Effects: The Counterfactual or Potential Outcome Approach [Seite 551]
10.2.4.3 - 23.4.3 Distinguishing Between Selection and Measurement Effects: The Reference Survey Approach [Seite 552]
10.2.5 - 23.5 Adjusting for Mode Measurement Effects [Seite 553]
10.2.5.1 - 23.5.1 The Multigroup Approach to Adjust for Mode Measurement Effects [Seite 553]
10.2.5.1.1 - 23.5.1.1 Multigroup Latent Variable Approach [Seite 553]
10.2.5.1.2 - 23.5.1.2 Multigroup Observed Variable Approach [Seite 555]
10.2.5.2 - 23.5.2 The Counterfactual (Potential Outcomes) Approach to Adjust for Mode Measurement Effects [Seite 555]
10.2.5.3 - 23.5.3 The Reference Survey Approach to Adjust for Mode Measurement Effects [Seite 556]
10.2.6 - 23.6 Conclusion [Seite 557]
10.2.7 - References [Seite 558]
10.3 - Chapter 24 The Effect of Nonresponse and Measurement Error on Wage Regression across Survey Modes: A Validation Study [Seite 561]
10.3.1 - 24.1 Introduction [Seite 561]
10.3.2 - 24.2 Nonresponse and Response Bias in Survey Statistics [Seite 562]
10.3.2.1 - 24.2.1 Bias in Regression Coefficients [Seite 562]
10.3.2.2 - 24.2.2 Research Questions [Seite 563]
10.3.3 - 24.3 Data and Methods [Seite 564]
10.3.3.1 - 24.3.1 Survey Data [Seite 564]
10.3.3.1.1 - 24.3.1.1 Sampling and Experimental Design [Seite 564]
10.3.3.1.2 - 24.3.1.2 Data Collection [Seite 565]
10.3.3.2 - 24.3.2 Administrative Data [Seite 566]
10.3.3.2.1 - 24.3.2.1 General Information [Seite 566]
10.3.3.2.2 - 24.3.2.2 Variable Selection [Seite 567]
10.3.3.2.3 - 24.3.2.3 Limitations [Seite 567]
10.3.3.2.4 - 24.3.2.4 Combined Data [Seite 567]
10.3.3.3 - 24.3.3 Bias in Univariate Statistics [Seite 568]
10.3.3.3.1 - 24.3.3.1 Bias: The Dependent Variable [Seite 568]
10.3.3.3.2 - 24.3.3.2 Bias: The Independent Variables [Seite 568]
10.3.3.4 - 24.3.4 Analytic Approach [Seite 569]
10.3.4 - 24.4 Results [Seite 571]
10.3.4.1 - 24.4.1 The Effect of Nonresponse and Measurement Error on Regression Coefficients [Seite 571]
10.3.4.2 - 24.4.2 Nonresponse Adjustments [Seite 573]
10.3.5 - 24.5 Summary and Conclusion [Seite 576]
10.3.6 - Acknowledgments [Seite 577]
10.3.7 - Appendix 24.A [Seite 578]
10.3.8 - Appendix 24.B [Seite 579]
10.3.9 - References [Seite 584]
10.4 - Chapter 25 Errors in Linking Survey and Administrative Data [Seite 587]
10.4.1 - 25.1 Introduction [Seite 587]
10.4.2 - 25.2 Conceptual Framework of Linkage and Error Sources [Seite 589]
10.4.3 - 25.3 Errors Due to Linkage Consent [Seite 591]
10.4.3.1 - 25.3.1 Evidence of Linkage Consent Bias [Seite 592]
10.4.3.2 - 25.3.2 Optimizing Linkage Consent Rates [Seite 593]
10.4.3.2.1 - 25.3.2.1 Placement of the Linkage Consent Request [Seite 593]
10.4.3.2.2 - 25.3.2.2 Wording of the Linkage Consent Request [Seite 593]
10.4.3.2.3 - 25.3.2.3 Active Versus Passive Consent [Seite 594]
10.4.3.2.4 - 25.3.2.4 Obtaining Linkage Consent in Longitudinal Surveys [Seite 594]
10.4.4 - 25.4 Erroneous Linkage with Unique Identifiers [Seite 595]
10.4.5 - 25.5 Erroneous Linkage with Nonunique Identifiers [Seite 597]
10.4.5.1 - 25.5.1 Common Nonunique Identifiers When Linking Data on People [Seite 597]
10.4.5.2 - 25.5.2 Common Nonunique Identifiers When Linking Data on Establishments [Seite 597]
10.4.6 - 25.6 Applications and Practical Guidance [Seite 598]
10.4.6.1 - 25.6.1 Applications [Seite 598]
10.4.6.2 - 25.6.2 Practical Guidance [Seite 599]
10.4.6.2.1 - 25.6.2.1 Initial Data Quality [Seite 600]
10.4.6.2.2 - 25.6.2.2 Preprocessing [Seite 600]
10.4.7 - 25.7 Conclusions and Take-Home Points [Seite 601]
10.4.8 - References [Seite 601]
11 - Index [Seite 605]
12 - Wiley Series in Survey Methodology [Seite 624]
13 - EULA [Seite 627]

File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; macOS; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, PocketBook, Sony, Tolino, and many others (not Kindle)

The PDF format displays each book page identically on any hardware, which makes PDF well suited to complex layouts such as those used in textbooks and technical books (images, tables, columns, footnotes). On the small displays of e-readers and smartphones, however, PDFs can be tedious because they require a great deal of scrolling. Adobe DRM applies "hard" copy protection: if the necessary requirements are not met, you will not be able to open the e-book, so prepare your reading hardware before downloading.

Please note when using the Adobe Digital Editions reading software: we strongly recommend authorizing it with your personal Adobe ID after installation.

Further information can be found in our e-book help.


Download (available immediately)

€91.99
incl. 7% VAT
Download / single license
PDF with Adobe DRM
see system requirements
Order e-book