Collecting, analysing, and drawing inferences from data are central to research in the medical and social sciences. Unfortunately, for any number of reasons, it is rarely possible to collect all the intended data. The ubiquity of missing data, and the problems this poses for both analysis and inference, has spawned a substantial statistical literature dating from the 1950s. At that time, when statistical computing was in its infancy, many analyses were only feasible because of the carefully planned balance in the dataset (for example, the same number of observations on each unit). Missing data meant the available data for analysis were unbalanced, thus complicating the planned analysis and in some instances rendering it infeasible. Early work on the problem was therefore largely computational (e.g. Healy and Westmacott, 1956, Afifi and Elashoff, 1966, Orchard and Woodbury, 1972, Dempster et al., 1977).
The wider question of the consequences of non-trivial proportions of missing data for inference was neglected until the seminal paper by Rubin (1976). This set out a typology for assumptions about the reasons for missing data and sketched their implications for analysis and inference. It marked the beginning of a broad stream of research about the analysis of partially observed data. The literature is now huge and continues to grow, both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enables researchers to apply these methods.
For a broad overview of the literature, a good place to start for applied statisticians is Little and Rubin (2019), who give a thorough treatment of likelihood methods and an introduction to multiple imputation. Allison (2002) presents a less technical overview. Schafer (1997) is more algorithmic, focusing on the expectation maximisation (EM) algorithm and imputation using the multivariate normal and general location model. Molenberghs and Kenward (2007) focus on clinical studies, while Daniels and Hogan (2008) focus on longitudinal studies with a Bayesian emphasis.
The above books concentrate on the parametric approaches. However, there is also a growing literature based around using inverse probability weighting, in the spirit of Horvitz and Thompson (1952), and associated doubly robust methods. In particular, we refer to the work of Robins and colleagues (e.g. Robins and Rotnitzky, 1995, Scharfstein et al., 1999). Vansteelandt et al. (2009) give an accessible introduction to these developments. A comparison with multiple imputation in a simple setting is given by Carpenter et al. (2006). The pros and cons are debated in Kang and Schafer (2007) and the theory is brought together by Tsiatis (2006).
This book is concerned with a particular statistical method for analysing and drawing inferences from incomplete data called multiple imputation (MI). MI was initially proposed by Rubin (1987) in the context of surveys; increasing awareness among researchers about the possible effects of missing data (e.g. Klebanoff and Cole, 2008) has since led to an upsurge of interest (e.g. Sterne et al. (2009), Kenward and Carpenter (2007), Schafer (1999a), Rubin (1996)), fuelled by the increasing availability of software and computing power.
MI is attractive because it is both practical and widely applicable. Well-developed statistical software (see, for example, issue 45 of the Journal of Statistical Software) has placed MI within the reach of most researchers in the medical and social sciences, whether or not they have undertaken advanced training in statistics. However, the increasing use of MI in a range of settings beyond that originally envisaged has led to a bewildering proliferation of algorithms and software. Further, the implications of the underlying assumptions in the context of the data at hand are often unclear.
We are writing for researchers in the medical and social sciences with the aim of clarifying the issues raised by missing data, outlining the rationale for MI, explaining the motivation for, and relationships between, the various imputation algorithms, and describing and illustrating the application of MI in various settings and to some complex data structures.
Throughout most of the book (with the partial exception of Chapter 8), we will assume that a key aim of analysis with incomplete data is to recover the information lost due to missing data. More specifically, we will take the 'substantive model' as the model that would be used with complete data. We can then define certain desirable properties of our estimator with incomplete data. First, it should be unbiased for the value of the parameter we would see with complete data. Second, it should have low variance. Third, we should have a reliable variance formula and a means of constructing confidence intervals with the advertised coverage.
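These three properties can be made concrete with a small, purely illustrative simulation (the sample sizes, parameter values, and the 40% missingness rate below are invented for illustration). When values are deleted completely at random, the complete-case mean remains unbiased for the complete-data target, but its variance is inflated: information has been lost.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_sim, true_mean = 200, 2000, 5.0

full_ests, cc_ests = [], []
for _ in range(n_sim):
    y = rng.normal(true_mean, 2.0, size=n)
    full_ests.append(y.mean())
    # Drop 40% of the values completely at random; analyse complete cases.
    keep = rng.random(n) > 0.4
    cc_ests.append(y[keep].mean())

full_ests, cc_ests = np.array(full_ests), np.array(cc_ests)
# Both estimators are (approximately) unbiased here ...
print("bias:", full_ests.mean() - true_mean, cc_ests.mean() - true_mean)
# ... but the complete-case estimator has a larger variance.
print("variance:", full_ests.var(), cc_ests.var())
```

This is the benign case; later chapters show that when missingness depends on the data, the complete-case estimator can also be badly biased.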
In the context of multiple imputation, it is worth noting that these remain our aims; the aim of multiple imputation is not to accurately predict the missing values. Rubin (1996) describes it as follows:
'Judging the quality of missing data procedures by their ability to recreate the individual missing values [...] does not lead to choosing procedures that result in valid inference, which is our objective'.
An objection may be that the ability to perfectly predict missing values would result in valid inference; however, in our view, this hypothetical scenario would be one in which data are not really 'missing'.
Central to the analysis of partially observed data is an understanding of why the data are missing and the implications of this for the analysis. This is the focus of the remainder of this chapter. Introducing some of the examples that run through the book, we show how Rubin's typology (Rubin, 1976) provides the foundational framework for understanding the implications of missing data.
In this section, we consider possible reasons for missing data, illustrate these with examples, and note some preliminary implications for inference. We use the word 'possible' advisedly, since we can rarely be sure of the mechanism giving rise to missing data. Instead, a range of possible mechanisms are consistent with the observed data. In practice, we therefore wish to analyse the data under different mechanisms to establish the robustness of our inference in the face of uncertainty about the missingness mechanism.
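Rubin's typology distinguishes data that are missing completely at random (MCAR), missing at random (MAR, where missingness depends only on observed data), and missing not at random (MNAR, where it depends on the unobserved values themselves). A small simulation sketch, with invented parameter values, shows why the distinction matters: the mean of the observed values is close to the true mean only under MCAR.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two correlated variables: x is fully observed, y is subject to missingness.
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)  # true mean of y is 0

# MCAR: missingness is independent of everything (30% dropped at random).
mcar = rng.random(n) < 0.3

# MAR: missingness in y depends only on the observed x.
mar = rng.random(n) < 1 / (1 + np.exp(-x))

# MNAR: missingness in y depends on the unobserved value of y itself.
mnar = rng.random(n) < 1 / (1 + np.exp(-y))

for name, miss in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
    print(f"{name}: mean of observed y = {y[~miss].mean():+.3f}")
```

Under MAR the observed-data mean of y is biased, but because the mechanism depends only on x, the bias can be corrected using the observed data; under MNAR it cannot, without further assumptions.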
All datasets consist of a series of units each of which provides information on a series of items. For example, in a cross-sectional questionnaire survey, the units would be individuals, and the items their answers to the questions. In a household survey, the units would be households, and the items information about the household and members of the household. In longitudinal studies, units would typically be individuals, while items would be longitudinal data from those individuals. In this book, units therefore correspond to the highest level in multi-level (i.e. hierarchical) data, and unless stated otherwise, data from different units are statistically independent.
Within this framework, it is useful to distinguish between units for which all the information is missing, termed unit non-response, and units that contribute partial information, termed item non-response. The statistical issues are the same in both cases, and both can in principle be handled by MI. However, the main focus of this book is the latter.
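The distinction can be made concrete with a small, purely illustrative data frame (the variables and values here are invented): a unit non-respondent has every item missing, while an item non-respondent has some, but not all, items missing.

```python
import numpy as np
import pandas as pd

# Hypothetical survey: each row is a unit (an individual), each column an
# item; NaN marks a missing answer.
df = pd.DataFrame({
    "age":    [34,    51,     np.nan, 28],
    "income": [42.0,  np.nan, np.nan, 31.5],
    "vote":   ["yes", "no",   np.nan, np.nan],
})

# Unit non-response: every item is missing for the unit.
unit_nonresponse = df.isna().all(axis=1)
# Item non-response: some, but not all, items are missing.
item_nonresponse = df.isna().any(axis=1) & ~unit_nonresponse

print(df.assign(unit_nr=unit_nonresponse, item_nr=item_nonresponse))
```

Here the third unit is a unit non-respondent, the second and fourth are item non-respondents, and the first responds in full.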
Figure 1.1 Detail from a senior mandarin's house front in New Territories, Hong Kong.
We now introduce two key examples, which we return to throughout the book.
It is very important to investigate the patterns of missing data before embarking on a formal analysis. This can throw up vital information that might otherwise be overlooked and may even allow the missing data to be traced. For example, when analysing the new wave of a longitudinal survey, a colleague's careful examination of missing data patterns established that many of the missing questionnaires could be traced to a set of cardboard boxes. These turned out to have been left behind in a move. They were recovered, and the data entered.
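As a first step in such an investigation, tabulating the distinct missingness patterns and the number of units in each is often revealing. A minimal sketch, using an invented dataset with three partially observed variables:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: x1 fully observed, x2 and x3 partially observed.
df = pd.DataFrame(rng.normal(size=(n, 3)), columns=["x1", "x2", "x3"])
df.loc[rng.random(n) < 0.2, "x2"] = np.nan
df.loc[rng.random(n) < 0.1, "x3"] = np.nan

# Tabulate the distinct missingness patterns (True = missing) and the
# number of units falling into each pattern.
patterns = df.isna().value_counts()
print(patterns)
```

An unexpected pattern, such as a block of units missing every item from a particular wave, is exactly the kind of clue that led to the cardboard boxes in the anecdote above.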
Table 1.1 YCS variables for exploring the relationship between Year 11 attainment and social stratification.