PREFACE xvii
CONTRIBUTORS xix
1 Introduction 1 Steven A. Haney
1.1 The Beginning of High Content Screening, 1
1.2 Six Skill Sets Essential for Running HCS Experiments, 4
1.3 Integrating Skill Sets into a Team, 7
1.4 A Few Words on Experimental Design, 8
1.5 Conclusions, 9
Key Points, 9
Further Reading, 10
References, 10
SECTION I FIRST PRINCIPLES 11
2 Fluorescence and Cell Labeling 13 Anthony Davies and Steven A. Haney
2.1 Introduction, 13
2.2 Anatomy of Fluorescent Probes, Labels, and Dyes, 14
2.3 Stokes' Shift and Biological Fluorophores, 15
2.4 Fluorophore Properties, 16
2.5 Localization of Fluorophores Within Cells, 18
2.6 Multiplexing Fluorescent Reagents, 26
2.7 Specialized Imaging Applications Derived from Complex Properties of Fluorescence, 27
2.8 Conclusions, 30
Key Points, 31
Further Reading, 31
References, 31
3 Microscopy Fundamentals 33 Steven A. Haney, Anthony Davies, and Douglas Bowman
3.1 Introducing HCS Hardware, 33
3.2 Deconstructing Light Microscopy, 37
3.3 Using the Imager to Collect Data, 43
3.4 Conclusions, 45
Key Points, 45
Further Reading, 46
References, 46
4 Image Processing 47 John Bradley, Douglas Bowman, and Arijit Chakravarty
4.1 Overview of Image Processing and Image Analysis in HCS, 47
4.2 What is a Digital Image?, 48
4.3 "Addressing" Pixel Values in Image Analysis Algorithms, 48
4.4 Image Analysis Workflow, 49
4.5 Conclusions, 60
Key Points, 60
Further Reading, 60
References, 60
SECTION II GETTING STARTED 63
5 A General Guide to Selecting and Setting Up a High Content Imaging Platform 65 Craig Furman, Douglas Bowman, Anthony Davies, Caroline Shamu, and Steven A. Haney
5.1 Determining Expectations of the HCS System, 65
5.2 Establishing an HC Platform Acquisition Team, 66
5.3 Basic Hardware Decisions, 67
5.4 Data Generation, Analysis, and Retention, 72
5.5 Installation, 73
5.6 Managing the System, 75
5.7 Setting Up Workflows for Researchers, 77
5.8 Conclusions, 78
Key Points, 79
Further Reading, 79
6 Informatics Considerations 81 Jay Copeland and Caroline Shamu
6.1 Informatics Infrastructure for High Content Screening, 81
6.2 Using Databases to Store HCS Data, 86
6.3 Mechanics of an Informatics Solution, 89
6.4 Developing Image Analysis Pipelines: Data Management Considerations, 95
6.5 Compliance With Emerging Data Standards, 99
6.6 Conclusions, 101
Key Points, 102
Further Reading, 102
References, 102
7 Basic High Content Assay Development 103 Steven A. Haney and Douglas Bowman
7.1 Introduction, 103
7.2 Initial Technical Considerations for Developing a High Content Assay, 103
7.3 A Simple Protocol to Fix and Stain Cells, 107
7.4 Image Capture and Examining Images, 109
7.5 Conclusions, 111
Key Points, 112
Further Reading, 112
Reference, 112
SECTION III ANALYZING DATA 113
8 Designing Metrics for High Content Assays 115 Arijit Chakravarty, Steven A. Haney, and Douglas Bowman
8.1 Introduction: Features, Metrics, Results, 115
8.2 Looking at Features, 116
8.3 Metrics and Results: The Metric is the Message, 120
8.4 Types of High Content Assays and Their Metrics, 121
8.5 Metrics to Results: Putting it all Together, 126
8.6 Conclusions, 128
Key Points, 128
Further Reading, 129
References, 129
9 Analyzing Well-Level Data 131 Steven A. Haney and John Ringeling
9.1 Introduction, 131
9.2 Reviewing Data, 132
9.3 Plate and Control Normalizations of Data, 134
9.4 Calculation of Assay Statistics, 135
9.5 Data Analysis: Hit Selection, 138
9.6 IC50 Determinations, 139
9.7 Conclusions, 143
Key Points, 143
Further Reading, 143
References, 144
10 Analyzing Cell-Level Data 145 Steven A. Haney, Lin Guey, and Arijit Chakravarty
10.1 Introduction, 145
10.2 Understanding General Statistical Terms and Concepts, 146
10.3 Examining Data, 149
10.4 Developing a Data Analysis Plan, 155
10.5 Cell-Level Data Analysis: Comparing Distributions Through Inferential Statistics, 158
10.6 Analyzing Normal (or Transformed) Data, 159
10.7 Analyzing Non-Normal Data, 160
10.8 When to Call For Help, 162
10.9 Conclusions, 162
Key Points, 162
Further Reading, 163
References, 163
SECTION IV ADVANCED WORK 165
11 Designing Robust Assays 167 Arijit Chakravarty, Douglas Bowman, Anthony Davies, Steven A. Haney, and Caroline Shamu
11.1 Introduction, 167
11.2 Common Technical Issues in High Content Assays, 167
11.3 Designing Assays to Minimize Trouble, 172
11.4 Looking for Trouble: Building in Quality Control, 177
11.5 Conclusions, 179
Key Points, 180
Further Reading, 180
References, 180
12 Automation and Screening 181 John Ringeling, John Donovan, Arijit Chakravarty, Anthony Davies, Steven A. Haney, Douglas Bowman, and Ben Knight
12.1 Introduction, 181
12.2 Some Preliminary Considerations, 181
12.3 Laboratory Options, 183
12.4 The Automated HCS Laboratory, 186
12.5 Conclusions, 192
Key Points, 192
Further Reading, 193
13 High Content Analysis for Tissue Samples 195 Kristine Burke, Vaishali Shinde, Alice McDonald, Douglas Bowman, and Arijit Chakravarty
13.1 Introduction, 195
13.2 Design Choices in Setting Up a High Content Assay in Tissue, 196
13.3 System Configuration: Aspects Unique to Tissue-Based HCS, 199
13.4 Data Analysis, 203
13.5 Conclusions, 207
Key Points, 207
Further Reading, 207
References, 208
SECTION V HIGH CONTENT ANALYTICS 209
14 Factoring and Clustering High Content Data 211 Steven A. Haney
14.1 Introduction, 211
14.2 Common Unsupervised Learning Methods, 212
14.3 Preparing for an Unsupervised Learning Study, 218
14.4 Conclusions, 228
Key Points, 228
Further Reading, 228
References, 229
15 Supervised Machine Learning 231 Jeff Palmer and Arijit Chakravarty
15.1 Introduction, 231
15.2 Foundational Concepts, 232
15.3 Choosing a Machine Learning Algorithm, 234
15.4 When Do You Need Machine Learning, and How Do You Use It?, 243
15.5 Conclusions, 244
Key Points, 244
Further Reading, 244
Appendix A Websites and Additional Information on Instruments, Reagents, and Instruction 247
Appendix B A Few Words About One Letter: Using R to Quickly Analyze HCS Data 249 Steven A. Haney
B.1 Introduction, 249
B.2 Setting Up R, 250
B.3 Analyzing Data in R, 253
B.4 Where to Go Next, 261
Further Reading, 263
Appendix C Hypothesis Testing for High Content Data: A Refresher 265 Lin Guey and Arijit Chakravarty
C.1 Introduction, 265
C.2 Defining Simple Hypothesis Testing, 266
C.3 Simple Statistical Tests to Compare Two Groups, 269
C.4 Statistical Tests on Groups of Samples, 276
C.5 Introduction to Regression Models, 280
C.6 Conclusions, 285
Key Concepts, 286
Further Reading, 286
GLOSSARY 287
TUTORIAL 295
INDEX 323
STEVEN A. HANEY
Microscopy has historically been a descriptive endeavor; indeed, it is frequently described as an art as well as a science. It is also increasingly recognized that image-based scoring needs to be standardized for numerous medical applications. In medical diagnosis, for example, quantitative interpretation of images has been used since the 1950s for applications such as detecting cervical dysplasias and karyotyping [1]. Cameras used in microscopes during this era could capture an image, reduce the image data to a grid that was printed on a dot-matrix printer, and integrate regional intensities to interpret shapes and features. In essence, these principles have not changed in 50 years, but the sophistication and throughput of the process have increased with advances in microscope and camera design and in computational power. In the early 1990s, these advances were realized as automated acquisition and analysis of biological assays became more common.
Advances in automated microscopy, namely the automated movement of slides on the stage, focusing, changing fluorophore filters, and setting proper image exposure times, were also essential to standardizing and improving biomedical imaging. Automated microscopy was necessary to reduce the amount of time required of laboratory personnel to produce these images, which was a bottleneck for these studies, especially medical diagnoses. A team of scientists from Boston and Cambridge, Massachusetts described an automated microscope in 1976 that directly anticipated its use in subcellular microscopy and image analysis [2]. The microscope, and a processed image of a promyelocyte captured using the instrument, are shown in Figure 1.1.
Figure 1.1 An early automated microscope used in biomedical research. (a) An example of an automated fluorescence microscope. Letters inside the figure are from the original source. The system is outfitted with controlled stage and filter movements (S and F), a push-button console for manual movements (B), a television camera and monitor (T and m) and a video terminal for digitizing video images (v). (b) A video image of a promyelocyte and (c) image analysis of (b), showing an outline of the nucleus and cell borders, which can be used in automated cell type recognition. Reproduced with permission from [2]. Copyright 1974 John Wiley & Sons.
Until the mid-1990s, automated microscopy was applied in basic research to address areas of high technical difficulty: where rigorous measurements of subtle cellular events (such as textural changes) were needed, or where events took place over long time periods or were rare (which made it challenging to acquire sufficient numbers of images of each event). In medicine, automated imaging was used to standardize the interpretation of assay results, such as the diagnosis of disease from histological samples (where it was notoriously difficult to achieve concordance among clinical pathologists). The adaptation of quantitative imaging assays to a screening context was first described by Lansing Taylor and colleagues [3], who commercialized an automated microscope capable of screening samples in multiwell plates (a format that had emerged as an industry standard during this period). The term "high content" was coined to contrast the low throughput of these imaging assays with the increasing scale of high throughput primary drug discovery screens. Many groups have since demonstrated the usefulness of automated microscopy in drug discovery [4, 5] and basic research [6, 7]. During this phase (the early 2000s), data acquisition, image analysis, and data management still imposed limits on image-based screening, but it found an important place in the pharmaceutical industry, where expensive, labor-intensive assays critical for late-stage drug development were a bottleneck. One example is the micronucleus assay, which measures the genotoxicity of novel therapeutics by counting micronuclei (small nonnuclear chromosomal fragments that result from dysregulation of mitosis). An increase in the number of cells that contain micronuclei is indicative of genotoxicity, so this assay is frequently part of a screening program used to make a go/no-go decision on clinical development [8].
The assay requires finding binucleate cells and checking for a nearby micronucleus. For each compound assayed, a single technician might spend many hours in front of a microscope searching and counting nuclei. Automation of image capture and analysis not only reduced the work burden of researchers, but it also made the analysis itself more robust [9]. Similar applications were found in the field of cell biology, where automated microscopy was utilized to collect and analyze large data sets [10, 11].
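The scoring logic described above can be sketched in a few lines. This is a hypothetical illustration, not the chapter's actual analysis pipeline: the per-cell record schema (`"nuclei"`, `"micronuclei"` counts, as produced by some upstream segmentation step) is an assumption made for the example.

```python
# Hypothetical sketch of micronucleus-assay scoring: find binucleate
# cells and report the fraction that carry at least one micronucleus.
# The dict schema for per-cell counts is an illustrative assumption.

def micronucleus_frequency(cells):
    """Fraction of binucleate cells with at least one micronucleus."""
    binucleate = [c for c in cells if c["nuclei"] == 2]
    if not binucleate:
        return 0.0
    with_mn = sum(1 for c in binucleate if c["micronuclei"] > 0)
    return with_mn / len(binucleate)

# Toy field: four binucleate cells, one of which has a micronucleus;
# the mononucleate cell is excluded from the denominator.
example = [
    {"nuclei": 2, "micronuclei": 0},
    {"nuclei": 2, "micronuclei": 1},
    {"nuclei": 1, "micronuclei": 0},
    {"nuclei": 2, "micronuclei": 0},
    {"nuclei": 2, "micronuclei": 0},
]
print(micronucleus_frequency(example))  # 0.25
```

Automating exactly this tally per well is what replaced the hours a technician would otherwise spend at the eyepiece.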
Following from these early implementations, high content screening (HCS) has been widely adopted across many fields as the technology has improved and more instruments have become commercially available. The speed at which images can be analyzed is limited by computing power; as more advanced computer technology has been developed, the scale at which samples can be analyzed has improved. Faster computers also mean that more measurements can be made per cell: the shapes of cells and subcellular structures can be analyzed, as well as probe intensities within regions of interest. This has led to the quantification of subtle morphological changes as assay endpoints. A widely used application of this approach has been receptor internalization assays, such as the Transfluor™ assay, which measures the activation of GPCRs through changes in the pattern of receptor staining: from even staining over the surface of the cell to dense puncta following internalization of the activated receptors through vesicle formation [12]. Concomitant with the increase in the sophistication of the assays themselves, improvements in the mechanical process of screening samples have also fed the growth of HCS. Gross-level changes, such as integrating plate-handling robotics, and fine-level changes, such as improvements in sample detection and autofocusing, have improved the scale of HCS to the point where image-based readouts are possible for true high throughput screens (screens of greater than 100,000 compounds) [5].
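A puncta-based readout of the kind described above reduces, at its core, to counting connected bright spots in a thresholded image. The following is a minimal sketch under stated assumptions (a toy intensity grid and an arbitrary threshold); it is not the commercial Transfluor™ algorithm, just an illustration of the principle that internalization shifts staining from diffuse to punctate.

```python
# Minimal sketch of a puncta count: threshold a grayscale field and
# count 4-connected components of bright pixels via flood fill.
# Threshold and toy image are illustrative assumptions.
from collections import deque

def count_puncta(image, threshold):
    """Count 4-connected groups of pixels brighter than `threshold`."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    puncta = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                puncta += 1              # new spot found; flood-fill it
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return puncta

# Toy 5x5 field with two bright spots; an evenly stained cell would
# give few components, internalized receptors give many.
field = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 0],
    [0, 0, 0, 0, 8],
    [0, 0, 0, 8, 8],
    [0, 0, 0, 0, 0],
]
print(count_puncta(field, threshold=5))  # 2
```

In practice this counting is done per cell within segmented cell boundaries, so the assay endpoint becomes a per-cell puncta distribution rather than a single image-level number.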
HCS has a strong presence in basic biological studies as well. The most widely recognized applications are similar to screening for drug candidates, including siRNA screening to identify genes that control a biological process, and chemical genetics, the identification of small molecules that perturb a specific cellular protein or process. While operationally similar to drug screening, they seek to explain and study biological questions rather than lead to therapeutics explicitly. Additional uses of HCS in basic science include the study of model organisms. Finally, the use of multiparametric single cell measurements has extended our understanding of pathway signaling in novel ways [11].
At this point we want to touch on the fundamental skill sets required to successfully set up and use an HCS system to address a biological problem, and how responsibilities might be divided in different settings. The six major skill sets required to develop and run an HCS project are shown in Figure 1.2. Each area is distinct enough to be a full-fledged area of expertise (hence we introduce these areas as "skill sets"), but typically a person is competent in more than one area. It is rare that all roles can be successfully filled by one person; therefore, the ability to develop a collaborative team is essential to HCS. It is also important to understand that these roles vary between groups, and this can cause problems when people move between groups or as groups change in size. The skill sets are the following.
Figure 1.2 The basic skill sets essential for establishing and running HCS experiments. Skills noted in the figure are discussed in detail in the text.
The biologist develops the question that needs to be answered experimentally. In academia, the biologist is typically a cell biologist and is oftentimes also capable of collecting images by HCS. In industrial settings (pharma and biotech), a therapeutic team may be led by a biochemist or in vivo pharmacologist, who may have little training in fluorescence microscopy. The key area of expertise here is an appreciation of these problems and the ability to formulate strategies (experimental systems and assays) to address them. This includes a significant understanding of how cellular models in the laboratory relate to the biology in vivo: in addition to understanding the fundamental biological question, it is important to know how to establish a cellular model that incorporates the relevant aspects of the biological environment.
Although many HCS systems are sold as turnkey "black boxes," it is important to have a good understanding of the fundamental microscopy components (staining techniques, reagents, and optics), as each has a significant impact on the quality of data generated by the instruments. For example, the choice of illumination system and filter sets determines which fluorescence wavelengths (fluorophores) you can use to stain specific cellular compartments. Other microscope objective characteristics (numerical...