Laura Suter-Dick and Friedlieb Pfannkuch
Computational tools are used in many life sciences research areas, including toxicity prediction. They take advantage of complex mathematical models to predict the effects caused by a given compound on an organism. Due to the complexity of the possible interactions between a treatment and a patient and the diversity of possible outcomes, models are applied to well-defined and specific fields, such as DNA damaging potential, estimation of the necessary dose to elicit an effect in a patient, or identification of relevant gene expression changes.
In silico tools make use of information regarding chemical structures and the immense data legacy that allows inferring interactions between chemical structures, physicochemical properties, and biological processes. These methods are farthest away from traditional animal studies, since they rely on existing databases rather than on generating experimental animal data.
Due to the complexity of this task, only a fairly small number of endpoints can be predicted with acceptable accuracy by commonly employed in silico tools such as DEREK, VITIC, and M-Case. In order to improve the current models and to expand to additional prediction algorithms, further validation and extension of the underlying databases are ongoing.
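At their core, such knowledge-based tools automate the matching of chemical structures against curated structural alerts. The sketch below illustrates the principle using the open-source RDKit toolkit; the two SMARTS alerts are invented for illustration and do not reproduce the rule base of DEREK, VITIC, or M-Case.

# Minimal structural-alert screen (illustrative alerts only, not a vendor rule base).
from rdkit import Chem

ALERTS = {
    "aromatic nitro group": "c[N+](=O)[O-]",
    "primary aromatic amine": "c[NH2]",
}

def screen(smiles):
    """Return the names of the structural alerts matched by a molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("could not parse SMILES: " + smiles)
    return [name for name, smarts in ALERTS.items()
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

print(screen("O=[N+]([O-])c1ccccc1N"))  # 2-nitroaniline: both alerts fire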
Similarly, modeling and simulation (M&S) provides mathematical models that can simulate, and therefore predict, how a compound will behave in humans before clinical data become available. In the field of nonclinical safety, complex models allow for a prediction of the effect of an organism on a compound (pharmacokinetic models) as well as, to some extent, pharmacodynamic extrapolations, based on data generated in animal models and in in vitro human systems.
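To make the notion of a pharmacokinetic model concrete, the following sketch implements the standard one-compartment model with first-order oral absorption and elimination; all parameter values are invented placeholders, not data from any particular compound or study.

import numpy as np

def one_compartment_oral(t, dose_mg, F, ka, ke, V_L):
    """Plasma concentration (mg/L) under first-order absorption (ka, 1/h)
    and elimination (ke, 1/h), for a dose with bioavailability F and
    volume of distribution V_L (L)."""
    return (F * dose_mg * ka) / (V_L * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Illustrative parameters: 100 mg oral dose, F = 0.8, ka = 1.4/h, ke = 0.1/h, V = 40 L.
t = np.linspace(0, 24, 49)                      # hours after dosing
c = one_compartment_oral(t, 100, 0.8, 1.4, 0.1, 40)
print("Cmax = %.2f mg/L at t = %.1f h" % (c.max(), t[c.argmax()]))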
In addition to the in silico and modeling tools described above, the dramatically increasing amount of toxicologically relevant data needs to be appropriately collected and monitored. All of these "new" technologies produce very high volumes of data, so bioinformatics tools that can gather data from diverse sources and mine them to detect relevant patterns of change are vital. For this purpose, large databases are necessary, along with bioinformatics tools that can handle diverse data types, multivariate analysis, and supervised and unsupervised discrimination algorithms. These tools combine advanced statistics with the large data sets stored in databases generated using technologies such as omics or high-content imaging.
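As a minimal sketch of how such multivariate, unsupervised mining can separate treatment-related patterns from noise, the example below runs a principal component analysis on a synthetic expression-like matrix; the data, group sizes, and affected features are all invented, and scikit-learn is assumed to be available.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an omics matrix: 20 samples x 500 features; the last
# 10 ("treated") samples carry a shift in the first 50 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 500))
X[10:, :50] += 2.0

# Unsupervised dimensionality reduction: treated and control samples should
# separate along the first principal component.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for i, (pc1, pc2) in enumerate(scores):
    group = "treated" if i >= 10 else "control"
    print("sample %2d (%s): PC1 = %6.2f, PC2 = %6.2f" % (i, group, pc1, pc2))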
The omics technologies arose with the advent of advanced molecular biology techniques able to determine changes in the whole transcriptome, proteome, or metabolome. These powerful techniques were considered the ultimate holistic approach to tackle many biological questions, among them toxicological assessment. Several companies have invested in these areas of toxicological research.
Toxicogenomics is the most widespread of the omics technologies. Predictive approaches are based on databases of reference compounds (toxic/nontoxic) generated by (pharmaceutical) companies as well as by commercial vendors in the 1990s. All share the same focus of investigation: target organ toxicity in the liver and the kidney.
In addition, gene expression data are often the basis for mechanistic understanding of biological processes in several fields, including toxicology, pharmacology, and disease phenotype. Thus, transcriptomic data can be used as a merely predictive tool, as a mechanistic tool, or as a combination of both.
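A hedged sketch of the purely predictive use is given below: a classifier is trained on a reference set of compounds labeled toxic or nontoxic, with gene expression profiles as features. The data are synthetic placeholders, and the random forest is simply one convenient choice of supervised algorithm, not the method of any particular database or vendor.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic placeholder for a toxicogenomics reference database:
# 60 compounds x 200 expression features, labeled nontoxic (0) or toxic (1).
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))
y = np.array([0] * 30 + [1] * 30)
X[y == 1, :20] += 1.5            # toxic compounds deregulate a small gene subset

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("5-fold cross-validated accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))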
Subsequent to global gene expression analysis, assays can then be developed for a relatively small subset of genes relevant to specific toxicities and toxic mechanisms. Such assays can be used for screening and problem solving in toxicology studies and may also play a role in efficacy studies in preclinical and clinical research.
In addition, the modulation of gene expression by toxicants has also been used as a method for the discovery of putative biomarkers. In the kidney in particular, gene expression changes detected after renal injury led to assays for urinary proteins that can be measured noninvasively.
In the last few years, the ease with which full genome sequencing and mRNA sequencing can be performed has led to new ways to analyze samples and tissues. Sequence information from different species is also more readily available and allows for better interpretation of genomic and proteomic data, in particular when extrapolating to humans. The usefulness of this approach to predict drug effects in humans, and the latest development in the field, next-generation sequencing (NGS), will be discussed in more detail in this book.
Also, the role of DNA methylation, and of a variety of small noncoding RNA molecules, such as miRNAs, is gaining importance in toxicological assessment and biomarker discovery.
Proteomics is probably the oldest of the omics technologies and arguably the most relevant from a biological point of view, since protein expression and protein posttranslational modifications are the executors of cellular processes. From a knowledge point of view, there are large, publicly available databases with protein sequences that can be used for the identification of proteins.
However, proteomics is also probably the most technologically challenging of the three omics, mainly due to the large diversity in proteins, in particular in terms of abundance and physicochemical properties. This poses a massive challenge since it requires a technology with a dynamic range of several orders of magnitude as well as separation methods able to deal with extreme differences in lipophilicity.
Protein expression changes (or modifications such as phosphorylation) that can be detected are valuable pieces of information and can be used to understand biological pathways and to discover new biomarkers.
As with toxicogenomic databases, metabolomic databases were generated by pharmaceutical companies, academia, and commercial providers. Metabolomic data were mainly generated using NMR and chromatography coupled with mass spectrometry. The most commonly used fluids are urine and plasma, although tissue or cellular extracts can also be analyzed.
The main advantage of metabolomics is that the sampling of body fluids can be performed noninvasively. Thus, there were high expectations of using metabolomics for the discovery of new translational biomarkers. However, delivery has been slower than anticipated and much of the research to date has been descriptive rather than predictive [1].
An increasing number of molecular biology technologies related to toxicology research are available. In addition to genes, proteins, and metabolites, we now have the means to analyze many other factors that regulate and/or influence the expression of genes and proteins, as well as the secretion of metabolites. For example, it has been recognized that we should pay closer attention to DNA methylation status and miRNA expression, factors that profoundly regulate gene and protein expression, respectively.
Regarding the interpretation of the data in the context of safety assessment, it is still poorly understood which changes reflect normal adaptive variation in physiology or the intended pharmacology, so toxicity-related changes can be difficult to untangle.
The concept of the "adverse outcome pathway" reflects the current interest in identifying changes that will lead to a clinically relevant effect. To this end, it is also becoming apparent that the greatest benefit can be obtained by integrating these newer technologies with information from conventional toxicology and pathology.
The integration of highly sensitive molecular biology technologies with the well-established pathology assessment also provides a means to identify putative novel biomarkers. These markers not only may indicate toxicity but can also be used as markers for specific disease conditions.
Ideal biomarkers would allow monitoring onset, progression, and reversibility of adverse events and be translational, enabling their use in both preclinical and clinical settings [2-4].
For safety assessment, it is a major endeavor of toxicology research to identify sensitive and specific biomarkers, ideally translational across species and prodromal, that is, able to predict toxicities that may arise only after prolonged exposure. As indicated above, although the omics technologies can identify many candidate markers, the amount of work and time required to validate these putative biomarkers remains a major challenge.
In vitro toxicology is not a new concept and the advantages of cell cultures are manifold, including the possibility of screening at low cost and the reduction in animal experimentation, supporting the 3Rs concept (refine, reduce, and replace animal experiments).
However, in the same way that the determination of an LD50 in animals is not really...