Do you want to gain a deeper understanding of how big tech analyses and exploits our text data, or investigate how political parties differ by analysing textual styles, associations and trends in documents? Or create a map of a text collection and write a simple QA system yourself?
This book explores how to apply state-of-the-art text analytics methods to detect and visualise phenomena in text data. Solidly grounded in methods from corpus linguistics, natural language processing, text analytics and digital humanities, it shows readers how to conduct experiments with their own corpora and research questions, underpin their theories, quantify differences and pinpoint characteristics. Case studies and experiments in every chapter use real-world and open-access corpora from politics, World English, history and literature. The results are interpreted and put into perspective, pitfalls are pointed out, and the necessary pre-processing steps are demonstrated. The book also demonstrates how to use the programming language R, as well as simple alternatives and additions to it, to conduct experiments and create visualisations by example, with extensible R code, recipes, links to corpora and a wide range of methods. The methods introduced can be used across texts of all disciplines, from history and literature to party manifestos and patient reports.
Publishing group: Bloomsbury Publishing PLC
Product note: paperback, perfect (adhesive) binding
Dimensions: height 234 mm, width 156 mm, thickness 25 mm
ISBN-13: 978-1-350-37086-9 (9781350370869)
Author
Gerold Schneider is Adjunct Professor at the Department of Computational Linguistics of the University of Zurich, Switzerland.
List of Figures
List of Tables
Acknowledgements
1. Introduction
2. Spikes of Frequencies and First Steps in UNIX
3. Frequency Lists and First Steps in R
4. Overuse and Keywords and Using R Libraries
5. Document Classification and Supervised ML in LightSide and R
6. Topic Modelling and Unsupervised ML with Mallet and R
7. Kernel Density Estimation for Conceptual Maps
8. Distributional Semantics
9. BERT Models
10. Conclusions
References
Index