
Transformer, BERT, and GPT
Description
- Provides comprehensive coverage of the Transformer architecture, BERT models, and the GPT series, including GPT-3 and GPT-4.
- Features companion files with numerous code samples and figures from the book.
Contents
- Front Cover
- Half-Title Page
- LICENSE, DISCLAIMER OF LIABILITY, AND LIMITED WARRANTY
- Title Page
- Copyright Page
- Dedication
- Contents
- Preface
- Chapter 1 Introduction
- What is Generative AI?
- Conversational AI Versus Generative AI
- Is DALL-E Part of Generative AI?
- Are ChatGPT-3 and GPT-4 Part of Generative AI?
- DeepMind
- OpenAI
- Cohere
- Hugging Face
- AI21
- InflectionAI
- Anthropic
- What are LLMs?
- What is AI Drift?
- Machine Learning and Drift (Optional)
- What is Attention?
- Calculating Attention: A High-Level View
- An Example of Self Attention
- Multi-Head Attention (MHA)
- Summary
- Chapter 2 Tokenization
- What is Pre-Tokenization?
- What is Tokenization?
- Word, Character, and Subword Tokenizers
- Trade-Offs with Character-Based Tokenizers
- Subword Tokenization
- Subword Tokenization Algorithms
- Hugging Face Tokenizers and Models
- Hugging Face Tokenizers
- Tokenization for the DistilBERT Model
- Token Selection Techniques in LLMs
- Summary
- Chapter 3 Transformer Architecture Introduction
- Sequence-to-Sequence Models
- Examples of seq2seq Models
- What About RNNs and LSTMs?
- Encoder/Decoder Models
- Examples of Encoder/Decoder Models
- Autoregressive Models
- Autoencoding Models
- The Transformer Architecture: Introduction
- The Transformer is an Encoder/Decoder Model
- The Transformer Flow and Its Variants
- The transformers Library from Hugging Face
- Transformer Architecture Complexity
- Hugging Face Transformer Code Samples
- Transformer and Mask-Related Tasks
- Summary
- Chapter 4 Transformer Architecture in Greater Depth
- An Overview of the Encoder
- What are Positional Encodings?
- Other Details Regarding Encoders
- An Overview of the Decoder
- Encoder, Decoder, or Both: How to Decide?
- Delving Deeper into the Transformer Architecture
- Autoencoding Transformers
- The "Auto" Classes
- Improved Architectures
- Hugging Face Pipelines and How They Work
- Hugging Face Datasets
- Transformers and Sentiment Analysis
- Source Code for Transformer-Based Models
- Summary
- Chapter 5 The BERT Family Introduction
- What is Prompt Engineering?
- Aspects of LLM Development
- Kaplan and Under-Trained Models
- What is BERT?
- BERT and NLP Tasks
- BERT and the Transformer Architecture
- BERT and Text Processing
- BERT and Data Cleaning Tasks
- Three BERT Embedding Layers
- Creating a BERT Model
- Training and Saving a BERT Model
- The Inner Workings of BERT
- Summary
- Chapter 6 The BERT Family in Greater Depth
- A Code Sample for Special BERT Tokens
- BERT-Based Tokenizers
- Sentiment Analysis with DistilBERT
- BERT Encoding: Sequence of Steps
- Sentence Similarity in BERT
- Generating BERT Tokens (1)
- Generating BERT Tokens (2)
- The BERT Family
- Working with RoBERTa
- Italian and Japanese Language Translation
- Multilingual Language Models
- Translation for 1,000 Languages
- M-BERT
- Comparing BERT-Based Models
- Web-Based Tools for BERT
- Topic Modeling with BERT
- What is T5?
- Working with PaLM
- Summary
- Chapter 7 Working with GPT-3 Introduction
- The GPT Family: An Introduction
- GPT-2 and Text Generation
- What is GPT-3?
- GPT-3 Models
- What is the Goal of GPT-3?
- What Can GPT-3 Do?
- Limitations of GPT-3
- GPT-3 Task Performance
- How GPT-3 and BERT are Different
- The GPT-3 Playground
- Inference Parameters
- Overview of Prompt Engineering
- Details of Prompt Engineering
- Few-Shot Learning and Fine-Tuning LLMs
- Summary
- Chapter 8 Working with GPT-3 in Greater Depth
- Fine-Tuning and Reinforcement Learning (Optional)
- GPT-3 and Prompt Samples
- Working with Python and OpenAI APIs
- Text Completion in OpenAI
- The Completion() API in OpenAI
- Text Completion and Temperature
- Text Classification with GPT-3
- Sentiment Analysis with GPT-3
- GPT-3 Applications
- Open-Source Variants of GPT-3
- Miscellaneous Topics
- Summary
- Chapter 9 ChatGPT and GPT-4
- What is ChatGPT?
- Plugins, Code Interpreter, and Code Whisperer
- Detecting Generated Text
- Concerns about ChatGPT
- Sample Queries and Responses from ChatGPT
- ChatGPT and Medical Diagnosis
- Alternatives to ChatGPT
- Machine Learning and ChatGPT: Advanced Data Analytics
- What is InstructGPT?
- VizGPT and Data Visualization
- What is GPT-4?
- ChatGPT and GPT-4 Competitors
- LlaMa-2
- When Will GPT-5 Be Available?
- Summary
- Chapter 10 Visualization with Generative AI
- Generative AI and Art and Copyrights
- Generative AI and GANs
- What is Diffusion?
- CLIP (OpenAI)
- GLIDE (OpenAI)
- Text-to-Image Generation
- Text-to-Image Models
- The DALL-E Models
- DALL-E 2
- DALL-E Demos
- Text-to-Video Generation
- Text-to-Speech Generation
- Summary
- Index
System Requirements
File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)
- Computer (Windows; MacOS X; Linux): Install the free Adobe Digital Editions software before downloading (see the e-book help).
- Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app or the PocketBook app before downloading (see the e-book help).
- E-book reader: Bookeen, Kobo, PocketBook, Sony, Tolino, and many more (not Kindle).
The PDF format displays each book page identically on any hardware, which makes PDFs well suited to complex layouts such as those used in textbooks and technical books (images, tables, columns, footnotes). On the small displays of e-readers or smartphones, however, PDFs can be cumbersome because they require a lot of scrolling.
Adobe DRM is a "hard" form of copy protection. If the necessary requirements are not met, you will not be able to open the e-book, so you must prepare your reading hardware before downloading.
Please note: after installing the reading software, we strongly recommend authorizing it with your personal Adobe ID.
Further information can be found in our e-book help.