Up-to-date reference enabling readers to address the full spectrum of AI security challenges while maintaining model utility
Generative AI Security: Defense, Threats, and Vulnerabilities delivers a technical framework for securing generative AI systems, building on established standards while focusing specifically on emerging threats to large language models and other generative models. Rather than treating generative AI as a single dual-use technology, this book provides detailed technical analysis of three critical dimensions: implementing AI-powered security tools, defending against AI-enhanced attacks, and protecting AI systems from compromise through attacks such as prompt injection, model poisoning, and data extraction.
The book pairs concrete technical implementations with real-world case studies of actual AI system compromises, examining documented incidents such as the DeepSeek breaches, Llama vulnerabilities, and Google's CaMeL defenses to demonstrate both attack methodologies and defense strategies. Throughout, it emphasizes foundational security principles that remain relevant despite technological shifts, and each chapter progresses from theoretical foundations to practical applications.
The book also includes an implementation guide and hands-on exercises focusing on specific vulnerabilities in generative AI architectures, security control implementation, and compliance frameworks.
Generative AI Security: Defense, Threats, and Vulnerabilities discusses topics including:
Generative AI Security: Defense, Threats, and Vulnerabilities is an essential resource for cybersecurity professionals, architects, engineers, IT professionals, and organizational leaders seeking integrated strategies that address the full spectrum of generative AI security challenges while maintaining model utility.
Shaila Rana, PhD, is a professor of cybersecurity, a co-founder of the ACT Research Institute (a cybersecurity, AI, and technology think tank), and Chair of the IEEE Standards Association initiative on Zero Trust Cybersecurity for Health Technology, Tools, Services, and Devices.
Rhonda Chicone, PhD, is a retired professor and the co-founder of the ACT Research Institute. A former CSO, CTO, and Director of Software Development, she brings decades of experience in software product development and cybersecurity.
Chapter 1:
Abstract
1.1 What Is Generative AI?
1.2 The Evolution of AI in Cybersecurity
1.3 Overview of GAI in Security
1.4 Current Landscape of Generative AI Applications
1.5 A Triangular Approach
Chapter 1 Summary
Hypothetical Case Study: The Triangular Approach to AI Security
References
Chapter 2: Understanding Generative AI Technologies
2.1 ML Fundamentals
2.2 Deep Learning and Neural Networks
2.3 Generative Models
2.4 NLP in Generative AI
2.5 Computer Vision in Generative AI
Conclusion
Chapter 2 Summary
Case Study
Chapter 3: Generative AI as a Security Tool
3.1 AI-Powered Threat Detection and Response
3.2 Automated Vulnerability Discovery and Patching
3.3 Intelligent SIEMs
3.4 AI in Malware Analysis and Classification
3.5 Generative AI in Red Teaming
3.6 J-Curve for Productivity in AI-Driven Security
3.7 Regulatory Technology (RegTech)
3.8 AI for Emotional Intelligence (EQ) in Cybersecurity
Chapter 3 Summary
Case Study: GAI as a Tool
Chapter 4: Weaponized Generative AI
4.1 Deepfakes and Synthetic Media
4.2 AI-Powered Social Engineering
4.3 Automated Hacking and Exploit Generation
4.4 Privacy Concerns
4.5 Weaponization of AI: Attack Vectors
4.6 Defensive Strategies Against Weaponized Generative AI
Chapter 4 Summary
Case Study 1: Weaponized AI in a Small-Sized Organization
Case Study 2: Weaponized AI in a Large Organization
Chapter 5: Generative AI Systems as a Target of Cyber Threats
5.1 Security Attacks on Generative AI
5.2 Privacy Attacks on Generative AI
5.3 Attacks on Availability
5.4 Physical Vulnerabilities
5.5 Model Extraction and Intellectual Property Risks
5.6 Model Poisoning and Supply Chain Risks
5.7 Open-Source GAI Models
5.8 Application-specific Risks
5.9 Challenges in Mitigating Generative AI Risks
Chapter 5 Summary
Case Study 1: Small Organization - Securing Customer Chatbot Systems
Case Study 2: Medium-Sized Organization - Defending Against Model Extraction
Case Study 3: Large Organization - Addressing Data Poisoning in AI Training Pipelines
Chapter 6: Defending Against Generative AI Threats
6.1 Deepfake Detection Techniques
6.2 Adversarial Training and Robustness
6.3 Secure AI Development Practices
6.4 AI Model Security and Protection
6.5 Privacy-Preserving AI Techniques
6.6 Proactive Threat Intelligence and AI Incident Response
6.7 MLSecOps/SecMLOps for Secure AI Development
Chapter 6 Summary
Case Study: Comprehensive Defense Against Generative AI Threats in a Multinational Organization
Chapter 7: Ethical and Regulatory Considerations
7.1 Ethical Challenges in AI Security
7.2 AI Governance Frameworks
7.3 Current and Emerging AI Regulations
7.4 Responsible AI Development and Deployment
7.5 Balancing Innovation and Security
Chapter 7 Summary
Case Study: Navigating Ethical and Regulatory Challenges in AI Security for a Financial Institution
Chapter 8: Future Trends in Generative AI Security
8.1 Quantum Computing and AI Security
8.2 Human Collaboration in Cybersecurity
8.3 Advancements in XAI
8.4 The Role of Generative AI in Zero Trust
8.5 Micromodels
8.6 AI and Blockchain
8.7 Artificial General Intelligence (AGI)
8.8 Digital Twins
8.9 Agentic AI
8.10 Multimodal Models
8.11 Robotics
Chapter 8 Summary
Case Study: Applying the Triangular Framework to Generative AI Security
Chapter 9: Implementing Generative AI Security in Organizations
9.1 Assessing Organizational Readiness
9.2 Developing an AI Security Strategy
9.3 Shadow AI
9.4 Building and Training AI Security Teams
9.5 Policy Recommendations for AI and Generative AI Implementation: A Triangular Approach
Chapter 9 Summary
Case Study: Implementing Generative AI Security in Organizations - A Triangular Path Forward
Chapter 10: Future Outlook on AI and Cybersecurity
10.1 The Evolving Role of Security Professionals
10.2 AI-Driven Incident Response and Recovery
10.3 GAI Security Triad Framework (GSTF)
10.4 Preparing for Future Challenges
10.5 Responsible AI Security
Chapter 10 Summary
Practice Quiz: AI Security Triangular Framework
Index
File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)
System requirements:
The PDF format displays a book page identically on any hardware, which makes PDFs well suited to complex layouts such as those used in textbooks and reference works (images, tables, columns, footnotes). On the small displays of e-readers or smartphones, however, PDFs can be cumbersome because they require a lot of scrolling. Adobe DRM applies a "hard" form of copy protection; if the necessary prerequisites are not met, you will not be able to open the e-book, so please prepare your reading hardware before downloading.
Please note: after installing the reading software, we strongly recommend authorizing it with your personal Adobe ID.
Further information can be found in our e-book help section.