Integrate large language models into your enterprise applications with advanced strategies that drive transformation
Key Features
Explore design patterns for applying LLMs to solve real-world enterprise problems
Learn strategies for scaling and deploying LLMs in complex environments
Get more relevant results and improve performance by fine-tuning and optimizing LLMs
Purchase of the print or Kindle book includes a free PDF eBook
Book Description
The integration of large language models (LLMs) into enterprise applications is transforming how businesses use AI to drive smarter decisions and more efficient operations. LLMs in Enterprise is your practical guide to bringing these capabilities into real-world business contexts. It demystifies the complexities of LLM deployment and provides a structured approach for enhancing decision-making and operational efficiency with AI.
Starting with an introduction to the foundational concepts, the book swiftly moves on to hands-on applications focusing on real-world challenges and solutions. You'll master data strategies and explore design patterns that streamline the optimization and deployment of LLMs in enterprise environments. From fine-tuning techniques to advanced inferencing patterns, the book equips you with a toolkit for solving complex challenges and driving AI-led innovation in business processes.
By the end of this book, you'll have a solid grasp of key LLM design patterns and how to apply them to enhance the performance and scalability of your generative AI solutions.
What you will learn
Apply design patterns to integrate LLMs into enterprise applications for efficiency and scalability
Overcome common challenges in scaling and deploying LLMs
Use fine-tuning techniques and RAG approaches to enhance LLM efficiency
Stay ahead of the curve with insights into emerging trends and advancements, including multimodality
Optimize LLM performance through customized contextual models, advanced inferencing engines, and evaluation patterns
Ensure fairness, transparency, and accountability in AI applications
Who this book is for
This book is designed for a diverse group of professionals looking to understand and implement advanced design patterns for LLMs in their enterprise applications, including:
AI and ML researchers exploring practical applications of LLMs
Data scientists and ML engineers designing and implementing large-scale GenAI solutions
Enterprise architects and technical leaders who oversee the integration of AI technologies into business processes
Software developers creating scalable GenAI-powered applications
Dimensions
Height: 235 mm
Width: 191 mm
ISBN-13
978-1-83620-307-0 (9781836203070)
About the Authors
Ahmed Menshawy is the Vice President of AI Engineering at Mastercard. He leads the AI Engineering team, driving the development and operationalization of AI products and addressing a broad range of challenges and technical debt in ML pipeline deployment. He also leads a team dedicated to creating several AI accelerators and capabilities, including serving engines and feature stores, aimed at enhancing various aspects of AI engineering.
Mahmoud Fahmy is a Lead Machine Learning Engineer at Mastercard, specializing in the development and operationalization of AI products. His primary focus is on optimizing machine learning pipelines and navigating the intricate challenges of deploying models effectively for end customers.
Table of Contents
Introduction to Large Language Models
LLMs in Enterprise: Applications, Challenges, and Design Patterns
Advanced Fine-Tuning Techniques and Strategies for Large Language Models
Retrieval-Augmented Generation Pattern
Customizing Contextual LLMs
The Art of Prompt Engineering for Enterprise LLMs
Enterprise Challenges in Evaluating LLM Applications
The Data Blueprint: Crafting Effective Strategies for LLM Development
Managing Model Deployments in Production
Accelerated and Optimized Inferencing Patterns
Connected LLMs Pattern
Monitoring LLMs in Production
Responsible AI in LLMs
Emerging Trends and Multimodality