Schweitzer Fachinformationen
When it comes to professional knowledge, Schweitzer Fachinformationen leads the way. Customers from the legal and consulting sectors, as well as companies, public administrations, and libraries, receive complete solutions for procuring, managing, and using digital and printed media.
This book introduces readers to the field of explainable artificial intelligence (XAI), which aims to make AI models more transparent and trustworthy. It explores how XAI can enhance trust and confidence in AI models and their decisions across various innovative applications in fields such as healthcare, finance, and engineering, where AI can significantly impact quality of life. Readers will discover emerging trends related to XAI, such as large language models, generative AI, and natural language processing, that are transforming the landscape of AI research and applications. Featuring an interdisciplinary overview, the book examines the state of the art, challenges, and opportunities in XAI, accompanied by clear examples and detailed explanations of its methods and techniques. The book also offers a balanced perspective on the limitations and trade-offs of XAI and outlines future directions and opportunities for both research and practice. This book is intended for anyone who wants to learn more about XAI and understand how it can enhance trust in AI models.
Nicu Bizon (IEEE M'06; SM'16) was born in Albesti de Muscel, Arges County, Romania, in 1961. He received the B.S. degree in electronic engineering from the University "Polytechnic" of Bucharest, Romania, in 1986, and the Ph.D. degree in Automatic Systems and Control from the same university in 1996. From 1989 to 1996, he worked in hardware design with Dacia Renault SA, Romania. He is currently a professor with the National University of Science and Technology POLITEHNICA Bucharest, Pitești University Centre, Romania. He received two awards from the Romanian Academy, in 2013 and 2016. He has edited 17 books and published more than 600 papers in scientific fields related to energy. His current research interests include power electronic converters, fuel cells and electric vehicles, renewable energy, energy storage systems, microgrids, and the control and optimization of these systems.
Bhargav Appasani received the Ph.D. (Engg.) degree from the Birla Institute of Technology, Mesra, India. He is currently an Associate Professor at the School of Electronics Engineering, KIIT University, Bhubaneswar, India. He has published more than 170 articles in international journals and conference proceedings and has also contributed six book chapters with Springer and Elsevier. Additionally, he has authored four books (two with Springer, one with CRC Press, and one with Nova Science Publishers) and is currently editing two more, one with CRC Press and the other with Elsevier. He also has four patents filed to his credit. He serves as an academic editor for the Journal of Electrical and Computer Engineering (Wiley), Applied Computational Intelligence and Soft Computing (Wiley), and Scientific Reports (Springer).
Explainable Artificial Intelligence and Trust in Smart Applications: Definition, Evolution and Challenges
Explainable AI Models and Algorithms: Interpretability and Trade-offs
Large Language Models and XAI: Use-cases, Dependency and Challenges
Interpretable and Trustworthy XAI Models for Healthcare
Medical Diagnosis System based on Explainable AI and Blockchain
Fairness, Explainability, and Regulation for AI in Finance: Challenges and Prospects
Explainable AI based Decision Making for Safe Autonomous Vehicles
Explainable Artificial Intelligence for Efficient Energy Management System in Smart Grid 3.0
Explainable AI for Future Smart Cities: Architectures, Applications and Challenges
Explainable AI for Entertainment, Education, and Environment: Case Studies and Future Trends
Ethical Considerations in Using Explainable AI for Smart Applications: Model Bias, Moral and Regulatory Considerations
An Explainable AI Framework for Country Development Analysis and Prediction using Fuzzy Logic and Deep Learning Neural Networks
Explainable AI-based Cyber-risk Management Framework to Combat Cyber Attacks
Explainable AI for Decision-Making Processes in Active Distribution Grids
Explainable AI for Trustworthy Decisions in Self-Sustainable Community of Electricity Prosumers
Explainability and Decision Making with Generative AI in Smart Applications
XAI-based Breast Cancer Detection from Ultrasound Images
File format: PDF
Copy protection: watermark DRM (Digital Rights Management)
System requirements:
The PDF file format displays each book page identically on any hardware. PDFs are therefore well suited to complex layouts, such as those used in textbooks and reference works (images, tables, columns, footnotes). On the small displays of e-readers or smartphones, however, PDFs can be cumbersome because they require a lot of scrolling. Watermark DRM is a "soft" form of copy protection: technically everything remains possible, including unauthorized sharing, but the buyer of the e-book is embedded as a watermark in visible and invisible places, so that in the event of misuse the copy can be traced back.
Further information can be found in our e-book help.