The book takes a practical approach, explaining the concepts behind machine learning and deep learning algorithms, evaluating methodological advances, and demonstrating the algorithms through applications.
Over the past two decades, machine learning and its subfield deep learning have played a central role in software application development, and recent research studies regard them as disruptive technologies that will transform our future lives, business, and the global economy. The recent explosion of digital data across a wide variety of domains, including science, engineering, the Internet of Things, biomedicine, healthcare, and many business sectors, has ushered in the era of big data, which cannot be analysed with classical statistics alone but calls for more robust machine learning and deep learning techniques. Since machine learning learns from data rather than from hard-coded decision rules, it is being used in an attempt to build computers that can solve problems the way human experts in a field do.
The goal of this book is to present a practical approach by explaining the concepts of machine learning and deep learning algorithms with applications. Supervised machine learning algorithms, ensemble machine learning algorithms, feature selection, deep learning techniques, and their applications are discussed. The eighteen chapters also build a clear understanding of the concepts through algorithms and case studies, illustrated with applications of machine learning and deep learning in different domains, including disease prediction, software defect prediction, online television analysis, medical image processing, etc. Each of the chapters, briefly described below, presents both a chosen approach and its implementation.
Audience
Researchers and engineers in artificial intelligence, computer scientists, and software developers.
Pradeep Singh, PhD, is an assistant professor in the Department of Computer Science Engineering, National Institute of Technology, Raipur, India. His current research interests include machine learning, deep learning, evolutionary computing, empirical studies on software quality, and software fault prediction models. He has more than 15 years of teaching experience and many publications in reputed international journals, conferences, and book chapters.
Shruthi H. Shetty*, Sumiksha Shetty†, Chandra Singh‡ and Ashwath Rao§
Department of ECE, Sahyadri College of Engineering & Management, Adyar, India
Abstract
The fundamental goal of machine learning (ML) is to enable computers to use data or past experience to solve a given problem. Artificial intelligence has given us remarkable web search, self-driving vehicles, practical speech recognition, and a vastly improved understanding of human genetic data. Many effective applications of ML already exist, including classifiers that learn from e-mail messages how to distinguish spam from non-spam. ML can be broadly divided into supervised, unsupervised, and reinforcement learning. Supervised ML (SML) is the branch of ML that typically relies on a skilled domain expert who "teaches" the learning system with the required supervision; it learns a function that maps inputs to desired outputs. SML is especially common in classification problems, since the aim is to get the computer to learn a descriptive framework that we have created. The annotated data is termed the training set, and unannotated data the testing set. When the annotations are discrete values they are called class labels, and continuous numerical annotations are called continuous target values. The objective of SML is to build a compact model of the distribution of class labels in terms of predictor features. The resulting classifier is then used to assign class labels to testing instances whose predictor values are known but whose class labels are unknown. Under certain assumptions, the larger the training set, the better the predictions on the test set; this motivates the need for numerous domain specialists, or even non-specialists, to provide labels for training the system. SML problems are grouped into classification and regression. In classification the result takes discrete values, and the aim is to predict the discrete value belonging to a specific class. In regression, a model is learned from labeled datasets and continuous-valued results are predicted for new data given to the algorithm. When choosing an SML algorithm, the heterogeneity, accuracy, redundancy, and linearity of the data should be examined beforehand. SML is used in a wide range of applications such as speech and object recognition, bioinformatics, and spam detection. Recently, advances in SML are being applied in solid-state materials science for calculating material properties and predicting their structure. This review covers various algorithms and real-world applications of SML. The key advantage of SML is that, once an algorithm has learned from data, it can perform its task automatically.
Keywords: Supervised machine learning, solid state material science, artificial intelligence, deep learning, linear regression, logistic regression, SVM, decision tree
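As a concrete illustration of the supervised workflow the abstract describes (learning from an annotated training set, then assigning class labels to an unannotated testing set), the following minimal sketch uses Python with scikit-learn and its bundled iris dataset. Neither the library nor the dataset is prescribed by the chapter; this is only one possible realization.

# Minimal supervised classification sketch (illustrative, assuming scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Annotated data: feature vectors X (predictor features) and class labels y.
X, y = load_iris(return_X_y=True)

# Split into a training set (annotated) and a testing set treated as unannotated.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Learn a compact model of the class-label distribution from the training set.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Assign class labels to testing instances whose labels are withheld.
y_pred = clf.predict(X_test)
print("test accuracy:", accuracy_score(y_test, y_pred))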
The historical background of machine learning (ML), like that of other artificial intelligence (AI) concepts, began with apparently promising work during the 1950s and 1960s, followed by a long stretch of stagnation known as the "winter of AI" [9]. More recently, there has been explosive interest in the field, essentially centered on deep learning. The start of the first decade of the 21st century proved to be a turning point in the history of ML, and this is explained by three simultaneous trends that together produced a noticeable synergetic effect. The first trend is big data, the second is the falling cost of parallel processing and memory, and the third is the revival and development of the perceptron idea in deep learning algorithms. The study of ML has grown from the efforts of a modest group of engineers investigating whether a machine could learn to solve problems and imitate the human mind, and a field of statistics that largely ignored computational considerations, into a broad discipline that has produced fundamental statistical and computational theories of learning processes.
ML is one of the fastest-growing fields in computer science. Many studies have been carried out to make machines intelligent; learning, one of the defining human traits, has been made an essential aspect of machines too. For example, suppose we are standing at a crowded railway station waiting for a friend. As we wait, hundreds of people pass by, each looking different, yet when our friend arrives we have no problem picking her out of the crowd. Recognizing people's faces is something we humans do effortlessly, but how would we program a computer to recognize a person? We could try to write a set of rules: our friend has long black hair and brown eyes, but that could describe billions of people. What is it about her that we recognize? Is it the shape of her nose? Can we put it into words? The truth is that we can recognize people without ever really knowing how we do it; we cannot describe every detail of how we recognize someone. The trouble is that to program a computer, we need to break the task down into precise steps, which makes it very difficult or even impossible to program a computer to recognize faces. Face recognition is an example of a task that people find very easy but that is very hard for computers. Such tasks are often called artificial intelligence, or AI, and ML is a subset of AI [1]. Traditionally, companies simply stored and handled data. Today, each time we purchase a product, visit a web page, or even walk around, we generate data; every one of us is not just a generator but also a consumer of information, whose needs must be inferred and whose interests must be anticipated. Think about a supermarket that markets thousands of products to millions of consumers, either in stores or through a web store. What the market needs is the ability to predict which customer is likely to buy which product, so as to maximize sales and profit, while every customer wants to find the most suitable products. We do not know precisely which people are likely to buy which item; customer behaviour changes over time and by geographic location. However, we know it is not arbitrary. People do not go to the store and buy things at random: they buy frozen yogurt in summer and warm clothes in winter. Therefore, there are definite patterns in the data.
The application of ML techniques to large databases is termed data mining [4, 17]. In data mining, a large volume of data is processed to construct a simple model with valuable use, for instance one with high predictive accuracy. To be intelligent, a system operating in a changing environment should be able to learn; if the system can learn and adapt to such change, the system designer need not anticipate and provide solutions for every possible situation. Many effective applications of ML already exist, including classifiers that learn from e-mail messages how to distinguish spam from non-spam. For huge volumes of data, manual prediction is an intractable task for people; to overcome this, the machine is trained to predict the future with the help of training and test datasets. Various types of ML algorithms are available for training the machine. A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E [8].
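The spam/non-spam classifier mentioned above can be phrased in exactly these terms: task T is classifying messages, experience E is a set of labeled examples, and performance P is accuracy on held-out messages. The toy sketch below, with an invented four-message corpus, shows the idea under the assumption that scikit-learn is available; it is an illustration, not the chapter's own method.

# Toy spam classifier: word counts plus naive Bayes (assuming scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "cheap loans click here",    # spam examples
    "meeting moved to friday", "lunch at noon tomorrow"  # non-spam examples
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = non-spam

# Experience E: turn the labeled messages into word-count features and learn.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
clf = MultinomialNB()
clf.fit(X, labels)

# Task T applied to a new message; performance P would be measured on a
# held-out test set rather than on this single example.
print(clf.predict(vectorizer.transform(["free prize inside"])))  # expected: [1]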
ML algorithms can be broadly classified into supervised, unsupervised, and reinforcement learning (RL). These algorithms are organized into a taxonomy based on the expected outcome.
Unsupervised learning (UL) is a kind of ML that searches for previously undetected patterns in a dataset without pre-existing labels and with minimal human supervision. Cluster analysis and making data samples digestible (e.g., dimensionality reduction) are the two main methods of UL. SML works under defined instructions, whereas UL works when the form of the result is unknown. UL algorithms are used to investigate the structure of the data, identify patterns, extract information, and execute the task [12, 15].
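As a minimal sketch of the cluster-analysis method just mentioned, the snippet below runs k-means on unlabeled points. It again assumes scikit-learn; the synthetic blobs stand in for any unlabeled dataset and are not taken from the chapter.

# Unsupervised clustering sketch: k-means groups points by structure alone.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Unlabeled data: the generated group labels are discarded (the underscore).
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Discover three clusters without any annotations or human supervision.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(X)
print(cluster_ids[:10])          # cluster assignment for the first 10 points
print(kmeans.cluster_centers_)   # discovered cluster centers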
Reinforcement learning (RL) can be thought of as a trial-and-error strategy of learning. For each action performed, the machine is given a reward point or a penalty point: if the choice is correct, the machine earns the reward point, and if the response is wrong it receives a penalty point. The RL algorithm is the interaction between the environment and the learning agent [14]. The learning agent balances exploitation and exploration: exploration is when the agent acts by trial and error, and exploitation is when it acts on the knowledge gained from the environment.
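A compact way to see the reward/penalty loop and the exploration-exploitation balance is an epsilon-greedy agent on a three-armed bandit. Everything below (the payoff probabilities, epsilon, the step count) is invented for the sketch; the chapter does not specify this setup.

# Epsilon-greedy bandit: explore sometimes, exploit the best estimate otherwise.
import random

true_reward_prob = [0.2, 0.5, 0.8]   # hidden payoff of each arm (the environment)
estimates = [0.0, 0.0, 0.0]          # the agent's learned value estimates
counts = [0, 0, 0]
epsilon = 0.1                        # fraction of steps spent exploring

for step in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                        # explore: try anything
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit: best so far
    # Reward point (1) for a payoff, penalty (0) otherwise.
    reward = 1 if random.random() < true_reward_prob[arm] else 0
    counts[arm] += 1
    # Incremental average keeps a running estimate of each arm's value.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("value estimates:", [round(e, 2) for e in estimates])  # approx [0.2, 0.5, 0.8]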
Supervised learning (SML) algorithms predict an unknown dependent variable from a given set of known predictors [20, 21].
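A minimal regression sketch of this idea follows: known predictor values x are used to estimate an unknown continuous dependent variable y. The synthetic data (y roughly equal to 3x + 2 plus noise) is invented for illustration, assuming NumPy and scikit-learn.

# Regression: learn a mapping from known predictors to a continuous target.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(100, 1))               # known predictor
y = 3.0 * x.ravel() + 2.0 + rng.normal(0, 1, 100)   # noisy dependent variable

model = LinearRegression()
model.fit(x, y)

# Predict the continuous target for new, unseen predictor values.
print(model.predict([[4.0], [7.5]]))   # roughly [14.0, 24.5]
print(model.coef_, model.intercept_)   # recovers approximately 3 and 2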
SML is especially common in classification problems, since the aim is to get the computer to learn a descriptive framework that we have created. In SML, the annotated data is termed the training set,...