Written and edited by a team of experts in the field, this collection of papers reflects the most up-to-date and comprehensive current state of machine learning and data science for industry, government, and academia.
Machine learning (ML) and data science (DS) are very active topics with an extensive scope, both in terms of theory and applications. They have been established as an important emergent scientific field and paradigm driving research evolution in disciplines such as statistics, computing science, and intelligence science, as well as practical transformation in domains such as science, engineering, the public sector, business, social science, and lifestyle. At the same time, their applications present important challenges that can often be addressed only with innovative machine learning and data science algorithms.
These algorithms encompass the larger areas of artificial intelligence, data analytics, machine learning, pattern recognition, natural language understanding, and big data manipulation. They also tackle related new scientific challenges, ranging from data capture, creation, storage, retrieval, sharing, analysis, optimization, and visualization, to integrative analysis across heterogeneous and interdependent complex resources for better decision-making, collaboration, and, ultimately, value creation.
Prateek Agrawal, PhD, completed his BTech in computer science engineering from Uttar Pradesh Technical University, Lucknow, India, and his MTech from ABV-IIITM, Gwalior, India. He received his PhD from IKG-Punjab Technical University, Punjab, India. He has more than ten years of research and teaching experience. He has worked as a post-doctoral researcher in the Department of ITEC, University of Klagenfurt, Austria, and also serves as an associate professor in the School of Computer Science Engineering, Lovely Professional University, India. He has authored more than 50 research papers in various peer-reviewed journals and conferences. He is a life member of the IET research society, a reviewer and editorial member of many journals of high repute, and has served as a technical program committee member of many international conferences.
Charu Gupta, PhD, graduated with her BE and MTech in computer science and engineering, with honors. She completed her doctoral degree at the Department of Computer Science, Banasthali Vidyapith, Rajasthan, India. Presently serving as an assistant professor at Bhagwan Parshuram Institute of Technology (BPIT), she has over 10 years of teaching experience. She is the faculty coordinator (Delhi Section) of the Free and Open Source Software Cell (FOSS Cell) of the International Centre for Free and Open Source Software (ICFOSS), Govt. of Kerala, India. She is also the faculty coordinator of the e-Yantra Lab Setup Initiative (eLSI), in collaboration with IIT Bombay, in the Department of Computer Science and Engineering, BPIT. She also has numerous papers in scientific and scholarly journals to her credit.
Vishu Madaan, PhD, received her BTech and MTech degrees in computer science engineering from Lovely Professional University, Phagwara, India and earned her PhD in computer science from IKG-Punjab Technical University, Punjab. She is an assistant professor with Lovely Professional University and has more than eight years of teaching and research experience. She has authored more than 30 research articles in peer-reviewed conferences and journals. She is also a member of IEEE and is a reviewer for many international conferences and technical journals.
Anand Sharma, PhD, received his PhD in engineering from MUST, Lakshmangarh, his MTech from ABV-IIITM, Gwalior, and his BE from RGPV, Bhopal. He has been working with Mody University of Science and Technology, Lakshmangarh, for over 10 years and has more than 14 years of teaching and research experience. He has pioneered research in the areas of information systems, information security, IoT, knowledge management, and machine learning. He is the Vice-Chairman of the CSI-Lakshmangarh Chapter and the Student Branch Coordinator of the CSI-MUST student branch. He has authored or edited five books and has more than 70 papers in scientific and scholarly journals and conferences. He has organized more than 15 conferences, seminars, and workshops, serves on the editorial boards of several journals, and is on the program committees of several international conferences.
Nisheeth Joshi, PhD, is an associate professor at Banasthali Vidyapith, India. He did his PhD in the area of natural language processing and is the recipient of the prestigious ISTE-U.P. Government National Award for Outstanding Work Done in Specified Areas of Engineering and Technology. He has authored several papers in scientific and technical journals and conferences. He has also been a mentor and consultant to various start-ups working in the areas of cognitive computing, natural language processing, speech processing, multimodal information retrieval, and machine translation. He was also a consultant to C-DAC Pune, where he developed the evaluation methodology for the Mantra Machine Translation System. This machine translation system is being used by the Rajya Sabha, the Upper House of the Parliament of India, and by the Department of Official Languages, Government of India. In addition, he has two technology transfers and four copyrights to his credit.
Preface xiii
Book Description xv
1 Machine Learning: An Introduction to Reinforcement Learning 1
Sheikh Amir Fayaz, Dr. S Jahangeer Sidiq, Dr. Majid Zaman and Dr. Muheet Ahmed Butt
1.1 Introduction 2
1.2 Reinforcement Learning Paradigm: Characteristics 11
1.3 Reinforcement Learning Problem 12
1.4 Applications of Reinforcement Learning 15
2 Data Analysis Using Machine Learning: An Experimental Study on UFC 23
Prashant Varshney, Charu Gupta, Palak Girdhar, Anand Mohan, Prateek Agrawal and Vishu Madaan
2.1 Introduction 23
2.2 Proposed Methodology 25
2.3 Experimental Evaluation and Visualization 31
2.4 Conclusion 44
3 Dawn of Big Data with Hadoop and Machine Learning 47
Balraj Singh and Harsh Kumar Verma
3.1 Introduction 48
3.2 Big Data 48
3.3 Machine Learning 53
3.4 Hadoop 55
3.5 Studies Representing Applications of Machine Learning Techniques with Hadoop 57
3.6 Conclusion 61
4 Industry 4.0: Smart Manufacturing in Industries -- The Future 67
Dr. K. Bhavana Raj
4.1 Introduction 67
5 COVID-19 Curve Exploration Using Time Series Data for India 75
Apeksha Rustagi, Divyata, Deepali Virmani, Ashok Kumar, Charu Gupta, Prateek Agrawal and Vishu Madaan
5.1 Introduction 76
5.2 Materials and Methods 77
5.3 Conclusion and Future Work 86
6 A Case Study on Cluster Based Application Mapping Method for Power Optimization in 2D NoC 89
Aravindhan Alagarsamy and Sundarakannan Mahilmaran
6.1 Introduction 90
6.2 Concept Graph Theory and NoC 91
6.3 Related Work 94
6.4 Proposed Methodology 97
6.5 Experimental Results and Discussion 100
6.6 Conclusion 105
7 Healthcare Case Study: COVID-19 Detection, Prevention Measures, and Prediction Using Machine Learning & Deep Learning Algorithms 109
Devesh Kumar Srivastava, Mansi Chouhan and Amit Kumar Sharma
7.1 Introduction 110
7.2 Literature Review 111
7.3 Coronavirus (COVID-19) 112
7.4 Proposed Working Model 118
7.5 Experimental Evaluation 130
7.6 Conclusion and Future Work 132
8 Analysis and Impact of Climatic Conditions on COVID-19 Using Machine Learning 135
Prasenjit Das, Shaily Jain, Shankar Shambhu and Chetan Sharma
8.1 Introduction 136
8.2 COVID-19 138
8.3 Experimental Setup 141
8.4 Proposed Methodology 142
8.5 Results and Discussion 143
8.6 Conclusion and Future Work 143
9 Application of Hadoop in Data Science 147
Balraj Singh and Harsh K. Verma
9.1 Introduction 148
9.2 Hadoop Distributed Processing 153
9.3 Using Hadoop with Data Science 160
9.4 Conclusion 164
10 Networking Technologies and Challenges for Green IOT Applications in Urban Climate 169
Saikat Samanta, Achyuth Sarkar and Aditi Sharma
10.1 Introduction 170
10.2 Background 170
10.3 Green Internet of Things 173
10.4 Different Energy-Efficient Implementation of Green IOT 177
10.5 Recycling Principle for Green IOT 178
10.6 Green IOT Architecture of Urban Climate 179
10.7 Challenges of Green IOT in Urban Climate 181
10.8 Discussion & Future Research Directions 181
10.9 Conclusion 182
11 Analysis of Human Activity Recognition Algorithms Using Trimmed Video Datasets 185
Disha G. Deotale, Madhushi Verma, P. Suresh, Divya Srivastava, Manish Kumar and Sunil Kumar Jangir
11.1 Introduction 186
11.2 Contributions in the Field of Activity Recognition from Video Sequences 190
11.3 Conclusion 212
12 Solving Direction Sense Based Reasoning Problems Using Natural Language Processing 215
Vishu Madaan, Komal Sood, Prateek Agrawal, Ashok Kumar, Charu Gupta, Anand Sharma and Awadhesh Kumar Shukla
12.1 Introduction 216
12.2 Methodology 217
12.3 Description of Position 222
12.4 Results and Discussion 224
12.5 Graphical User Interface 225
13 Drowsiness Detection Using Digital Image Processing 231
G. Ramesh Babu, Chinthagada Naveen Kumar and Maradana Harish
13.1 Introduction 231
13.2 Literature Review 232
13.3 Proposed System 233
13.4 The Dataset 234
13.5 Working Principle 235
13.6 Convolutional Neural Networks 239
13.6.1 CNN Design for Decisive State of the Eye 239
13.7 Performance Evaluation 240
13.8 Conclusion 242
References 242
Index 245
Sheikh Amir Fayaz1, Dr. S Jahangeer Sidiq2*, Dr. Majid Zaman3 and Dr. Muheet Ahmed Butt1
1Department of Computer Science, University of Kashmir, Srinagar, J&K, India
2School of Computer Applications, Lovely Professional University, Phagwara, Punjab, India
3Directorate of IT&SS, University of Kashmir, Srinagar, J&K, India
Abstract
Reinforcement Learning (RL) is a prevalent paradigm for finite sequential decision making under uncertainty. A typical RL algorithm operates with only restricted knowledge of the environment and with limited feedback on the quality of its decisions. To work efficiently in complex environments, learning agents require the capability to form convenient generalizations, that is, the ability to selectively overlook extraneous facts. It is a challenging task to develop a single representation that is suitable for a large problem setting. This chapter provides a brief introduction to reinforcement learning models, procedures, techniques, and the reinforcement learning process. Particular focus is on the aspects related to the agent-environment interface and how Reinforcement Learning can be used in various practical, everyday applications. The basic concept that we will explore is that of a solution to the Reinforcement Learning problem using the Markov Decision Process (MDP). We assume that the reader has a basic idea of machine learning concepts and algorithms.
Keywords: Reinforcement learning, Markov decision process, agent-environment interaction, exploitation, exploration
Many years ago, during the Industrial Revolution, we started automating physical solutions with machines, replacing animals with machines. We more or less knew how to pull something forward along a track, and we simply implemented that in machines. We use machines instead of humans and animals as laborers, and this made for a huge boom in productivity. After that, a second wave of automation occurred and is basically still happening. Now we are in what you could call the Digital Revolution, where instead of automating physical solutions, we automate mental solutions. As a canonical example, if we have a calculator and we know how to do division, we can program that into the calculator and have the machine perform the mental task in future operations. Though we automated these mental solutions, we still came up with the solutions ourselves, i.e., we decided what we want to do and how to do it and then implemented it on the machine. The next step is to define the problem and have the machine solve it by itself. For this, we require learning, i.e., we require something in addition, because if we do not put anything into the system, it cannot know anything. One thing we can put into a system is our knowledge. This is what was done with these machines, whether for mental or physical solutions, but the other thing we can put in is some knowledge of how the machine could learn by itself.
This chapter is structured as follows: the current section provides a brief introduction to machine learning and its types. The main focus in Section 1.1.3.3 is on Reinforcement Learning and its types, learning process, and RL learning concepts. Section 1.2 introduces the Reinforcement Learning paradigm and its characteristics. Section 1.3 defines the Reinforcement Learning (RL) problem, where the goal is to select actions so as to maximize the total reward. Section 1.4 gives the basic algorithm that defines the solution of the reinforcement learning problem using the Markov Decision Process (MDP) with its mathematical notation. Lastly, Section 1.5 briefly explains the applications of the Reinforcement Learning process in current day-to-day environments.
Machine learning is the science of getting computers to act by feeding them data and letting them learn through trials on their own. Here, we are not actually programming the machine but feeding it lots of data so that it can learn by itself [15, 20].
Definition: Machine learning is a subset of Artificial Intelligence (AI) which provides machines the capability to learn automatically and improve from experience without being explicitly programmed.
Example:
As kids, we were not able to differentiate between fruits like apples, cherries, and oranges. When we first saw how apples, oranges, and cherries look, we could not tell them apart, because we had not observed them enough to know what they look like. As we grew up, we learned what apples, cherries, and oranges look like, i.e., we came to know the color, shape, and size of an apple and so on. Similarly, when it comes to a machine, if we input images of an apple or any fruit, initially it will not be able to differentiate between them, because it does not have enough data about them. But if we keep feeding the machine with a number of images of these fruits (cherries, apples, and oranges), it will learn how to differentiate between the three. Just like humans learn by observing and collecting data, machines also learn when we give them a lot of data, i.e., when we input a lot of data, the machine will learn how to distinguish between the classes. The machine will basically train on the data and try to differentiate between the various fruits by using various machine learning algorithms.
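To make this concrete, here is a minimal, illustrative sketch (not taken from the chapter) of the same idea in Python with scikit-learn; the fruit features (weight in grams, a colour score) and their values are invented for the example.

# Hypothetical labeled training data: [weight_g, colour_score]
from sklearn.tree import DecisionTreeClassifier

X_train = [
    [150, 0.80], [170, 0.70],   # apples
    [8, 0.90],   [10, 0.95],    # cherries
    [130, 0.30], [140, 0.25],   # oranges
]
y_train = ["apple", "apple", "cherry", "cherry", "orange", "orange"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)           # the machine "observes" labeled examples

print(model.predict([[160, 0.75]]))   # a new fruit; expected output: ['apple']

The more (and more varied) labeled examples the model sees, the better it becomes at telling the classes apart, which is exactly the learning-from-data behaviour described above.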
There are three different types of machine learning:
This type of learning means directing a certain activity and making sure it is done correctly. It is one of the most popular machine learning methods; several classification and regression applications have been developed recently by researchers [52-58]. We feed the model with a set of data, called training data, which contains both the inputs and the corresponding expected outputs; the training data thus teaches the model the correct output for a particular input [14, 17-19]. The data in supervised learning is labeled data. In supervised learning we have a joint distribution of data points and labels [47-49]. Our goal is to maximize the probability of the classifier as shown below:
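(The equation referenced here did not survive extraction; based on the definitions that follow, a plausible reconstruction is the maximum-likelihood objective of the classifier over the labeled samples:)

\max_{\theta} \; \mathbb{E}_{(x, y) \sim D_s} \big[ \log p_{\theta}(y \mid x) \big]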
where "Ds" is the labeled data, and x (input) and y (output) are the samples from p(x,y), which is a marginal response to the joint distribution from the labelled data set sample.
This type of learning acts without anyone's direction or supervision. In this method, the model is given a dataset which is neither labeled nor classified [22, 23], so the machine does not know how the output should look [31, 50, 51]. For example, if we input an image of a fruit, we do not label it as, say, "apple". Instead, we only give the data and the machine learns on its own.
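(As above, the equation itself is missing from this excerpt; a plausible form, matching the definition that follows, is the maximum-likelihood objective over the unlabeled data:)

\max_{\theta} \; \mathbb{E}_{x \sim D_u} \big[ \log p_{\theta}(x) \big]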
where "Du" is the unlabeled data and x (input) is the sample from p(x).
Reinforcement learning is a type of machine learning which has been growing rapidly in the past few years. It has various applications, such as self-driving or automated cars and image processing techniques wherein we detect objects using reinforcement learning. This type of machine learning has become one of the most important types in the current world for establishing or encouraging a pattern of behavior. It is a learning technique where an agent interacts with its environment by generating actions and discovering errors or rewards. In other words, Reinforcement Learning is a type of machine learning where an agent learns to act in an environment by carrying out actions and observing the results [1, 16].
Reinforcement Learning is all about taking suitable actions in order to maximize the reward in a particular state or situation. In this kind of learning there is no expected output as in supervised learning; here, the reinforcement agent chooses which action to take in order to perform a specified task. In the absence of a training dataset, it is bound to learn from its own experience [2, 3].
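The total reward being maximized is commonly formalized as the discounted return (standard RL notation, assumed here rather than quoted from the chapter):

G_t = r_{t+1} + \gamma r_{t+2} + \gamma^{2} r_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^{k} \, r_{t+k+1}, \qquad 0 \le \gamma \le 1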
Thus, Reinforcement Learning is the type of machine learning where machines learn from evaluation, whereas other learning algorithms (such as supervised learning) learn from instructions. Reinforcement Learning is essentially an interaction between the agent and the environment (Figure 1.1): the agent performs an action which makes some changes in the environment and, based on that, the environment gives a reward (positive or negative) to the agent. If the reward is positive, that reinforces that particular action, and if the reward is negative, that tends to prevent the agent from performing the same action again [21]. This process goes on over many iterations until the agent has learned to interact with the environment in a meaningful way. This is the essence of reinforcement learning.
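A minimal sketch of this agent-environment loop in Python (illustrative only; the toy environment, agent, and update rule below are invented for the example and are not the chapter's code):

import random

class ToyEnvironment:
    """Two possible actions: action 1 yields reward +1, action 0 yields reward -1."""
    def step(self, action):
        return 1.0 if action == 1 else -1.0

class ToyAgent:
    def __init__(self):
        self.value = {0: 0.0, 1: 0.0}        # learned value estimate of each action

    def choose_action(self):
        # Mostly pick the best-known action, occasionally explore at random.
        if random.random() < 0.1:
            return random.choice([0, 1])
        return max(self.value, key=self.value.get)

    def learn(self, action, reward, lr=0.1):
        # Positive rewards reinforce the action, negative ones discourage it.
        self.value[action] += lr * (reward - self.value[action])

env, agent = ToyEnvironment(), ToyAgent()
for _ in range(100):                          # many iterations of interaction
    action = agent.choose_action()            # agent acts on the environment
    reward = env.step(action)                 # environment returns a reward
    agent.learn(action, reward)               # reward reinforces or discourages the action

print(agent.value)                            # the agent ends up preferring action 1

After enough iterations the repeated positive reward for action 1 reinforces it, which mirrors the feedback loop described above.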
Let us try to understand the concept of reinforcement learning through an example of Tic-Tac-Toe which you are familiar with and might have played a lot in your childhood. This game (Figure 1.2) essentially consists of 9 places on the square board and you need to populate the 9 places with two...