1
Artificial Intelligence (AI) in Food Processing
S. Abinaya1, Anil Panghal2, Sunil Kumar2, Anju Kumari1, Nitin Kumar2 and Navnidhi Chhikara3*
1Centre of Food Science and Technology, Chaudhary Charan Singh Haryana Agricultural University, Hisar, India
2Department of Processing and Food Engineering, AICRP-PHET, Chaudhary Charan Singh Haryana Agricultural University, Hisar, India
3Department of Food Technology, Guru Jambheshwar University of Science and Technology, Hisar, India
Abstract
The food processing sector occupies a significant place among business sectors globally and is a major source of employment. The efficient production and packing of food products still depend heavily on the human workforce, and this reliance makes it difficult for food industries to maintain both food safety and an uninterrupted demand-supply chain. Food is a basic human need, so reducing food waste, streamlining the supply chain, and improving food delivery, logistics, and safety are imperative. The most efficient way to address these problems in the food industry is through industrial automation, and Artificial Intelligence (AI) plays a significant role in achieving these goals. AI is defined as a branch of study that mimics human thought processes, learning abilities, and knowledge storage systems. In recent years, AI has become an integral part of technological advancement in the food industry as food demand rises with a growing world population. The food industry increasingly needs these intelligent systems because of their versatility in tasks such as food quality assessment, quality control, classification of foods, food processing, and forecasting. The applications of artificial intelligence in the food sector will multiply as technology advances and application scenarios broaden. This chapter sheds light on cutting-edge AI and its technologies in the food processing sector. The first part of this chapter explains what AI is, its components and techniques, and the drivers of its popularity across various sectors. The second part provides insight into the food processing applications of different AI technologies, including machine learning, expert systems, fuzzy logic systems, and machine vision; it also discusses their benefits, drawbacks, and approaches, offering guidance for choosing the best methods to advance future developments in AI and the food industry. Furthermore, it explains how combining two or more AI techniques can simplify tedious processes and applications. In day-to-day life, the application of AI continues to grow because of its ability to improve waste management and maintain food quality, hygiene, and safety. In the future, AI will significantly change the food processing sector, enabling more affordable and healthier production for the growing population.
Keywords: AI, emerging technologies, expert system, neural networks, fuzzy logic, food industry, quality assessment, food safety
1.1 Introduction
With the advancement of mechanization, the processing sector and modern industry have reached productivity peaks in a matter of decades. The processing sector was the first to be transformed by technological developments, and many other industries followed (Volter, 2013). In the early 1900s, the idea of automation performing jobs with greater precision and eliminating human labor in all disciplines was a vision of hope for a better future. Artificial Intelligence, popularly known as AI, has risen to prominence in recent times, surpassing humans in activities such as object identification and data analysis (Cohen & Feigenbaum, 2014). As learning systems and processing capacity continue to improve, this trend is expected to advance even further. The origins of automation can be traced back to the early 1800s, when it powered the manufacturing sector and eventually led to current technical advancements (MacLeod, 2002). Automation has now infiltrated nearly all fields and is outperforming market trades by a wide margin (Frohm et al., 2008). The majority of equipment in the 18th century was designed to perform simple, repetitive operations such as welding and spinning, allowing human workers to focus on more sophisticated activities (Mantoux, 2013). From the early 1900s until the present, various forms of automation have appeared, ultimately extending to a wide range of sectors. Nevertheless, recent advances in AI have caused humanity to reconsider the potential of learning and ask, "What might be the depths of AI when machines can learn?"

AI encompasses a collection of approaches and phenomena, among which two fundamental principles, Neural Networks (NN) and Deep Learning (DL), are credited with AI's remarkable progress (Norvig, 2002). AI is a term used to describe computer-generated intelligence that can analyze, plan, comprehend, and interpret human language (Wang, 2008). It is the study and creation of digital systems capable of performing activities that would ordinarily require human intellect, such as vision, speech recognition, strategic planning, and language processing (Kumar, 2018). John McCarthy, the pioneer of AI, described it as the science and engineering of creating intelligent machines, particularly intelligent computer programs. Artificial intelligence can be divided into two categories: strong AI and weak AI. The weak AI principle holds that a computer should be built to serve as an intelligent element that mimics human decisions, whereas the strong AI concept holds that the machine should be able to replicate the human brain (Borana & Jodhpur, 2016). AI offers a range of algorithms to choose from, including reinforcement learning, Expert Systems (ES), Fuzzy Logic (FL), swarm intelligence, the Turing test, cognitive science, Artificial Neural Networks (ANN), and logic programming (Borana & Jodhpur, 2016). AI's potential has made it one of the most popular tools in fields such as decision-making and process optimization, with the aim of lowering total costs, improving quality, and increasing profitability (Ge et al., 2017; Mahadevappa et al., 2017). Food demand is expected to increase by 59 to 98% by 2050 as the world's population grows (Elferink et al., 2016). Consequently, AI is being used to meet this demand in areas such as supply chain management, food sorting, production development, food quality enhancement, and adequate food hygiene (Funes et al., 2015).
ANN has been used to assist complex problem-solving in the food industry (Funes et al., 2015), and the classification and prediction of variables become simpler and easier with ANN (Correa et al., 2018), which has led to a growing demand for ANN in recent years. In addition, FL and ANN have been employed as controllers in the areas of food safety, quality management, yield increase, and cost reduction (Kondakci & Zhou, 2017; Wang et al., 2017).
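As a rough illustration of how an ANN can support the classification tasks mentioned above, the sketch below trains a small multilayer perceptron to label food samples as acceptable or defective from a handful of quality attributes. The feature names, the synthetic dataset, and the network size are purely illustrative assumptions and are not taken from the works cited in this chapter.

```python
# Minimal sketch: ANN-based food quality classification (illustrative only).
# The assumed attributes (moisture, pH, color index, firmness) and the
# synthetic labels are hypothetical; a real study would use measured samples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic samples: four quality attributes per food item.
X = rng.normal(loc=[12.0, 5.5, 0.6, 30.0], scale=[2.0, 0.4, 0.1, 5.0], size=(400, 4))
# Toy labeling rule: low moisture and near-neutral pH -> acceptable (1).
y = ((X[:, 0] < 13.0) & (np.abs(X[:, 1] - 5.5) < 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale the inputs, then fit a small multilayer perceptron (one hidden layer).
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print("Test accuracy:", clf.score(scaler.transform(X_test), y_test))
```

In practice, the input attributes would come from instrumental measurements or machine vision features, and the network architecture would be tuned to the dataset at hand.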
1.2 Evolution of Artificial Intelligence
For scientists, Artificial Intelligence is not a new term or technique; the concept is much older. There are even tales of mechanical men in ancient Greek and Egyptian mythologies. The milestones in the history of AI that trace the route from its formation to current developments are listed below (McCorduck & Cfe, 2004):
- In 1943, Warren McCulloch and Walter Pitts published the first work on what is today known as artificial intelligence. They proposed a model of artificial neurons.
- In the year 1949, Donald Hebb proposed an updating rule for modifying the strength of connections between neurons, now known as Hebbian learning.
- In the year 1950, Alan Turing, an English mathematician, laid the groundwork for machine learning. In his paper "Computing Machinery and Intelligence," Turing proposed a test, now known as the Turing test, to determine whether a machine can demonstrate intelligent behavior comparable to human intelligence.
- In the year 1955, Allen Newell and Herbert A. Simon built the Logic Theorist, regarded as the first artificial intelligence program. It proved 38 of 52 mathematical theorems and discovered new, more concise proofs for several of them.
- At the Dartmouth Conference in 1956, John McCarthy, an American computer scientist, coined the term "Artificial Intelligence," and AI became a recognized academic discipline for the first time. High-level programming languages such as FORTRAN, LISP, and COBOL were created during this period, and interest in AI ran high.
- In the year 1966, researchers focused on developing algorithms that could solve mathematical problems.
- In the year 1972, Japan produced WABOT-1, the world's first intelligent humanoid robot.
- The first AI winter took place between 1974 and 1980. An AI winter refers to a period when computer scientists faced a severe shortage of government funding for AI research; during these winters, public interest in AI also declined.
- After a brief hiatus, AI returned with expert systems, which were built to mimic the decision-making ability of a human expert.
- Between 1987 and 1993, a second AI winter occurred. Investors and governments once again halted funding for AI research, citing excessive costs and ineffective results. XCON, for example, was a...