Chapter 1: Cloud computing
Proponents of public and hybrid clouds claim that cloud computing allows businesses to avoid or significantly reduce the upfront costs of IT infrastructure. They also assert that it lets organizations get their applications up and running faster, with improved manageability and less maintenance, and that it enables information technology teams to adjust resources more rapidly to meet fluctuating and unpredictable demand. The International Data Corporation (IDC) reports that global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025.
As early as 1993, Apple spin-off General Magic and AT&T used the term "cloud" to refer to platforms for distributed computing when describing their respective (paired) Telescript and Personal Link technologies, and the term has since come to be associated with such platforms. In the article "Bill and Andy's Excellent Adventure II," published in the April 1994 issue of Wired, Andy Hertzfeld commented on Telescript, General Magic's distributed programming language:
The great thing about Telescript is that now, instead of just having a device to program, we have the entire Cloud out there, where a single program can go and travel to many different sources of information and create a sort of virtual service. No one had conceived of that before. The example that Jim White, the designer of Telescript, X.400, and ASN.1, now uses is a date-arranging service: a software agent goes to a flower shop and orders flowers, then goes to a ticket shop and purchases tickets for a show, and all of this information is communicated to both parties.
During the 1960s, the initial concepts of time-sharing were popularized through Remote Job Entry (RJE); this terminology was mostly associated with large vendors such as IBM and DEC. By the early 1970s, full time-sharing solutions were available on a variety of platforms, including Multics (on GE hardware), Cambridge CTSS, and the earliest UNIX ports (on DEC hardware). Even so, the "data center" model, in which users submitted jobs to operators to be run on IBM mainframes, remained by far the most common.
In the 1990s, telecommunications companies that had previously offered primarily dedicated point-to-point data circuits began providing virtual private network (VPN) services, which offered comparable quality of service at a lower cost. By switching traffic as they saw fit to balance server use, they could use the available bandwidth of the overall network more efficiently. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extended this boundary to cover all servers as well as the network infrastructure.
The use of the cloud as a metaphor for virtualized services dates at least to General Magic in 1994, where it was used to describe the universe of "places" that mobile agents operating in the Telescript environment could visit. As Andy Hertzfeld described it:
"The beauty of Telescript is that now, instead of just having a device to program, we now have the entire Cloud out there, where a single program can go and travel to many different sources of information and create a sort of virtual service."
Based on its long-standing use in networking and telecommunications, the cloud metaphor is credited to General Magic communications employee David Hoffman. Beyond its use by General Magic itself, it was also used to promote AT&T's associated Personal Link Services.
Amazon launched its web services subsidiary, Amazon Web Services, in July 2002 with the stated goal of "enabling developers to build innovative and entrepreneurial applications on their own." Amazon's Simple Storage Service (S3) became available in March 2006, followed by Elastic Compute Cloud (EC2) in August of the same year. These products were among the first to use server virtualization to deliver infrastructure as a service (IaaS) with cheaper, on-demand pricing.
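The storage half of that early IaaS offering is still recognizable in today's S3 API. As an illustration only (this example is not part of the original text), a minimal sketch using the boto3 library might look like the following; the bucket name and object key are hypothetical placeholders, and AWS credentials are assumed to be configured separately.

```python
# Minimal sketch: storing and retrieving an object in Amazon S3 with boto3.
# Assumes AWS credentials are already configured (e.g., via environment
# variables); the bucket name and key below are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

# Upload a small text object on demand -- storage is billed per use,
# reflecting the metered, pay-as-you-go IaaS model described above.
s3.put_object(
    Bucket="example-reports",
    Key="2006/launch-notes.txt",
    Body=b"S3 launched in March 2006; EC2 followed in August 2006.",
)

# Retrieve the same object later from any machine with valid credentials.
response = s3.get_object(Bucket="example-reports", Key="2006/launch-notes.txt")
print(response["Body"].read().decode("utf-8"))
```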
The beta version of Google App Engine was released for public testing in April 2008.
Microsoft Azure, announced in October 2008, became generally available in February 2010.
In 2011, the United States government launched the Federal Risk and Authorization Management Program (FedRAMP), the first government-wide cloud services accreditation program; it provides standardized risk assessment methodologies for cloud products and services.
Preview versions of Google Compute Engine were released in May 2012, and the product became generally available in December 2013.
The concept of cloud computing is based on the idea that users should be able to reap the benefits of a wide range of technologies without having to have in-depth knowledge of or expertise with each of those technologies individually. The cloud is designed to reduce operating expenses and to assist users in concentrating on the most important aspects of their businesses rather than being hampered by technical challenges.
The characteristics of cloud computing are similar to those of:
Client-server model - Client-server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients); a minimal sketch of this pattern appears after this list.
Computer bureau - A service bureau providing computer-related services, particularly from the 1960s through the 1980s.
Grid computing - A form of distributed and parallel computing in which a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
Fog computing - A distributed computing paradigm that brings data, compute, storage, and application services closer to the client or near-user edge devices, such as network routers. Data is handled at the network level, on smart devices, and at the end-user client side (for example, mobile devices), rather than being sent to a remote location for processing.
The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity," is what is meant by the term "utility computing."
Peer-to-peer - A decentralized architecture without the need for central coordination, in which participants both contribute and consume resources (in contrast to the traditional client-server model).
Cloud sandbox - A live, isolated computer environment in the cloud in which a program, piece of code, or file can run without affecting the application in which it runs.
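To make the client-server distinction above concrete, here is a minimal, self-contained Python sketch (an illustration added here, not taken from the original text): a background thread plays the role of the service provider, answering requests over a local socket, while the main thread acts as the service requestor. The port number and echo protocol are arbitrary choices for the example.

```python
# Minimal sketch of the client-server model: one thread acts as the
# service provider (server), the main thread as the service requestor
# (client). The port number and echo protocol are arbitrary choices.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007
ready = threading.Event()

def serve_once() -> None:
    """Accept one connection, read the request, and send back a reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()  # signal that the server is now accepting connections
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024).decode("utf-8")
            conn.sendall(("echo: " + request).encode("utf-8"))

# Start the server, wait until it is listening, then act as the client.
server = threading.Thread(target=serve_once, daemon=True)
server.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from the client")
    print(cli.recv(1024).decode("utf-8"))  # -> echo: hello from the client

server.join()
```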
Cloud computing exhibits the following key characteristics:
Cost reductions are frequently claimed by cloud service providers. A public cloud delivery model converts capital expenditures, such as the purchase of servers, into operational expenditures. Studies that examine cost considerations in greater detail mostly conclude that cost savings depend on the type of activities supported and the type of infrastructure available within the organization.
Device and location independence enable users to access systems using a web browser regardless of their location or the device they use.
Maintenance of cloud environments is easier because the data is hosted on an external server maintained by a provider, eliminating the need to invest in data center hardware. IT maintenance of cloud computing is managed and updated by the cloud provider's IT maintenance team, which reduces maintenance costs compared with on-premises data centers.
Multitenancy enables the sharing of resources and costs across a large pool of users, allowing for:
centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
peak-load capacity increases (users need not engineer and pay for the resources and equipment required to meet their highest possible load levels)
improvements in utilization and efficiency for systems that are often only 10 to 20 percent utilized (a worked example appears after this list)
Performance is monitored by IT experts from the service provider, and consistent, loosely coupled architectures are constructed using web services as the system interface.
Productivity may be increased when multiple users can work on the same data simultaneously, rather than waiting for it to be saved and emailed.
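As a rough, back-of-the-envelope illustration of the utilization point made under multitenancy above (the workload figures below are invented for the example and are not from the original text), pooling ten tenants whose dedicated servers each sit at 10-20% utilization onto a shared pool sized with some headroom raises average utilization substantially:

```python
# Rough illustration of the utilization gain from multitenancy.
# The workload numbers are invented for the example; real figures vary widely.
import math

# Ten tenants, each owning a dedicated server but using only 10-20% of it.
dedicated_utilizations = [0.10, 0.12, 0.15, 0.18, 0.20,
                          0.11, 0.14, 0.16, 0.13, 0.17]

total_demand = sum(dedicated_utilizations)        # in "server-equivalents"
dedicated_servers = len(dedicated_utilizations)   # one server per tenant
avg_dedicated_utilization = total_demand / dedicated_servers

# A shared pool sized with 30% headroom for peaks needs far fewer servers.
pooled_servers = math.ceil(total_demand * 1.3)
avg_pooled_utilization = total_demand / pooled_servers

print(f"Dedicated: {dedicated_servers} servers at "
      f"{avg_dedicated_utilization:.0%} average utilization")
print(f"Pooled:    {pooled_servers} servers at "
      f"{avg_pooled_utilization:.0%} average utilization")
```

With these assumed figures, ten lightly loaded dedicated servers (about 15% average utilization) collapse into a pool of two servers running at roughly 73% utilization, which is the kind of efficiency gain the multitenancy characteristic refers to.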