Acknowledgments
1 Introduction
References
2 The Business of Cloud Computing
2.1 IT Industry Transformation through Virtualization and Cloud
2.2 The Business Model Around Cloud
2.2.1 Cloud Providers
2.2.2 Software and Service Vendors
2.3 Taking Cloud to the Network Operators
References
3 CPU Virtualization
3.1 Motivation and History
3.2 A Computer Architecture Primer
3.2.1 CPU, Memory, and I/O
3.2.2 How the CPU Works
3.2.3 In-program Control Transfer: Jumps and Procedure Calls
3.2.4 Interrupts and Exceptions-the CPU Loop Refined
3.2.5 Multi-processing and its Requirements-The Need for an Operating System
3.2.6 Virtual Memory-Segmentation and Paging
3.2.7 Options in Handling Privileged Instructions and the Final Approximation of the CPU Loop
3.2.8 More on Operating Systems
3.3 Virtualization and Hypervisors
3.3.1 Model, Requirements, and Issues
3.3.2 The x86 Processor and Virtualization
3.3.3 Dealing with a Non-virtualizable CPU
3.3.4 I/O Virtualization
3.3.5 Hypervisor Examples
3.3.6 Security
References
4 Data Networks-The Nervous System of the Cloud
4.1 The OSI Reference Model
4.1.1 Host-to-Host Communications
4.1.2 Interlayer Communications
4.1.3 Functional Description of Layers
4.2 The Internet Protocol Suite
4.2.1 IP-The Glue of the Internet
4.2.2 The Internet Hourglass
4.3 Quality of Service in IP Networks
4.3.1 Packet Scheduling Disciplines and Traffic Specification Models
4.3.2 Integrated Services
4.3.3 Differentiated Services
4.3.4 Multiprotocol Label Switching (MPLS)
4.4 WAN Virtualization Technologies
4.5 Software-Defined Network
4.6 Security of IP
References
5 Networking Appliances
5.1 Domain Name System
5.1.1 Architecture and Protocol
5.1.2 DNS Operation
5.1.3 Top-Level Domain Labels
5.1.4 DNS Security
5.2 Firewalls
5.2.1 Network Perimeter Control
5.2.2 Stateless Firewalls
5.2.3 Stateful Firewalls
5.2.4 Application-Layer Firewalls
5.3 NAT Boxes
5.3.1 Allocation of Private IP Addresses
5.3.2 Architecture and Operation of the NAT Boxes
5.3.3 Living with NAT
5.3.4 Carrier-Grade NAT
5.4 Load Balancers
5.4.1 Load Balancing in a Server Farm
5.4.2 A Practical Example: A Load-Balanced Web Service
5.4.3 Using DNS for Load Balancing
References
6 Cloud Storage and the Structure of a Modern Data Center
6.1 Data Center Basics
6.1.1 Compute
6.1.2 Storage
6.1.3 Networking
6.2 Storage-Related Matters
6.2.1 Direct-Attached Storage
6.2.2 Network-Attached Storage
6.2.3 Storage Area Network
6.2.4 Convergence of SAN and Ethernet
6.2.5 Object Storage
6.2.6 Storage Virtualization
6.2.7 Solid-State Storage
References
7 Operations, Management, and Orchestration in the Cloud
7.1 Orchestration in the Enterprise
7.1.1 The Service-Oriented Architecture
7.1.2 Workflows
7.2 Network and Operations Management
7.2.1 The OSI Network Management Framework and Model
7.2.2 Policy-Based Management
7.3 Orchestration and Management in the Cloud
7.3.1 The Life Cycle of a Service
7.3.2 Orchestration and Management in OpenStack
7.4 Identity and Access Management
7.4.1 Implications of Cloud Computing
7.4.2 Authentication
7.4.3 Access Control
7.4.4 Dynamic Delegation
7.4.5 Identity Federation
7.4.6 OpenStack Keystone (A Case Study)
References
Appendix: Selected Topics
A.1 The IETF Operations and Management Standards
A.1.1 SNMP
A.1.2 COPS
A.1.3 Network Configuration (NETCONF) Model and Protocol
A.2 Orchestration with TOSCA
A.3 The REST Architectural Style
A.3.1 The Origins and Development of Hypermedia
A.3.2 Highlights of the World Wide Web Architecture
A.3.3 The Principles of REST
A.4 Identity and Access Management Mechanisms
A.4.1 Password Management
A.4.2 Kerberos
A.4.3 Access Control Lists
A.4.4 Capability Lists
A.4.5 The Bell-LaPadula Model
A.4.6 Security Assertion Markup Language
A.4.7 OAuth 2.0
A.4.8 OpenID Connect
A.4.9 Access Control Markup Language
References
Index
If the seventeenth and early eighteenth centuries are the age of clocks, and the later eighteenth and the nineteenth centuries constitute the age of steam engines, the present time is the age of communication and control.
Norbert Wiener (from the 1948 edition of Cybernetics: or Control and Communication in the Animal and the Machine).
It is unfortunate that we don't remember the exact date of the extraordinary event we are about to describe, except that it took place sometime in the Fall of 1994, when Professor Noah Prywes of the University of Pennsylvania gave a memorable invited talk at Bell Labs, at which two authors of this book were present. The main point of the talk was a proposal that AT&T (of which Bell Labs was a part at the time) should go into the business of providing computing services, in addition to telecommunications services, to other companies by actually running those companies' data centers. "All they need is just to plug in their terminals so that they receive IT services as a utility. They would pay anything to get rid of the headaches and costs of operating their own machines, upgrading software, and what not."
Professor Prywes, whom we will meet more than once in this book, was well known at Bell Labs as a software visionary and, more than that, as the founder and CEO of a successful software company, Computer Command and Control. He was suggesting something that appeared too extravagant even to the researchers. The core business of AT&T at that time was telecommunications services, and AT&T's major enterprise customers were buying customer premises equipment (such as private branch exchange switches and the machines that ran software in support of call centers). In other words, the enterprise was buying things to run on premises rather than outsourcing things to the network provider!
Most attendees saw the merit of the idea but could not immediately relate it to their day-to-day work or, more importantly, to the company's stated business plan. Furthermore, at that very moment the Bell Labs computing environment was migrating from the Unix programming environment hosted on mainframes and Sun workstations to Microsoft Office-powered personal computers. It is not that we, who "grew up" with the Unix operating system, liked the change, but we were told that this was the way the industry was going (and it was!) as far as office information technology was concerned. But if so, then the enterprise would be moving in exactly the opposite direction, by placing computing in the hands of each employee. Professor Prywes did not deny the pace of acceptance of personal computing; his argument was that there was much more to enterprises than what was occurring inside their individual workstations: payroll databases, for example.
There was a lively discussion, which quickly turned to the details. Professor Prywes cited the achievements in virtualization and massive parallel-processing technologies, which were sufficient to enable his vision. These arguments were compelling, but ultimately the core business of AT&T was networking, and networking was centered on telecommunications services.
Still, telecommunications services were provided by software, and even the telephone switches were but peripheral devices controlled by computers. It was in the 1990s that virtual telecommunications networking services such as Software Defined Networks (not to be confused with the namesake development in data networking, which we will cover in Chapter 4) were emerging on a purely software and data communications platform called the Intelligent Network. It was on the basis of the latter that Professor Prywes thought the computing services could be offered. In summary, the idea was to combine data communications with centralized, powerful computing centers, all under the central command and control of a major telecommunications company. All of us in the audience were intrigued.
The idea of computing as a public utility was not new. It had been outlined by Douglas F. Parkhill in his 1966 book [1].
In the end, however, none of us could sell the idea to senior management. The times the telecommunications industry was going through in 1994 could best be characterized as "interesting," and AT&T did not fare particularly well, for a number of reasons. Even though Bell Labs was at the forefront of the development of all the relevant technologies, recommending them to businesses was a different matter, especially when a radical change of business model was being proposed, and especially in turbulent times.
About a year later, AT&T announced its trivestiture. The two authors moved, along with a large part of Bell Labs, into the equipment manufacturing company that became Lucent Technologies and, 10 years later, merged with Alcatel to form Alcatel-Lucent.
At about the same time, Amazon launched a service called Elastic Compute Cloud (EC2), which delivered pretty much what Professor Prywes had described to us. Here an enterprise user, located anywhere in the world, could create, for a charge, virtual machines in the "Cloud" (or, to be more precise, in one of the Amazon data centers) and deploy any software on these machines. But not only that: the machines were elastic. As the user's demand for computing power grew, so did the machine power, magically increasing to meet the demand, along with the appropriate cost; when the demand dropped, so did the computing power delivered, and with it the cost. Hence, the enterprise did not need to invest in purchasing and maintaining computers; it paid only for the computing power it actually received and could get as much of it as necessary!
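The economics of this elasticity are easy to see with a back-of-the-envelope sketch. The short Python program below is only an illustration of the pay-per-use idea, not of any actual EC2 interface; the per-VM capacity, the hourly rate, and the traffic profile are assumptions invented for the example.

```python
# A toy illustration of the elastic, pay-per-use model described above.
# This is NOT Amazon's API; the per-VM capacity, hourly rate, and traffic
# figures are invented purely to make the arithmetic concrete.

HOURLY_RATE_PER_VM = 0.10   # hypothetical charge for one virtual machine per hour
VM_CAPACITY_RPS = 100       # hypothetical requests per second one VM can serve

def vms_needed(demand_rps: int) -> int:
    """Provision just enough VMs for the current load (always at least one)."""
    return max(1, -(-demand_rps // VM_CAPACITY_RPS))   # ceiling division

def usage_cost(hourly_demand_rps: list[int]) -> float:
    """Charge only for the VM-hours actually provisioned, hour by hour."""
    return sum(vms_needed(d) for d in hourly_demand_rps) * HOURLY_RATE_PER_VM

# One day of traffic with an afternoon spike: capacity (and cost) rises and
# falls with demand instead of being sized for the peak around the clock.
demand = [50] * 8 + [400] * 4 + [900] * 4 + [200] * 8   # 24 hourly readings
print("VM-hours used:", sum(vms_needed(d) for d in demand))
print(f"Cost for the day: ${usage_cost(demand):.2f}")
```

With these made-up numbers the day consumes 76 VM-hours rather than the 216 (24 hours of 9 peak-sized machines) that a statically provisioned installation would require, which is precisely the economic argument on which the elastic model rests.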
As a philosophical aside: one way to look at the development of computing is through the prism of dialectics. As depicted in Figure 1.1(a), with mainframe-based computing as the thesis, the industry moved to personal-workstation-based computing, the antithesis. The spiral development, fostered by advances in data networking, distributed processing, and software automation, then brought forth the Cloud as the synthesis, in which the convenience of seemingly central, on-demand computing is combined with the autonomy of a user's computing environment. Another spiral (described in detail in Chapter 2) is depicted in Figure 1.1(b), which shows how the Public Cloud became the antithesis to the thesis of the traditional IT data center, inviting the outsourcing of development (via "Shadow IT" and the Virtual Private Cloud). The synthesis is the Private Cloud, in which the Cloud has moved computing back to the enterprise, but in a very novel form.
Figure 1.1 Dialectics in the development of Cloud Computing: (a) from mainframe to Cloud; (b) from IT data center to Private Cloud.
At this point we are ready to introduce formal definitions, which have been agreed on universally and thus form a standard in themselves. The definitions have been developed at the National Institute of Standards and Technology (NIST) and published in [2]. To begin with, Cloud Computing is defined as a model "for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This Cloud model is composed of five essential characteristics, three service models, and four deployment models.
The five essential characteristics are presented in Figure 1.2.
Figure 1.2 Essential characteristics of Cloud Computing. Source: NIST SP 800-145, p. 2.
The three service models, now well known, are Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). NIST defines them thus: