This book about web scraping covers practical concepts with detailed explanations and example code. We will introduce you to the essential topics in extracting or scraping data (that is, high-quality data) from websites, using effective techniques from the web and the Python programming language.
In this chapter, we are going to understand basic concepts related to web scraping. Whether or not you have any prior experience in this domain, you will easily be able to proceed with this chapter.
The discussion of the web or websites in our context refers to pages or documents including text, images, style sheets, scripts, and video content, built using a markup language such as HTML. In effect, a web page is a container for various kinds of content.
Before defining web scraping, it is worth asking why data is in such demand in the first place.
Most of us will have come across the concept of data and the benefits or usage of data in deriving information, decision-making, gaining insights from facts, or even knowledge discovery. There has been growing demand for data, or high-quality data, in most industries globally (such as governance, medical sciences, artificial intelligence, agriculture, business, sport, and R&D).
We will learn what exactly web scraping is, explore the techniques and technologies it is associated with, and find and extract data from the web, with the help of the Python programming language, in the chapters ahead.
In this chapter, we are going to cover the following main topics:
You can use any Operating System (OS) (such as Windows, Linux, or macOS) along with an up-to-date web browser (such as Google Chrome or Mozilla Firefox) installed on your PC or laptop.
Scraping is the process of extracting, copying, screening, or collecting data. Extracting data from the web (a collection of websites, web pages, and other internet-related resources) for a certain purpose is normally called web scraping. Data collection and analysis are crucial to information gathering, decision-making, and research. However, as data can easily be manipulated, web scraping should be carried out with caution.
The popularity of the internet and web-based resources is causing information domains to evolve every day, which is also leading to growing demand for raw data. Data is a basic requirement in the fields of science, technology, and management. Collected or organized data is processed, analyzed, compared with historical data, and used to train Machine Learning (ML) models with various algorithms and logic to obtain estimates, derive information, and gain further knowledge.
Web scraping provides the tools and techniques to collect data from websites for personal or business-related needs, subject to legal considerations.
As seen in Figure 1.1, we obtain data from various websites based on our needs: we write and execute crawlers, collect the necessary content, and store it. We then analyze this collected data to derive information that supports decision-making.
Figure 1.1: Web scraping - storing web content as data
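The collect-and-store workflow of Figure 1.1 can be sketched with standard-library Python alone. In this minimal sketch, the page content is a hard-coded sample rather than a live download, and the "storage" step writes to an in-memory CSV buffer; the URL and sample HTML are placeholders, not taken from the book:

```python
import csv
import io
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text found inside the <title> tag of a page."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Stand-in for content fetched from a website with a crawler
page = "<html><head><title>Example Domain</title></head><body></body></html>"

# Collect: parse the page and pull out the piece of data we need
extractor = TitleExtractor()
extractor.feed(page)

# Store: write the collected data as CSV (here, an in-memory buffer)
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["url", "title"])
writer.writerow(["https://example.com", extractor.title])

print(extractor.title)  # Example Domain
```

In a real scraper, the hard-coded `page` string would be replaced by an HTTP download, and the buffer by a file or database, but the fetch-extract-store shape stays the same.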
We will explore more about scraping and the analysis of data in later chapters.
There are some legal factors that are also to be considered before performing scraping tasks. Most websites contain pages such as Privacy Policy, About Us, and Terms and Conditions, where information on legal action and prohibited content, as well as general information, is available. It is a developer's ethical duty to comply with these terms and conditions before planning any scraping activities on a website.
Important note
Scraping, web scraping, and crawling are terms that are generally used interchangeably in both the industry and this book. However, they have slightly different meanings. Crawling, also known as spidering, is a process used to browse through the links on websites and is often used by search engines for indexing purposes, whereas scraping is mostly related to content extraction from websites.
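The crawling side of this distinction can be illustrated without any network traffic. The following sketch walks a hypothetical link graph (standing in for a small website) breadth-first, the way a search-engine spider discovers pages; note that it only *visits* pages, and extracts nothing, which is exactly what separates it from scraping:

```python
from collections import deque

# Hypothetical link graph standing in for a small website:
# each page maps to the pages it links to
links = {
    "/": ["/about", "/products"],
    "/about": ["/"],
    "/products": ["/products/1", "/products/2"],
    "/products/1": [],
    "/products/2": [],
}

def crawl(start):
    """Breadth-first traversal of links, visiting each page once."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(crawl("/"))  # every reachable page, visited exactly once
```

A scraper would add an extraction step at the point where each page is visited; a crawler for indexing purposes typically only records that the page exists and what it links to.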
You now have a basic understanding of web scraping. In the upcoming section, we will explore the latest web-based technologies, which are extremely helpful in web scraping.
A web page is not only a document or container of content. The rapid development in computing and web-related technologies today has transformed the web, with more security features being implemented and the web becoming a dynamic, real-time source of information. Many scraping communities gather historic data; some analyze hourly data or the latest obtained data.
At our end, we (users) use web browsers (such as Google Chrome, Mozilla Firefox, and Safari) as an application to access information from the web. Web browsers provide various document-based functionalities to users and contain application-level features that are often useful to web developers.
Web pages that users view or explore through their browsers are not just single documents, and various technologies exist for developing them. A web page is a document that contains blocks of HTML tags; most of the time, it is built from sub-blocks, linked as dependent or independent components, using interlinked technologies such as JavaScript and Cascading Style Sheets (CSS).
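These interlinked components can be seen by parsing a page. As a small sketch, using only the standard library and a made-up sample page, the following lists the external components (a style sheet, a script, and a hyperlink) that a page pulls in:

```python
from html.parser import HTMLParser

class ComponentLister(HTMLParser):
    """Records the external components referenced by a page."""
    def __init__(self):
        super().__init__()
        self.components = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" in attrs:
            self.components.append(("script", attrs["src"]))
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.components.append(("css", attrs["href"]))
        elif tag == "a" and "href" in attrs:
            self.components.append(("link", attrs["href"]))

# Hypothetical sample page, not taken from any real site
sample = """
<html><head>
  <link rel="stylesheet" href="style.css">
  <script src="app.js"></script>
</head><body>
  <a href="/about">About</a>
</body></html>
"""

lister = ComponentLister()
lister.feed(sample)
print(lister.components)
```

Seeing which scripts and style sheets a page depends on is often the first step in understanding how its content is assembled, and therefore how best to scrape it.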
An understanding of the general concepts of web pages and the techniques of web development, along with the technologies found inside web pages, will provide more flexibility and control in the scraping process. A lot of the time, a developer can also employ reverse-engineering techniques.
Reverse engineering is an activity that involves breaking down and examining the concepts that were required to build certain products. For more information on reverse engineering, please refer to the GlobalSpec article How Does Reverse Engineering Work?, available at https://insights.globalspec.com/article/7367/how-does-reverse-engineering-work.
Here, we will introduce and explore a few of the available web technologies that can help and guide us in the process of data extraction.
Hypertext Transfer Protocol (HTTP) is an application protocol that transfers resources (web-based), such as HTML documents, between a client and a web server. HTTP is a stateless protocol that follows the client-server model. Clients (web browsers) and web servers communicate or exchange information using HTTP requests and HTTP responses, as seen in Figure 1.2:
Figure 1.2: HTTP (client and server or request-response communication)
Requests and responses are cyclic in nature: they flow like questions and answers from the client to the server, and back again.
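One way to see the client's side of this cycle without sending any network traffic is to build a request object with Python's `urllib` and inspect it. The URL and `User-Agent` string below are illustrative placeholders:

```python
import urllib.request

# Build (but do not send) a request, to inspect what the client would transmit
req = urllib.request.Request(
    "http://www.example.com/index.html",
    headers={"User-Agent": "my-scraper/0.1"},  # identifies the client to the server
)

print(req.get_method())  # the HTTP method; GET is the default when no body is attached
print(req.host)          # the server the request is addressed to
print(req.get_header("User-agent"))
```

Calling `urllib.request.urlopen(req)` would actually send this request and return the server's HTTP response, completing the request-response cycle shown in Figure 1.2.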
Another encrypted and more secure version of the HTTP protocol is Hypertext Transfer Protocol Secure (HTTPS). It uses Secure Sockets Layer (SSL) (learn more about SSL at https://developer.mozilla.org/en-US/docs/Glossary/SSL) and Transport Layer Security (TLS) (learn more about TLS at https://developer.mozilla.org/en-US/docs/Glossary/TLS) to communicate encrypted content between a client and a server. This type of security allows clients to exchange sensitive data with a server in a safe manner. Activities such as banking, online shopping, and e-payment gateways use HTTPS to make sensitive data safe and prevent it from being exposed.
An HTTP request URL begins with http://, for example, http://www.packtpub.com, and an HTTPS request URL begins with https://, such as https://www.packtpub.com.
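The scheme of a URL can be checked programmatically with the standard library's `urllib.parse` module, which is handy when a scraper must decide whether a link points at an encrypted endpoint:

```python
from urllib.parse import urlparse

for url in ("http://www.packtpub.com", "https://www.packtpub.com"):
    parts = urlparse(url)
    secure = parts.scheme == "https"
    print(parts.scheme, parts.netloc, "secure" if secure else "not secure")
```

`urlparse` also exposes the path, query string, and fragment of a URL, which we will make use of when constructing requests in later chapters.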
You have now learned a bit about HTTP. In the next section, you will learn about HTTP requests (or HTTP request methods).
Web browsers or clients submit their requests to the server. Requests are forwarded to the server using various methods (commonly known as HTTP request methods), such as GET and...