"Practical Parquet Engineering" "Practical Parquet Engineering" is an authoritative and comprehensive guide to mastering the design, implementation, and optimization of Apache Parquet, the industry-standard columnar storage format for big data analytics. Beginning with the architectural fundamentals, the book elucidates Parquet's design philosophy and core principles, providing a nuanced understanding of its logical and physical models. Readers will benefit from in-depth comparisons to alternative formats like ORC and Avro, along with explorations of schema evolution, metadata management, and the unique benefits of self-describing storage-making this an essential reference for anyone seeking to build resilient and efficient data infrastructure. Moving from theory to hands-on application, the book offers actionable best practices for both writing and querying Parquet at scale. Topics such as file construction, encoding strategies, compression, and partitioning are addressed with precision, alongside nuanced guidance for language-specific implementations and optimizing data pipelines in distributed and cloud environments. Advanced chapters cover real-world performance tuning, including benchmarking, profiling, cache strategies, and troubleshooting complex bottlenecks in production. Readers will also learn how to leverage Parquet's rich metadata and statistics for query acceleration, and how to integrate seamlessly with modern analytics frameworks like Spark, Presto, and Hive. Addressing emerging requirements around security, compliance, and data quality, "Practical Parquet Engineering" goes beyond functionality to cover data governance, encryption, access control, and regulatory mandates like GDPR and HIPAA. Dedicated chapters on validation, testing, and quality management socialize industry-strength patterns for ensuring correctness and resilience. The book culminates in advanced topics, custom engineering extensions, and a diverse suite of case studies from enterprise data lakes, global analytics, IoT, and hybrid-cloud architectures, making it an indispensable resource for data engineers, architects, and technical leaders aiming to future-proof their data platforms with Parquet.
Unlock the full potential of Parquet by mastering the art and science of writing data efficiently and flexibly. In this chapter, you'll journey from foundational writing patterns to advanced encoding strategies, learning how engineering precision at the point of write yields faster queries, lower costs, and a more robust data lake. Whether your use case is petabyte-scale streaming or seamless Python pipelines, discover actionable techniques that transform raw records into highly optimized Parquet files.
The construction of optimal Parquet files is central to achieving a balance between efficient data ingestion, query responsiveness, and overall cost management in distributed data architectures. The central parameters influencing this balance are file size, row group definition, and data block alignment. Each dimension impacts storage access patterns, compression effectiveness, and parallel processing capabilities.
File size is a critical design choice that must carefully consider the underlying storage medium, cluster infrastructure, and query workload characteristics. Larger files improve compression ratios by exploiting redundancy across a broader data scope and reduce metadata overhead by limiting the number of files tracked in the filesystem. However, very large files can impair parallelism during query execution and lead to increased shuffle costs in distributed processing engines such as Apache Spark or Presto.
Recommended Parquet file sizes typically range from 256 MB to 1 GB. This range strikes an effective compromise between compression benefit and parallel read efficiency. The choice within this range should align closely with the average size of data units consumed or produced by downstream processes. For instance, if an analytics job parallelizes workload by input splits of 512 MB, constructing files near this size maximizes throughput by minimizing unnecessary splits or partial file reads.
Fine-tuning file size also takes into account the underlying block size of distributed storage systems such as HDFS or object stores. Matching Parquet file sizes to the storage block size (commonly 128 MB or 256 MB) minimizes the number of blocks accessed per file read, further reducing latency and improving I/O performance.
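As a simple illustration, a target file size can be snapped to a whole multiple of the storage block size so that each file spans complete blocks. The sketch below uses purely illustrative values and a 256 MB block:

# Minimal sketch: choose a Parquet file size that is a whole multiple of the
# storage block size so each file spans complete blocks (values illustrative).
STORAGE_BLOCK_BYTES = 256 * 1024 * 1024      # e.g. HDFS or object-store block
desired_file_bytes = 700 * 1024 * 1024       # rough target from workload sizing

blocks_per_file = max(1, desired_file_bytes // STORAGE_BLOCK_BYTES)
target_file_bytes = blocks_per_file * STORAGE_BLOCK_BYTES   # -> 512 MB here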
Row groups constitute the fundamental unit of I/O in Parquet. Each row group contains a subset of rows and stores column chunk data for those rows contiguously on disk. Proper definition of row groups maximizes the efficacy of columnar storage by enabling predicate pushdown and selective column reads at an efficient granularity.
The row group size is typically set between 50 MB and 512 MB of uncompressed data, often close to the target file size for simplification. Large row groups enhance compression and reduce seek overhead but increase memory requirements during read operations since the entire group must be decompressed. Conversely, small row groups allow finer-grained access but incur metadata and I/O overhead due to more frequent seeks.
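As a concrete sketch, assuming a pyarrow-based writer (which expresses row group size in rows rather than bytes), a byte target can be converted using an estimated average row width. The table contents and the 128 MB target below are illustrative:

import pyarrow as pa
import pyarrow.parquet as pq

# Sketch: pyarrow sizes row groups in rows, so convert a byte target
# (here 128 MB of uncompressed data) using an estimated average row width.
table = pa.table({
    "user_id": list(range(1_000_000)),
    "score": [0.5] * 1_000_000,
})

target_row_group_bytes = 128 * 1024 * 1024
avg_row_bytes = table.nbytes / table.num_rows        # rough estimate
rows_per_group = max(1, int(target_row_group_bytes / avg_row_bytes))

pq.write_table(table, "events.parquet",
               row_group_size=rows_per_group,        # rows per row group
               compression="snappy")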
When deciding row group size, consider memory constraints on query executors and the complexity of expected filter predicates. Row groups should be balanced to allow maximum predicate pruning without causing out-of-memory errors or excessive disk I/O operations.
Row groups are further optimized through intelligent sorting or clustering of data before writing. Grouping rows based on frequently filtered columns ensures that row groups become highly selective, thus accelerating downstream queries. This technique leverages natural data locality to reduce scanned data volume.
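A minimal sketch of this technique, assuming pyarrow and illustrative column names, sorts on a frequently filtered column before writing so that each row group covers a narrow value range:

import pyarrow as pa
import pyarrow.parquet as pq

# Sketch: cluster rows on a frequently filtered column so each row group
# covers a narrow value range and min/max statistics prune effectively.
events = pa.table({
    "tenant_id": ["b", "a", "c", "a", "b", "c"],
    "amount":    [10, 7, 3, 9, 2, 5],
})

clustered = events.sort_by([("tenant_id", "ascending")])

pq.write_table(clustered, "events_clustered.parquet",
               row_group_size=2,          # tiny groups, purely for illustration
               compression="snappy")

Queries filtering on tenant_id can then skip entire row groups based on their min/max statistics.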
Data block alignment addresses the physical layout of Parquet files on the storage system. Proper alignment ensures that each row group and column chunk begins at storage block boundaries, thus minimizing block-level splits and remote fetch operations.
Misaligned data blocks can cause distributed file systems and object stores to retrieve multiple sequential blocks to read a single row group, harming latency and increasing network usage. Defragmenting files through alignment promotes better I/O patterns and lower costs.
Alignment can be controlled at write time by specifying block size and buffer-flushing parameters in the Parquet writer configuration. It is recommended to align row groups and column chunks to the underlying storage block size, typically by padding data chunks or adjusting buffer thresholds.
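For parquet-mr based writers such as Spark, these knobs are typically exposed as Hadoop properties. The sketch below is a hedged example; the exact property names and defaults are assumptions to verify against the writer version in use:

from pyspark.sql import SparkSession

# Sketch for parquet-mr based writers (e.g. Spark): parquet.block.size sets the
# row group ("block") size, parquet.page.size the data page size, and
# parquet.writer.max-padding the padding the writer may insert so the next row
# group can start on a block boundary. Property names are assumptions to verify.
spark = (SparkSession.builder
         .appName("parquet-alignment")
         .config("spark.hadoop.parquet.block.size", str(256 * 1024 * 1024))
         .config("spark.hadoop.parquet.page.size", str(1024 * 1024))
         .config("spark.hadoop.parquet.writer.max-padding", str(8 * 1024 * 1024))
         .getOrCreate())

spark.range(10_000_000).write.mode("overwrite").parquet("/tmp/aligned_demo")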
Moreover, maintaining consistent alignment across a dataset simplifies future compaction or file merging operations by producing predictable file and block offsets.
Efficient ingestion demands rapid file writes and minimal staging. This requirement favors slightly larger files and fewer row groups but must be balanced against query-time considerations. Splitting very large datasets into numerous small files impairs ingestion performance due to amplified overhead from metadata operations and coordination in distributed systems.
Conversely, to optimize query performance, files must be structured to leverage predicate pushdown and parallel I/O effectively. This often translates to moderately sized files with well-laid-out row groups and column statistics, enabling distributed engines to prune reads and minimize data shuffle.
Cost optimization further constrains these choices. Excessive small files lead to high metadata management and cloud storage API request costs. Large files impose higher caching and shuffle network expenses. Data transfer fees in cloud environments also hinge on efficient block-level retrievals enabled by alignment.
The ability to balance these factors relies on a workflow where data producers and consumers agree on a coherent data partitioning, file sizing, and batching strategy. Profiling ingestion pipeline throughput, query execution plans, and storage cost analytics provides feedback loops to iteratively refine Parquet file construction parameters.
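One lightweight feedback mechanism is to inspect the row group layout of files already produced. A sketch assuming pyarrow and a placeholder path:

import pyarrow.parquet as pq

# Sketch: check whether the chosen sizing parameters actually materialized.
# The path is a placeholder.
md = pq.ParquetFile("events.parquet").metadata

print("row groups:", md.num_row_groups)
for i in range(md.num_row_groups):
    rg = md.row_group(i)
    print(f"  group {i}: rows={rg.num_rows}, "
          f"uncompressed={rg.total_byte_size / 1024**2:.1f} MiB")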
Consider a streaming ingestion pipeline on cloud object storage with a 256 MB storage block size and a mix of aggregation-heavy and filter-intensive queries.
parquet_writer_options = {
    'block_size': 256 * 1024 * 1024,        # 256 MB block size to align with storage
    'row_group_size': 128 * 1024 * 1024,    # 128 MB uncompressed row groups
    'compression': 'snappy',                # Balanced performance and compression
    'write_buffer_size': 64 * 1024 * 1024   # Buffer flush size to tune block alignment
}
These settings produce files approximately 512 MB in size composed of 4 row groups, each aligned to a 256 MB block boundary. The compression codec 'snappy' offers fast decompression, critical for query latency.
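A minimal sketch of how such a configuration might be applied in a streaming writer, assuming pyarrow; the batch source, file naming, and thresholds are illustrative:

import pyarrow.parquet as pq

# Sketch: roll to a new file once roughly 512 MB of uncompressed data has been
# appended, writing each incoming batch as its own row group(s).
TARGET_FILE_BYTES = 512 * 1024 * 1024

def write_stream(batches, schema):
    part, writer, written = 0, None, 0
    for batch in batches:                      # iterable of pyarrow.Table objects
        if writer is None:
            writer = pq.ParquetWriter(f"part-{part:04d}.parquet", schema,
                                      compression="snappy")
        writer.write_table(batch)
        written += batch.nbytes
        if written >= TARGET_FILE_BYTES:
            writer.close()
            part, writer, written = part + 1, None, 0
    if writer is not None:
        writer.close()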
Execution metrics in production confirm a 20% improvement in ingestion throughput and a 30% reduction in query scan volume, resulting in tangible cost savings without sacrificing data freshness or query speed.
By rigorously applying these best practices, data architects and engineers ensure Parquet files serve as robust building blocks for performant, cost-effective, and scalable analytics systems.
Apache Parquet employs a variety of encoding strategies that are fundamental to its ability to efficiently store and process large-scale columnar data. These encoding mechanisms cater to different data characteristics, optimizing both storage footprint and processing speed. The principal encoding types include dictionary encoding, delta encoding, run-length encoding (RLE), and bit-packing. Additionally, Parquet supports pluggable compression codecs that can be selected and tuned according to the nature of the dataset and workload requirements.
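As a hedged illustration, assuming pyarrow, dictionary encoding and the compression codec can be selected per column; the column names and codec choices below are illustrative:

import pyarrow as pa
import pyarrow.parquet as pq

# Sketch: dictionary-encode only the low-cardinality column and pick
# compression codecs per column (names and choices are illustrative).
table = pa.table({
    "country": ["DE", "FR", "DE", "US", "FR", "DE"],       # low cardinality
    "payload": ["a" * 32, "b" * 32, "c" * 32,
                "d" * 32, "e" * 32, "f" * 32],              # high cardinality
})

pq.write_table(
    table, "encoded.parquet",
    use_dictionary=["country"],                 # dictionary encoding for this column only
    compression={"country": "snappy",
                 "payload": "zstd"},
)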
Dictionary Encoding is a prominent technique used when a column contains a limited set of distinct values relative to its length. Parquet constructs a...