ECP project optimizes lossy compression methods to manage big science data volumes

Scientists working on the VeloC-SZ project have optimized SZ, an error-bounded, prediction-based lossy compressor. SZ reduces dataset size by one order of magnitude or more while meeting users’ speed and accuracy needs by identifying and storing the most pertinent data during simulations and experiments. Evaluations with real-world HPC datasets from the HACC cosmology simulation, crystallography imaging (EXAFEL), atomic and molecular electronic structure calculations (GAMESS), and atomistic capability research (EXAALT) showed that the optimized SZ improves the compression ratio by up to 46% over the second-best compressor while respecting the same user-specified accuracy, and improves parallel input/output (I/O) performance by up to 40%. The researchers’ results were published in HPDC ’20: Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing (June 2020).
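To make “error-bounded” concrete, the sketch below (an illustration by this article, not code from the SZ library; the function name and types are hypothetical) checks the point-wise absolute-error guarantee after a compress/decompress round trip: every reconstructed value must stay within the user-specified bound of the original value.

#include <math.h>
#include <stddef.h>

/* Illustrative only (not SZ source): verify the point-wise absolute error
 * bound that an error-bounded lossy compressor promises. Every decompressed
 * value must lie within eps of the corresponding original value. */
int within_abs_error_bound(const double *orig, const double *decomp,
                           size_t n, double eps)
{
    double max_err = 0.0;
    for (size_t i = 0; i < n; i++) {
        double err = fabs(orig[i] - decomp[i]);
        if (err > max_err)
            max_err = err;
    }
    return max_err <= eps;  /* 1 if the user's accuracy target is met */
}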

The Exascale Computing Project has long funded research to develop computational strategies that balance the large volumes of data produced by big science against the storage space and I/O limitations inherent in today’s supercomputers. These researchers and others are targeting lossy compression methods, which reduce data volume while controlling data distortion according to user-specified error bounds, because such methods can handle exascale data volumes.

Reducing dataset size by sampling or decimation is a seemingly simple but suboptimal solution: SZ outperforms these methods in both compression performance (ratio and accuracy) and speed. SZ reduces scientific datasets by bounding the compression error, either point-wise or statistically. The researchers used second-order regression and second-order Lorenzo predictors to improve prediction accuracy and conducted a comprehensive a priori compression quality analysis, in addition to evaluating SZ performance on multiple datasets. SZ is implemented for CPUs, GPUs, and FPGAs, and its GPU implementation, which reaches approximately 40 GB/s of compression throughput, responds directly to the heterogeneous architectures of exascale systems. Users can run larger problems, accelerate execution, reduce their applications’ storage footprint, and save more pertinent data for in situ or offline analysis.
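The prediction-plus-error-bounding idea can be sketched in a few lines. The simplified 1-D example below is this article’s illustration, not the SZ implementation (which uses multidimensional Lorenzo and regression predictors, handles unpredictable values separately, and follows quantization with entropy and lossless coding): each value is predicted from already-reconstructed neighbors, and the prediction residual is quantized so the point-wise error never exceeds the bound eps.

#include <math.h>
#include <stddef.h>

/* Simplified 1-D sketch of prediction-based, error-bounded quantization.
 * pred is a two-point, Lorenzo-style extrapolation; eps is the user's
 * absolute error bound. Quantization codes go to quant[], and recon[]
 * holds the reconstruction the decompressor will also compute. */
void quantize_with_bound(const double *x, size_t n, double eps,
                         long *quant, double *recon)
{
    for (size_t i = 0; i < n; i++) {
        /* Predict from already-reconstructed neighbors so compressor and
         * decompressor stay in lockstep without needing the original data. */
        double pred;
        if (i >= 2)      pred = 2.0 * recon[i - 1] - recon[i - 2];
        else if (i == 1) pred = recon[0];
        else             pred = 0.0;

        /* Linear-scale quantization of the residual: each code covers an
         * interval of width 2*eps, so the reconstruction error is at most
         * eps (up to floating-point rounding). */
        quant[i] = lround((x[i] - pred) / (2.0 * eps));
        recon[i] = pred + 2.0 * eps * (double)quant[i];
    }
    /* The quant[] stream is dominated by small codes and compresses well
     * under a subsequent entropy-coding and lossless pass. */
}

Predicting from reconstructed rather than original values is what lets the decompressor reproduce the same predictions and hence the same reconstruction, using only the quantization codes.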

Zhao, Kai, Sheng Di, Xin Liang, Sihuan Li, Dingwen Tao, Zizhong Chen, and Franck Cappello. 2020. “Significantly Improving Lossy Compression for HPC Datasets with Second-Order Prediction and Parameter Optimization.” In HPDC ’20: Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing, June 2020. doi:10.1145/3369583.3392688.