Date of Award
Spring 5-2018
Document Type
Thesis
Degree Name
Master of Science in Information Systems (MSIS)
First Advisor
Christopher Olson
Second Advisor
David Bishop
Third Advisor
Stephen Kresbach
Abstract
The central question this thesis investigates is whether compression performance, in both hardware and software, is at, approaching, or will ever reach a point where real-time compression of cached data in large data sets becomes viable as a means of improving hit ratios and overall throughput.
The problem identified is that storage access cannot keep up with application and user demands, while cache (RAM) is too small to hold full data sets. A literature review surveys several existing techniques for reducing or optimizing storage I/O to maximize the performance of the storage medium; none of the techniques found precludes, or is mutually exclusive with, the hypothesis proposed here.
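To make the trade-off concrete, one illustrative model (not necessarily the exact formulation used in the thesis) is the following: suppose compressing cached entries at ratio r > 1 raises effective cache capacity from C to rC, and thereby the hit ratio from h to h_c, at the price of a decompression latency t_dec added to each hit. The average access time under compressed caching is then

T_{avg} = h_c \, (t_{hit} + t_{dec}) + (1 - h_c) \, t_{io}

and compressed caching breaks even when

h_c \, (t_{hit} + t_{dec}) + (1 - h_c) \, t_{io} \le h \, t_{hit} + (1 - h) \, t_{io},

that is, when the hits gained from the larger effective cache save more storage I/O time than the added decompression costs.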
The methodology benchmarks three popular compressors that meet the criteria for viability: zlib, lz4, and zstd. Common storage devices are also benchmarked so that the costs of both I/O and compression operations can be charted and break-even points identified under various circumstances.
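As an illustration of the kind of measurement involved, below is a minimal micro-benchmark sketch in Python, assuming the standard-library zlib module plus the third-party lz4 (python-lz4) and zstandard packages; the sample buffer, compression levels, and repetition count are placeholder choices, not the thesis's actual test harness.

# Hypothetical sketch: compression ratio and throughput for zlib, lz4, zstd.
import time
import zlib
import lz4.frame
import zstandard

def time_op(fn, reps=5):
    # Best wall-clock time over several repetitions reduces timing noise.
    best = float("inf")
    for _ in range(reps):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# A moderately redundant buffer standing in for cached data pages.
data = b"example cached record, moderately redundant payload\n" * 20000

zc = zstandard.ZstdCompressor(level=3)
zd = zstandard.ZstdDecompressor()

codecs = {
    "zlib": (lambda d: zlib.compress(d, 6), zlib.decompress),
    "lz4":  (lz4.frame.compress, lz4.frame.decompress),
    "zstd": (zc.compress, zd.decompress),
}

mib = len(data) / 2**20
for name, (comp, decomp) in codecs.items():
    blob = comp(data)
    ct = time_op(lambda: comp(data))
    dt = time_op(lambda: decomp(blob))
    print(f"{name}: ratio {len(data)/len(blob):.2f}, "
          f"compress {mib/ct:.0f} MiB/s, decompress {mib/dt:.0f} MiB/s")

Comparing the measured decompression throughput against a device's benchmarked read throughput locates the break-even point: once a compressor can decompress faster than the medium can deliver the bytes it saved, caching compressed data becomes a net win.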
The results indicate that modern CISC processors and compressors are already approaching trade-off viability, and that FPGAs and ASICs could reduce the remaining overhead by pipelining compression, nearly eliminating the cost side of the trade-off and leaving mostly benefit.
Recommended Citation
Harper, Kyle, "Viability of Time-Memory Trade-Offs in Large Data Sets" (2018). Masters Theses & Doctoral Dissertations. 400.
https://scholar.dsu.edu/theses/400