Appears in ACM Operating Systems Review, 1993. Supersedes Carnegie Mellon University SCS Technical Report CMU-CS-93-113.
R. Hugo Patterson, Garth A. Gibson, M. Satyanarayanan
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
This paper focuses on extending the power of caching and prefetching to reduce file read latencies by exploiting application-level hints about future I/O accesses. We argue that systems that disclose high-level knowledge can transfer optimization information across module boundaries in a manner consistent with sound software engineering principles. Such Transparent Informed Prefetching (TIP) systems provide a technique for converting the high throughput of new technologies such as disk arrays and log-structured file systems into low latency for applications. Our preliminary experiments show that, even without a high-throughput I/O subsystem, TIP reduces execution time by up to 30% for applications obtaining data from a remote file server and by up to 13% for applications obtaining data from a single local disk. These experiments indicate that greater performance benefits will be available when TIP is integrated with low-level resource management policies and highly parallel I/O subsystems such as disk arrays.
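The abstract does not show the hint interface itself. As one illustration of the disclosure idea, the following C sketch assumes a hypothetical tip_disclose() call with which an application announces, in order, the files it intends to read before issuing any reads; the name and signature are placeholders for illustration only, not the interface defined in the paper.

/* Minimal sketch of hint disclosure, assuming a hypothetical
 * tip_disclose() interface; the actual TIP interface is described
 * in the full paper and is not reproduced here. */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical hint call: discloses which files this application
 * will read, and in what order, so the system can prefetch them. */
static void tip_disclose(const char *const *paths, int npaths)
{
    /* Placeholder: a real informed-prefetching system would pass
     * these hints to the file system (e.g., via a system call). */
    (void)paths;
    (void)npaths;
}

int main(int argc, char *argv[])
{
    /* The application knows its future accesses (here, the argv
     * order) and discloses them before reading any data. */
    tip_disclose((const char *const *)&argv[1], argc - 1);

    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "rb");
        if (!f) { perror(argv[i]); continue; }
        char buf[8192];
        while (fread(buf, 1, sizeof buf, f) > 0)
            ;  /* process data; reads now hit prefetched buffers */
        fclose(f);
    }
    return 0;
}

The point of the sketch is the ordering: hints are disclosed up front, so the system can overlap prefetch I/O with the application's subsequent sequential processing.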