Second Workshop on Hot Topics in Autonomic Computing. June 15, 2007. Jacksonville, FL. Supersedes Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-07-101, January 2007.
Eno Thereska, Dushyanth Narayanan*, Anastassia Ailamaki, Gregory R. Ganger
Dept. Electrical and Computer Engineering
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
*Microsoft Research - Cambridge, UK
To be effective for automation in practice, system models used for performance prediction and behavior checking must be robust: they must cope with component upgrades, misconfigurations, and workload-system interactions that were not anticipated. This paper promotes making models self-evolving, such that they continuously evaluate their own accuracy and adjust their predictions accordingly. Such self-evaluation also enables confidence values to accompany predictions, including identification of situations where no trustworthy prediction can be produced. We believe that, with a combination of expectation-based and observation-based techniques, such self-evolving models can be achieved and used as a robust foundation for tuning, problem diagnosis, capacity planning, and other administration tasks.
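The idea of a model that evaluates its own accuracy and attaches confidence to its predictions can be illustrated with a minimal sketch. This is not the paper's implementation; the wrapper class, window size, and error threshold below are all illustrative assumptions. The sketch wraps an arbitrary base predictor, tracks recent relative prediction error against observed outcomes, and either returns a prediction with a confidence value or refuses to predict when recent error is too high.

```python
from collections import deque


class SelfEvaluatingModel:
    """Hypothetical sketch of a self-evolving model: wraps a base
    predictor, continuously compares its predictions to observations,
    and attaches a confidence value to each new prediction."""

    def __init__(self, predict_fn, window=50, max_rel_error=0.5):
        self.predict_fn = predict_fn        # base model (an assumption here)
        self.errors = deque(maxlen=window)  # rolling window of relative errors
        self.max_rel_error = max_rel_error  # threshold for refusing to predict

    def observe(self, x, actual):
        """Self-evaluation step: score a prediction against the observation."""
        predicted = self.predict_fn(x)
        if actual != 0:
            self.errors.append(abs(predicted - actual) / abs(actual))

    def predict(self, x):
        """Return (prediction, confidence); confidence is None before any
        self-evaluation, and the model declines when it deems itself
        untrustworthy."""
        value = self.predict_fn(x)
        if not self.errors:
            return value, None              # no evidence yet: no confidence claim
        mean_err = sum(self.errors) / len(self.errors)
        if mean_err > self.max_rel_error:
            return None, 0.0                # no trustworthy prediction available
        return value, 1.0 - mean_err        # confidence shrinks as error grows
```

In this framing, `observe` is the observation-based feedback loop and the refusal branch in `predict` corresponds to identifying situations where no trustworthy prediction can be produced.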
KEYWORDS: behavioral models, relearning, statistical methods, queuing laws
FULL WORKSHOP PAPER: pdf
FULL TR: pdf