A Model for Application Slowdown Estimation in On-Chip Networks and Its Use for Improving System Fairness and Performance

International Conference on Computer Design (ICCD), October 3-5, 2016, Phoenix, USA.

Xiyue Xiang† Saugata Ghose‡ Onur Mutlu*‡ Nian-Feng Tzeng†

† University of Louisiana at Lafayette
‡ Carnegie Mellon University
* ETH Zürich


In a network-on-chip (NoC) based system, the NoC is a resource shared among multiple processor cores. Network requests generated by different applications running on different cores can interfere with each other, slowing down the performance of each application. The degree of slowdown introduced by this interference varies for each application, as it depends on (1) the sensitivity of the application to NoC performance, and (2) the network traffic induced by other applications running concurrently on the system. In modern systems, NoC interference is largely uncontrolled, and therefore some applications unfairly slow down much more than others. This can degrade overall system performance, prevent fair progress of different applications, and cause starvation of unfairly-treated applications. Our goal is to accurately model the slowdown of each application executing on the system due to NoC interference at runtime, and to use this information to improve system performance and reduce unfairness.

To this end, we propose the NoC Application Slowdown (NAS) Model, the first online model that accurately estimates how much network delays due to interference contribute to the overall stall time of each application. The key idea of NAS is to determine how the delays induced at each level of network data transmission overlap with each other, and to use the overlap information to calculate the net impact of the delays on application stall time. Our model determines the application slowdowns at runtime with a very low error rate, averaging 4.2% over 90 multiprogrammed workloads for an 8×8 mesh network. We use NAS to develop Fairness-Aware Source Throttling (FAST), a mechanism that employs slowdown predictions to control the network injection rates of applications in a way that minimizes system unfairness. Our results over a variety of multiprogrammed workloads show that FAST improves average system fairness and performance by 9.5% and 5.2%, respectively.
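The key idea behind NAS, that overlapping delays must not be double-counted when computing their net contribution to stall time, can be illustrated with a small sketch. The code below is an assumption-laden simplification, not the paper's actual NAS implementation: it models each delay as a (start, end) interval and computes the length of the union of the intervals, so that overlapped portions count only once.

```python
# Illustrative sketch only (not the paper's NAS implementation):
# delays at different levels of network data transmission may overlap
# in time, so their net contribution to application stall time is the
# length of the UNION of the delay intervals, not their plain sum.

def net_stall_time(delay_intervals):
    """Return the total time covered by the union of (start, end)
    delay intervals, i.e., the net stall contribution after
    removing overlap between delays."""
    total = 0
    last_end = float("-inf")
    for start, end in sorted(delay_intervals):
        if start > last_end:
            # Disjoint interval: count its full length.
            total += end - start
            last_end = end
        elif end > last_end:
            # Partial overlap: count only the non-overlapped excess.
            total += end - last_end
            last_end = end
    return total

# Two overlapping delays [0,10) and [5,12), plus a disjoint delay [20,25).
# A naive sum gives 10 + 7 + 5 = 22 cycles, but the net stall is only
# 12 + 5 = 17 cycles because of the overlap.
print(net_stall_time([(0, 10), (5, 12), (20, 25)]))  # prints 17
```

In this toy model, the gap between the naive sum (22) and the union length (17) is exactly the overlap that an interference model must discount to avoid overestimating an application's slowdown.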





© 2017. Last updated 30 March, 2017