Towards a Benchmarking Suite for Kernel Tuners

Authors

TØRRING Jacob O., VAN WERKHOVEN Ben, PETROVIČ Filip, WILLEMSEN Floris-Jan, FILIPOVIČ Jiří, ELSTER Anne C.

Year of publication 2023
Type Article in Proceedings
Conference 2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
MU Faculty or unit

Institute of Computer Science

DOI http://dx.doi.org/10.1109/IPDPSW59300.2023.00124
Keywords autotuning; benchmarking
Description As computing systems become more complex, combining CPUs and GPUs, it becomes increasingly difficult for programmers to keep their code optimized as the hardware is updated. Autotuners try to alleviate this by hiding as many architecture-specific optimization details as possible from the end user, so that the code can be used efficiently across different generations of systems. Several autotuning frameworks have emerged, but comparative analyses of these related works are scarce, owing to the significant manual effort required to port a tunable kernel from one tuner to another. In this article we introduce a new benchmark suite for evaluating the performance of optimization algorithms used by modern autotuners targeting GPUs. The suite contains tunable GPU kernels that are representative of real-world applications, allowing for comparisons between optimization algorithms and the examination of code optimizations, search space difficulty, and performance portability. Our framework facilitates easy integration of new autotuners and benchmarks by defining a shared problem interface. Our benchmark suite is evaluated on five characteristics: convergence rate, local minima centrality, optimal speedup, Permutation Feature Importance (PFI), and performance portability. The results show that optimization parameters greatly impact performance and demonstrate the need for global optimization. The importance of each parameter is consistent across GPU architectures; however, the specific values need to be optimized for each architecture. Our portability study highlights the crucial importance of autotuning each application for a specific target architecture. The results reveal that simply transferring the optimal configuration from one architecture to another can yield performance ranging from 58.5% to 99.9% of the optimum, depending on the GPU architecture. This highlights the importance of autotuning in modern computing systems and the value of our benchmark suite in facilitating the study of optimization algorithms and their effectiveness in achieving optimal performance for specific target architectures.
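The shared problem interface mentioned in the abstract is not detailed on this page. As an illustration only, such an interface might be sketched along the following lines, where every name (`TunableKernel`, `ToyKernel`, `brute_force`) is hypothetical and the toy "runtime" model stands in for an actual GPU measurement:

```python
from abc import ABC, abstractmethod
from itertools import product


class TunableKernel(ABC):
    """Hypothetical shared interface: each benchmark exposes its tunable
    parameters and a way to measure one configuration, so any optimizer
    can be plugged in without per-tuner porting effort."""

    @abstractmethod
    def tunable_parameters(self) -> dict:
        """Map each parameter name to its list of allowed values."""

    @abstractmethod
    def run(self, config: dict) -> float:
        """Run the kernel with `config`; return the measured runtime."""


class ToyKernel(TunableKernel):
    # Stand-in benchmark: the "runtime" is a simple analytic function of
    # the configuration, so the interface can be exercised without a GPU.
    def tunable_parameters(self):
        return {"block_size": [32, 64, 128, 256], "unroll": [1, 2, 4]}

    def run(self, config):
        return abs(config["block_size"] - 128) / 32 + 1.0 / config["unroll"]


def brute_force(kernel: TunableKernel):
    """A trivial 'optimizer' written against the shared interface:
    exhaustively evaluate every configuration in the search space."""
    params = kernel.tunable_parameters()
    best_config, best_time = None, float("inf")
    for values in product(*params.values()):
        config = dict(zip(params.keys(), values))
        t = kernel.run(config)
        if t < best_time:
            best_config, best_time = config, t
    return best_config, best_time


best_config, best_time = brute_force(ToyKernel())
print(best_config, best_time)  # the toy optimum: block_size=128, unroll=4
```

More sophisticated search strategies (random search, Bayesian optimization, evolutionary methods) would implement the same loop body against `tunable_parameters()` and `run()`, which is what makes cross-tuner comparison tractable.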