Benchmarking in High Performance Computing (HPC) is the act of measuring the performance of software, or parts thereof, on computers with the goal of producing comparable numbers. These comparable numbers, also called metrics, allow computers to be compared at the hardware level, essentially answering the question "is my computer better than yours?", and at the software level, answering the question "is my software better than yours?". Benchmarks in computer science have been around for a long time, as countries, states, institutes, companies, and others compete to have the best computers and software; see for example the Top500 (www.top500.org), the list of the fastest public supercomputers in the world. Benchmarks are the basis for comparing computers and software.
Many people, especially from the gaming community, have already experienced benchmarking on their local computer. Many games, as well as commercial and free software packages, come with benchmarks. These allow users to compare how their computer performs against those of others, or help dial in the perfect settings for the best experience. Every time one uses these tools, one is either looking to improve performance or making sure performance has not degraded.
In HPC, benchmarking serves the same purpose. It allows us to test our hardware and software, and to compare computers and software. We generally go beyond this, though: we keep logs of the measured performance metrics so that we can see how performance evolves over time, and why. This helps us stay on track in developing faster software that makes efficient use of faster hardware.
Keeping logs of performance evolution also helps us demonstrate to our sponsors and to the public that our work has, or will have, tangible impact, and it gives us a basis for arguing for our needs. This supports the efficient procurement of grants and other funding streams, and it allows the European HPC environment to thrive and remain competitive in a world driven by HPC.