perf: Linux Profiling GSoC 2024

Google Summer of Code Project ideas for perf, the Linux profiling subsystem.

(Main Linux Foundation GSoC 2024 page)

What is perf?

perf has two components:

  1. kernel support for performance counters, performance monitoring units and tracing;
  2. a tool for accessing the kernel provided data, recording and visualizing it.

perf has had various enhancements such as support for BPF.

Perf community

Maintainers: Peter Zijlstra <peterz at infradead.org>, Ingo Molnar <mingo at redhat.com>, Arnaldo Carvalho de Melo <acme at kernel.org>, Namhyung Kim <namhyung at kernel.org>

Mailing list: linux-perf-users at vger.kernel.org

IRC: #perf on irc.oftc.net

Code Licenses: mostly GPLv2

Wiki: https://perf.wiki.kernel.org/

Mentor contacts: Ian Rogers <irogers+gsoc24 at google dot com>, Namhyung Kim <namhyung at kernel.org>, Arnaldo Carvalho de Melo <acme at kernel.org>

Qualities of a good proposal

  • Contributor has engaged with the community by, for example, writing a patch, contributing to the wiki, or reporting a bug.
  • The time plan for the project is clear and mentions how other commitments the contributor has will be managed alongside the project. Commitments likely vary over the summer. Per GSoC rules, projects are large (350h), medium (175h) or small (90h), so show how you will use the time.
  • There is sufficient detail in the proposal that it is clear the contributor and mentors will be able to get the project done.

Project Proposals

Bring your own proposal

  • complexity: intermediate or hard
  • duration: small, medium or large
  • requirements: machine to work and test on, typically a bare metal (i.e. not cloud) Linux machine. C programming, possibly other languages if interested in things like Rust integration.

If you have your own ideas for how tracing and profiling can be improved in the Linux kernel and perf tool, they are welcome. Some areas that have come up in the last year are better support for more programming languages and new profiling commands such as function latency measurement. The perf tool already provides metrics such as FLOPS and memory bandwidth, so adding a roofline model to determine the bottlenecks of an application would be a possibility.
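As a rough illustration of the roofline idea mentioned above: attainable performance is bounded either by peak compute or by memory bandwidth times arithmetic intensity. The sketch below uses purely hypothetical machine numbers; the `roofline` helper is illustrative, not part of the perf tool.

```python
# Roofline model sketch: attainable performance is the minimum of peak
# compute and (memory bandwidth x arithmetic intensity).
# All machine numbers below are hypothetical, for illustration only.

def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable GFLOP/s for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# A machine with 100 GFLOP/s peak and 25 GB/s memory bandwidth has its
# ridge point at 100/25 = 4 FLOP/byte.
assert roofline(100.0, 25.0, 1.0) == 25.0    # memory bound
assert roofline(100.0, 25.0, 8.0) == 100.0   # compute bound
```

A kernel's measured FLOPS landing well below the roof at its intensity suggests an optimization opportunity.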

New Performance Monitoring Unit (PMU) kernel drivers

  • complexity: intermediate or hard
  • duration: small, medium or large
  • requirements: machine to work and test on, typically a bare metal (i.e. not cloud) Linux machine, plus test hardware for the PMU you are working on. C programming.

Have a computer you really love but can only query the core CPU's PMU? Are there data sheets describing performance monitoring counters that could be exposed through the perf event API? Why not work to add a PMU driver to the Linux kernel and expose those performance counters, or even more advanced features like sampling? Drivers can be added for accelerators, GPUs, data buses, caches, etc. For example, the Raspberry Pi 5 has performance counters only for its core CPUs and not for things like its memory bus.
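Each PMU a kernel driver registers shows up as a directory under sysfs, which is an easy way to see what your machine currently exposes. A minimal sketch (the helper function is illustrative; it returns an empty list where sysfs is unavailable):

```python
import os

# Each registered PMU driver appears as a directory under this sysfs path;
# its "type" file holds the value used in perf_event_attr.type.
EVENT_SOURCE_DIR = "/sys/bus/event_source/devices"

def list_pmus():
    """Return the names of PMUs the kernel currently exposes (may be empty)."""
    try:
        return sorted(os.listdir(EVENT_SOURCE_DIR))
    except FileNotFoundError:
        return []  # sysfs not mounted, or not a Linux system

print(list_pmus())  # e.g. ['breakpoint', 'cpu', 'software', 'tracepoint', ...]
```

A new PMU driver done right would add another entry here, with its events described under the PMU's `events/` and `format/` subdirectories.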

Improved Python integration

  • complexity: intermediate or hard
  • duration: small, medium or large
  • requirements: machine to work and test on, typically a bare metal (i.e. not cloud) Linux machine. C and/or Python programming.

A lot of what makes the perf tool useful is its user interface; however, writing user interfaces in C is tedious and error-prone. Python support is long established within the perf tool, but it could use some TLC. Some examples of work that needs doing are:

  • Making the Python module depend only on libperf. libperf is a library of perf's core components. Currently the Python module uses bits and pieces of code from the perf code base, with other bits stubbed out; making the module depend only on libperf, and cleaning up libperf, would solve this.
  • A standalone perf data file module. Currently, using the perf module to read a perf data file requires the C code. This is a distribution hassle, and reading a file doesn't need the rest of perf. Other projects have written perf data file readers in Python, but they suffer as the perf.data file format evolves. The perf tool's testing can ensure the Python and C implementations stay in sync. Ideally such a module could be installed with tools like pip so it is easy to depend upon.
  • Improved packaging. Most Python scripts in perf live under the perf script command, but some, like flamegraph and gecko, deserve to be top-level. Improving how this is organized should help perf users. The current command-line argument processing for perf script is also messy, as arguments may be for record, report or both.
  • Improved testing. As with perf's C code, good code coverage and testing will ensure the Python integration works well. Current testing is limited to checks such as whether the Python module loads.

With better Python integration it is hoped that tools like perf report can also have a Python equivalent. This would allow TUI toolkits such as Textual (from Textualize) to be used, improving the user experience.

A better Python experience could also improve the gecko profiling experience, or add other data converters, such as for pprof and Chrome's trace event format.
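To give a flavor of what a standalone, pure-Python perf.data reader involves: the file begins with `struct perf_file_header` from the perf sources, i.e. a u64 magic (`PERFILE2` for version 2), the header size, the attr size, and three `{offset, size}` file sections. The sketch below parses only those leading fields (feature bitmaps are skipped) from a synthetic buffer, so it runs without a real perf.data file; all the section values packed in are made up for the demo.

```python
import struct

# Minimal sketch of the start of struct perf_file_header:
# little-endian magic, size, attr_size, then three {offset, size}
# sections (attrs, data, event_types). Feature bitmaps follow; skipped.
HEADER_FMT = "<3Q6Q"

def parse_header(buf):
    fields = struct.unpack_from(HEADER_FMT, buf)
    magic = fields[0].to_bytes(8, "little")
    if magic != b"PERFILE2":
        raise ValueError("not a perf.data v2 file")
    return {
        "size": fields[1],
        "attr_size": fields[2],
        "attrs": (fields[3], fields[4]),
        "data": (fields[5], fields[6]),
        "event_types": (fields[7], fields[8]),
    }

# Synthetic header for demonstration (offsets/sizes are made up):
fake = struct.pack(HEADER_FMT,
                   int.from_bytes(b"PERFILE2", "little"),
                   104, 136, 104, 272, 376, 4096, 0, 0)
hdr = parse_header(fake)
assert hdr["data"] == (376, 4096)  # the data section holds the event records
```

A real module would go on to parse the attrs and data sections, which is exactly the part that drifts in third-party readers as the format evolves.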

Scalability and speed

  • complexity: intermediate or hard
  • duration: small, medium or large
  • requirements: machine to work and test on, typically a bare metal (i.e. not cloud) Linux machine. C programming, multi-threading/pthread library.

The perf tool is largely single-threaded even though it sometimes needs to do something on every CPU in the system. This is embarrassingly parallel, but the tool isn't exploiting it. Work was done to create a work pool mechanism, but it wasn't merged due to latent bugs in memory management. Address sanitizer and reference count checking have since addressed those bugs, but the work pool code still needs to be integrated.
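The pattern the work pool enables can be sketched in a few lines: run the same task once per CPU from a shared pool instead of a single loop. The per-CPU task here is a placeholder; in perf the real work would be things like opening events or synthesizing records per CPU (and in the tool itself this would be C with pthreads, not Python).

```python
import os
from concurrent.futures import ThreadPoolExecutor

def per_cpu_task(cpu):
    return cpu * cpu  # placeholder for real per-CPU work

# One worker per CPU; pool.map fans the task out and collects results
# in order, the shape of an "embarrassingly parallel" per-CPU job.
ncpus = os.cpu_count() or 1
with ThreadPoolExecutor(max_workers=ncpus) as pool:
    results = list(pool.map(per_cpu_task, range(ncpus)))

assert results == [cpu * cpu for cpu in range(ncpus)]
```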

Another improvement: currently the perf report command processes an entire perf.data file before providing a visualization, which can be slow for large files. In contrast, the perf top command gathers data in the background while providing a visualization. Breaking apart the perf report command so that processing happens on a background thread, with the visualization periodically refreshing in the foreground, would let the user interact with partial data during the otherwise slow load.
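The producer/consumer split described above can be sketched with a background thread publishing partial totals that a foreground UI would periodically drain. The "samples" and snapshot interval are hypothetical stand-ins for perf.data records and a screen refresh.

```python
import queue
import threading

samples = list(range(1000))   # hypothetical event stream from a perf.data file
partial = queue.Queue()       # snapshots the foreground would render

def process():
    total = 0
    for s in samples:
        total += s
        if s % 100 == 99:     # publish a partial result every 100 samples
            partial.put(total)
    partial.put(total)        # final, complete result

worker = threading.Thread(target=process)
worker.start()
worker.join()

snapshots = []
while not partial.empty():
    snapshots.append(partial.get())

assert snapshots[-1] == sum(samples)   # the last snapshot is complete
assert snapshots == sorted(snapshots)  # earlier snapshots only ever grow
```

In a real perf report the foreground would redraw from each snapshot while the worker is still running, rather than joining first as this deterministic sketch does.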

One more improvement would be to reduce the number of file descriptors perf record uses with the --threads option. Currently it needs a couple of pipes to communicate with each worker thread. This could be greatly reduced by using eventfd(2) instead of having pipes for each thread.

Data type profiling

  • complexity: intermediate or hard
  • duration: medium or large
  • requirements: physical machine to work and test on (Intel recommended). C programming; understanding of the DWARF format is a plus.

Data type profiling is a new technique to show memory access profiles with type information. See the LWN article for more detail. It's still at an early stage and has a lot of room for improvement. For example, it needs support for C++ and other languages, better integration with other perf commands like annotate and c2c, performance optimization, support for other architectures, and so on. It's OK if you're not familiar with the ELF or DWARF formats.
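The core lookup data type profiling performs can be sketched simply: a sampled memory access lands at some offset into a type, and DWARF-derived layout information maps that offset back to a member. The struct layout below is a hypothetical stand-in for what DWARF would describe.

```python
# Hypothetical layout of a struct, as (member, offset, size) triples,
# standing in for the member descriptions DWARF provides.
layout = [
    ("next", 0, 8),
    ("prev", 8, 8),
    ("refcount", 16, 4),
]

def member_at(offset):
    """Resolve an access offset within the struct to the member it hits."""
    for name, off, size in layout:
        if off <= offset < off + size:
            return name
    return None  # padding, or outside the struct

assert member_at(8) == "prev"       # an access at offset 8 hits 'prev'
assert member_at(18) == "refcount"  # offset 18 falls inside 'refcount'
```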

perf trace and BTF

  • complexity: intermediate or hard
  • duration: medium or large
  • requirements: machine to work and test on. C programming, BPF.

perf trace is similar to strace but much more performant since it doesn't use ptrace. Like strace, it needs to capture and understand the format of syscall arguments. Right now, it has to maintain hand-written format descriptions to pretty-print the syscall args, which is limited and requires manual work. Instead, it could use BTF (BPF Type Format), which has all the type information and is available in most kernels.
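To get a feel for BTF, its blob starts with `struct btf_header` from the kernel UAPI: a u16 magic 0xeB9F, u8 version, u8 flags, then five u32 fields (hdr_len, type_off, type_len, str_off, str_len). The kernel's own BTF is exposed at /sys/kernel/btf/vmlinux; the sketch below parses a synthetic buffer (with made-up section values) so it is self-contained.

```python
import struct

BTF_MAGIC = 0xEB9F
BTF_FMT = "<HBB5I"  # magic, version, flags, hdr_len, type_off/len, str_off/len

def parse_btf_header(buf):
    magic, version, flags, hdr_len, type_off, type_len, str_off, str_len = \
        struct.unpack_from(BTF_FMT, buf)
    if magic != BTF_MAGIC:
        raise ValueError("not a BTF blob")
    return {"version": version, "hdr_len": hdr_len,
            "types": (type_off, type_len), "strings": (str_off, str_len)}

# Synthetic header for demonstration; section offsets/sizes are made up.
fake = struct.pack(BTF_FMT, BTF_MAGIC, 1, 0, 24, 0, 2048, 2048, 512)
hdr = parse_btf_header(fake)
assert hdr["hdr_len"] == 24
assert hdr["types"] == (0, 2048)
```

The type section is what perf trace would walk to pretty-print a syscall argument's struct members instead of relying on hand-written tables.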

gsoc/2024-gsoc-perf.txt · Last modified: 2024/02/08 05:08 by namhyung