realtime:documentation:howto:tools:cpu_shielding, last revised 2017/05/24 10:00 by anna-maria ("config switch does not exist in state of the art kernel versions")
====== CPU shielding using /proc and /dev/cpuset ======
===== Interrupt shielding =====
==== User Space ====
First, make sure that interrupts are not automatically redistributed by the irqbalance daemon, which is started from the irqbalance init script. To stop it for the current session:
<code bash>
$ /etc/init.d/irqbalance stop
</code>
To keep it disabled after the next reboot:
<code bash>
$ chkconfig irqbalance off
</code>
(On systemd-based distributions the equivalent commands are ''systemctl stop irqbalance'' and ''systemctl disable irqbalance''.)
After this you can change the CPU affinity mask of each interrupt:
<code bash>
$ echo hex_mask > /proc/irq/<irq_number>/smp_affinity
</code>
To verify that the affinity mask has been set, read back the contents of the smp_affinity file.
<WRAP center round info 100%>
The mask is applied the next time the interrupt is serviced, so you may not see the change immediately.
</WRAP>
More information can be found in [[https://www.kernel.org/doc/Documentation/IRQ-affinity.txt|IRQ affinity]].
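As a sketch of the step above: the hex mask is a bitmask of allowed CPUs (bit N means CPU N), so it can be built with shell arithmetic. The example below assumes a 4-CPU machine with CPU 0 shielded; the writes to /proc require root and are silently skipped here if they fail:
<code bash>
# Compute a mask that allows CPUs 1-3 and excludes the shielded CPU 0.
ncpus=4                              # assumption: 4-CPU machine
all_mask=$(( (1 << ncpus) - 1 ))     # 0xf = CPUs 0-3
shield=$(( 1 << 0 ))                 # 0x1 = CPU 0, the shielded CPU
hex_mask=$(printf '%x' $(( all_mask & ~shield )))
echo $hex_mask                       # prints: e  (CPUs 1-3)
# Apply the mask to every IRQ (needs root; failures are ignored here).
for irq in /proc/irq/[0-9]*; do
    echo $hex_mask > $irq/smp_affinity 2>/dev/null || true
done
</code>
Note that some interrupts cannot be migrated and the kernel will reject writes that exclude all online CPUs, so checking the files afterwards is still advisable.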
===== Process shielding =====
The kernel has a cpuset feature that allows you to create cpusets for real-time purposes. The kernel interface is based on the cgroup filesystem and is described in [[https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt|cpusets]].
Each cpuset is represented by a directory in the cgroup filesystem containing (on top of the standard cgroup files) the following files describing that cpuset:
<WRAP center round box 100%>
  * cpus: list of CPUs in that cpuset
  * mems: list of Memory Nodes in that cpuset
  * memory_migrate flag: if set, move pages to the cpuset's nodes
  * cpu_exclusive flag: is CPU placement exclusive?
  * mem_exclusive flag: is memory placement exclusive?
  * mem_hardwall flag: is memory allocation hardwalled?
  * memory_pressure: measure of how much paging pressure there is in the cpuset
  * memory_spread_page flag: if set, spread page cache evenly on allowed nodes
  * memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes
  * sched_load_balance flag: if set, load balance within the CPUs of that cpuset
  * sched_relax_domain_level: the searching range when migrating tasks
</WRAP>
In addition, only the root cpuset has the following file:
<WRAP center round box 100%>
  * memory_pressure_enabled flag: compute memory_pressure?
</WRAP>
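Before the cpuset interface can be used, the cpuset filesystem must be mounted; /dev/cpuset is the conventional mount point used throughout this page. A minimal setup sketch (requires root):
<code bash>
# Mount the cpuset pseudo-filesystem (cgroup v1) at /dev/cpuset.
mkdir -p /dev/cpuset
mount -t cgroup -o cpuset cpuset /dev/cpuset
# The per-cpuset control files listed above now appear under /dev/cpuset.
ls /dev/cpuset
</code>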
Here is a quick example of how to use cpusets to reserve one CPU for your real-time process on a 4-CPU machine:
<code bash>
$ mkdir /dev/cpuset/rt0
$ echo 0 > /dev/cpuset/rt0/cpus
$ echo 0 > /dev/cpuset/rt0/mems
$ echo 1 > /dev/cpuset/rt0/cpu_exclusive
$ echo $RT_PROC_PID > /dev/cpuset/rt0/tasks
$ mkdir /dev/cpuset/system
$ echo 1-3 > /dev/cpuset/system/cpus
$ echo 0 > /dev/cpuset/system/mems
$ echo 1 > /dev/cpuset/system/cpu_exclusive
$ for pid in $(cat /dev/cpuset/tasks); do /bin/echo $pid > /dev/cpuset/system/tasks; done
</code>
The final loop uses /bin/echo rather than the shell builtin because, unlike the builtin, it reports write errors; some moves are expected to fail, since per-CPU kernel threads cannot leave their CPU.
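Whether the shielding took effect can be checked from a task's /proc status file, which works for any process and does not require root. A task placed in the rt0 cpuset above should report only CPU 0; shown here for the current shell:
<code bash>
# Every task lists its allowed CPUs in /proc/<pid>/status.
grep Cpus_allowed_list /proc/self/status
</code>
The same information is available per cpuset by reading its cpus and tasks files, e.g. ''cat /dev/cpuset/rt0/tasks''.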