On an embedded Linux system, I want to run a process (or one of its threads) pinned to a specific core.
I found the API functions sched_setaffinity and pthread_setaffinity_np for that.
The manpage for sched_setaffinity references the page for cpuset, which states that if CPUSET is supported, there should be a "nodev cpuset" entry in /proc/filesystems.
I now have two embedded systems here: one with kernel 4.1 running on a dual-core i.MX6, and one with kernel 3.10 (Linux4Tegra) running on a quad-core Tegra TK1 board.
On both systems, the "nodev cpuset" entry is *not* present.
Also, /proc/config.gz on both systems shows CONFIG_CPUSETS disabled.
But: on the kernel 4.1 system, pinning my process to core 1 works and has a visible effect.
On the kernel 3.10 system, sched_setaffinity fails with EINVAL ("Invalid argument"). Since I only masked it to core 1 of a 4-core system, that argument shouldn't be invalid; unless something else is wrong?
Because I want to use the TK1 board (kernel version frozen at 3.10 because of the NVIDIA drivers), I was considering building a new Linux image with the CPUSET option enabled (using the board manufacturer's Yocto-based tooling, which I have never done before).
But I now doubt that this effort will succeed, because, given the apparent contradiction outlined above, my understanding of what a Linux build needs in order to support assigning CPU affinities must be wrong.
Can someone explain how this actually works?
Background:
Say I have a program on an embedded Linux system with a dual-core ARM processor that does comparatively heavy data shoveling, receiving a lot of UDP packets. To prevent core 0 from ever maxing out at peaks and thus dropping packets, I'd like to move my system calls (mainly just recv(socketFd...)) to core 1, so that core 0 does whatever the system does anyway when UDP packets arrive and core 1 takes care of the rest. On a quad core, further processing would then go to cores 2 and 3, so that the first two stay stable.