This patch adds a test based on real-world usage of runc hooks
(libnvidia-container): we verify that mounting a library inside
a container and running ldconfig succeeds.
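Conceptually, the hook does something like the following (the library
name and rootfs path are illustrative, not the actual test code):

    # bind-mount a host library into the container's rootfs, then run
    # ldconfig against that rootfs so the linker cache picks it up
    mount --bind /usr/lib/libfoo.so.1 "$ROOTFS/lib/libfoo.so.1"
    ldconfig -r "$ROOTFS"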
Signed-off-by: Renaud Gaubert <rgaubert@nvidia.com>
1. When using `runc`, we should check `$status`, not `$?` (see the
   example below).
2. Before checking the exit code, (try to) show errors from the CRIU
   log.
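For illustration, the difference in a bats test looks roughly like
this:

    runc state test_busybox
    # wrong: $? is the exit status of the bats helper itself, which
    # is almost always 0 regardless of what runc returned
    # [ "$?" -eq 0 ]
    # right: the helper stores runc's exit code in $status
    [ "$status" -eq 0 ]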
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
For some reason, the runc systemd drivers (both v1 and v2) never set
the systemd unit property named `CPUQuotaPeriod` (known as
`CPUQuotaPeriodUSec` on dbus and in `systemctl show` output).
Set it, and add a check to all the integration tests. The check is not
entirely trivial: when the property is not set, the value is shown as
"infinity", but when it is set to the same (default) value, it is
shown as "100ms". So, when we expect 100ms (period = 100000 us), we
have to _also_ check for "infinity".
[v2: add systemd version checks since CPUQuotaPeriod requires v242+]
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Note that the representation of the unlimited value was changed in
systemd 227:
systemd >= 227: "infinity"
systemd <= 226: 18446744073709551615
See e.g. 03a7b521e3 (diff-423c8c1eeb2ef5b08849c3c30b7e53aeR558).
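So a version-agnostic check has to accept both representations, e.g.:

    [ "$value" = "infinity" ] || [ "$value" = "18446744073709551615" ]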
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Add four "corner case" tests checking that the CPU period/quota can be
set or updated even when neither CPU quota nor CPU period was
previously set.
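One such corner case, sketched (the container name is illustrative):

    # neither quota nor period is set in config.json
    runc run -d --console-socket "$CONSOLE_SOCKET" test_ct
    [ "$status" -eq 0 ]
    # setting only the quota must still work
    runc update test_ct --cpu-quota 30000
    [ "$status" -eq 0 ]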
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Also, enable the tests that set quota and period separately when the
systemd cgroup driver is used, as commit 32746fb334
("update: do not overwrite old cpu quota/period") made it possible
to do so.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Fix #2046
Previously, the test failed with EINVAL while writing 500001 to
`/sys/fs/cgroup/cpu,cpuacct/runc-cgroups-integration-test/test-cgroup/cpu.rt_runtime_us`,
because the parent `/sys/fs/cgroup/cpu,cpuacct/runc-cgroups-integration-test/cpu.rt_runtime_us`
was initialized to 0.
The issue had not been caught by the Ubuntu 18.04 CI because it does
not support rt.
Tested on Ubuntu 20.04.
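In essence, the fix is (paths taken from the error above):

    # a child cgroup can not get more rt runtime than its parent,
    # so the parent's budget has to be set up first
    echo 500001 > /sys/fs/cgroup/cpu,cpuacct/runc-cgroups-integration-test/cpu.rt_runtime_us
    echo 500001 > /sys/fs/cgroup/cpu,cpuacct/runc-cgroups-integration-test/test-cgroup/cpu.rt_runtime_us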
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
In case swap cgroup control is not available, the "events oom" test
fails with the following error:
> # not ok 30 events oom
> # (in test file tests/integration/events.bats, line 134)
> # `[ "$status" -eq 0 ]' failed
> # <....>
> # runc run -d --console-socket /tmp/console.sock test_busybox (status=1):
> # time="2020-05-29T02:10:20Z" level=warning msg="signal: killed"
> # time="2020-05-29T02:10:20Z" level=error msg="container_linux.go:353: starting container process caused: process_linux.go:437: container init caused: process_linux.go:403: setting cgroup config for procHooks process caused: failed to write \"33554432\" to \"/sys/fs/cgroup/memory/test_busybox/memory.memsw.limit_in_bytes\": open /sys/fs/cgroup/memory/test_busybox/memory.memsw.limit_in_bytes: permission denied"
When I try to run the test without setting the swap limit, the shell
process still gets killed, but the test hangs. I am not sure why, but
realistically this test is hard to perform without the swap limit, so
let's require cgroup swap for it.
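Roughly, assuming the suite's requires() helper knows how to skip on a
missing feature:

    @test "events oom" {
      # skip unless the memory+swap cgroup controller is usable
      requires cgroups_swap
      runc run -d --console-socket "$CONSOLE_SOCKET" test_busybox
      [ "$status" -eq 0 ]
      # ... the rest of the oom test
    }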
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
For v2, mem+swap is always present. For v1, check for it once and set
a variable which is used below (see the sketch that follows).
This also removes CGROUP_MEMORY for the v2 case, since it is no longer
used.
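A sketch of the v1 probe (the variable name is hypothetical):

    # on v1, mem+swap control is optional; probe for it once
    HAVE_SWAP=no
    if [ -f "${CGROUP_MEMORY_BASE_PATH}/memory.memsw.limit_in_bytes" ]; then
        HAVE_SWAP=yes
    fi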
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
The "unlimited" value is the same for memory and memory+swap,
so let's use SYSTEM_MEM for both.
In fact, it was already used in one place to check swap, probably due to
a typo.
This also fixes the following failure on a cgroup v1 system without
mem+swap control (Ubuntu 19.04):
> # not ok 78 update cgroup v1/v2 common limits
> # (in test file tests/integration/update.bats, line 72)
> # `SYSTEM_MEM_SWAP=$(cat "${CGROUP_MEMORY_BASE_PATH}/$MEM_SWAP")' failed
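The idea, roughly (cgroup v1 file name shown):

    # "unlimited" is the same value for memory and mem+swap,
    # so reading the memory limit once is sufficient
    SYSTEM_MEM=$(cat "${CGROUP_MEMORY_BASE_PATH}/memory.limit_in_bytes")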
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
1. __runc does not set $status, so the check was misleading.
2. Add `set -eux` to the nest.sh script so we error out early and can
   see what is going on (a rough sketch of the resulting script
   follows this list).
3. Doing `echo +io > cgroup.controllers` gives an error on my
   machine ("sh: write error: Operation not supported"). It is
   probably fine to just enable the pids controller.
4. Add a status check for `runc exec nest.sh`.
5. Remove the second check of the cgroup.threads contents -- it was
   already checked earlier (in the output of the nest.sh script).
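For reference, a rough shape of such a nesting script after these
changes (not the actual nest.sh):

    set -eux
    # cgroup v2: a cgroup with member processes can not delegate
    # controllers, so move ourselves into a child cgroup first
    mkdir /sys/fs/cgroup/nested
    echo 0 > /sys/fs/cgroup/nested/cgroup.procs
    # controllers are delegated via cgroup.subtree_control;
    # the pids controller alone is enough here
    echo +pids > /sys/fs/cgroup/cgroup.subtree_control
    cat /sys/fs/cgroup/nested/cgroup.threads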
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This is a quick-n-dirty fix for the regression introduced by commit
06d7c1d, which made it impossible to set only CpuQuota
(without CpuPeriod). It partially reverts the above commit
and adds a test case.
The proper fix will follow.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
... and mem+swap is not explicitly set otherwise.
This ensures compatibility with the cgroupv1 controller, which
interprets things this way.
With this fixed, we can finally enable swap tests for cgroupv2.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Currently, both systemd cgroup drivers (v1 and v2) only set the
"TasksMax" unit property if the value is > 0, so there is no
way to update the limit to -1 / unlimited / infinity / max.
Since the systemd driver is backed by the fs driver, and both fs and
fs2 set the limit of -1 properly, it works, but systemd still has
the old value:
# runc --systemd-cgroup update $CT --pids-limit 42
# systemctl show runc-$CT.scope | grep TasksMax
TasksMax=42
# cat /sys/fs/cgroup/system.slice/runc-$CT.scope/pids.max
42
# ./runc --systemd-cgroup update $CT --pids-limit -1
# systemctl show runc-$CT.scope | grep TasksMax=
TasksMax=42
# cat /sys/fs/cgroup/system.slice/runc-$CT.scope/pids.max
max
Fix by changing the condition to allow -1 as a valid value.
NOTE: other negative values are still ignored by the systemd drivers
(as before). I am not sure whether this is correct, or whether we
should return an error instead.
A test case is added.
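With the fix, the same sequence ends with systemd and the fs driver in
agreement:

    # ./runc --systemd-cgroup update $CT --pids-limit -1
    # systemctl show runc-$CT.scope | grep TasksMax=
    TasksMax=infinity
    # cat /sys/fs/cgroup/system.slice/runc-$CT.scope/pids.max
    max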
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
1. Do not allow setting quota without period, or period without quota,
   as we won't be able to calculate the new value for
   CPUQuotaPerSecUSec otherwise (see the derivation below).
2. Do not ignore setting quota to -1 when a period is not set.
3. Update the test case accordingly.
Note that systemd value checks will be added in the next commit.
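For reference, the derivation that needs both values (numbers
illustrative):

    # CPUQuotaPerSecUSec = quota / period, scaled to one second;
    # e.g. a quota of 50000 us per 100000 us period is 50% of a CPU,
    # i.e. 500000 us per second
    echo $((50000 * 1000000 / 100000))   # 500000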
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Such containers should remain paused after the update. This has
historically been true, but this helps ensure that the systemd cgroup
changes (freezing the container during SetUnitProperties) don't break
this behaviour.
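A sketch of the check, assuming the suite's testcontainer helper
(which asserts on the state runc reports):

    runc pause test_ct
    [ "$status" -eq 0 ]
    runc update test_ct --memory 33554432
    [ "$status" -eq 0 ]
    # the container must still be paused, not running or stopped
    testcontainer test_ct paused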
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Unfortunately, runc update does not support setting devices rules
directly, so we have to trigger the update indirectly, by modifying a
different rule (which happens to trigger a devices update).
Signed-off-by: Aleksa Sarai <asarai@suse.de>
1. The command `runc checkpoint --lazy-pages --status-fd $FD` actually
accepts a file name as $FD. Make it accept a file descriptor,
as its name implies and the documentation states.
In addition, since runc itself does not use the result of the CRIU
status fd, remove the code which relays it, and pass the fd directly
to CRIU.
Note 1: runc should close this file descriptor itself after passing it
to criu, otherwise whoever waits on it might wait forever.
Note 2: due to the way criu swrk consumes the fd (it reopens
/proc/$SENDER_PID/fd/$FD), runc can't close it as soon as criu swrk has
started. There is no good way to know when criu swrk has reopened the
fd, so we assume that as soon as we have received something back, the
fd is already reopened.
2. Since the meaning of --status-fd has changed, the test case using
it needs to be fixed as well.
Modify the lazy migration test to remove "sleep 2", actually waiting
for the lazy page server to be ready (a rough sketch follows the list
below).
While at it,
- remove the double fork (a shell background process is
  sufficient here);
- check the exit code for "runc checkpoint" and "criu lazy-pages";
- remove the check for no errors in dump.log after restore, as we
are already checking its exit code.
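A rough sketch of the resulting flow (ports, paths, and fd wiring are
illustrative, not the actual checkpoint.bats code):

    mkfifo status
    __runc checkpoint --lazy-pages --page-server 0.0.0.0:27277 \
        --status-fd 3 test_ct 3>status &
    cp_pid=$!
    criu lazy-pages --page-server --address 127.0.0.1 --port 27277 \
        --images-dir ./image-dir &
    lp_pid=$!
    # CRIU writes a byte to the status fd once the page server is
    # ready; reading it replaces the old "sleep 2"
    head -c 1 status > /dev/null
    # ... restore on the destination side, then check both exit codes
    wait "$cp_pid"
    wait "$lp_pid"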
[v2: properly close status fd after spawning criu]
[v3: move close status fd to after the first read]
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>