
Namespace encapsulation cross-over with cluster upgrade #178

@sferlin

Description

The following CRs under optional/other were applied as day 2. However, the same CRs also exist under the path intended to be enabled at installation time:

mount_namespace_config_master.yaml
mount_namespace_config_worker.yaml
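
For context, here is a minimal sketch of roughly what such a CR contains, assuming the documented MachineConfig shape for enabling kubens.service (the MC name, role label, and ignition version below are illustrative, not the exact file contents):

# Sketch of a kubens-enabling MachineConfig (worker variant shown):
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-kubens-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - enabled: true
          name: kubens.service
EOF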

What they do is described here. In short, they enable kubens.service, which I noticed is the reason that kubelet does not come back and the worker nodes become NotReady.
Logging onto the node, disabling kubens.service, and rebooting brings them back to Ready, i.e., kubelet is running again.
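
Roughly, that recovery sequence as commands (a sketch of the steps just described; note the MCO may re-enable the unit as long as the rendered MachineConfig still asks for it):

# On the affected node (e.g. via "oc debug node/<name>" and chroot /host):
systemctl disable kubens.service
systemctl reboot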

Per the documentation above, I can also confirm that encapsulation is applied, since systemd is in a different mount namespace from kubelet and CRI-O, e.g.:
sh-5.1# readlink /proc/$(pgrep kubelet)/ns/mnt
mnt:[4026533839]
sh-5.1# readlink /proc/$(pgrep crio)/ns/mnt
mnt:[4026533839]
sh-5.1# readlink /proc/1/ns/mnt
mnt:[4026531841]

I think these CRs were already applied at installation time; the cluster was 4.16 and was upgraded to 4.18. One way to identify both versions of encapsulation is to look at the MachineConfigs (mc's).
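
A quick way to spot them (the grep patterns are illustrative; actual MC names depend on how each method rendered its config):

# List MachineConfigs and look for both encapsulation variants:
oc get mc -o name | grep -Ei 'mount.?namespace|kubens'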

As a follow-up on the problem, the behaviour described above may be a cross-over between the "old way" and the "new way" of enabling namespace encapsulation, especially because encapsulation remains properly enabled even with the kubens service disabled. Originally, the intention was that the kubens "new way" would coexist with the "old way", but it turned out they do not actually coexist.
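
That can be verified directly with the same readlink approach as above (a one-liner sketch, run on a node with kubens.service disabled):

# Encapsulation still holds if kubelet's mount namespace differs from systemd's:
[ "$(readlink /proc/$(pgrep kubelet)/ns/mnt)" != "$(readlink /proc/1/ns/mnt)" ] \
  && echo "still encapsulated" || echo "not encapsulated"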

The assumption here is that this issue only became visible because the cluster has been upgraded and both methods overlap.

CC: @lack @imiller0
