Using sysctls in Google Kubernetes Engine (GKE)
I'm running a k8s cluster - 1.9.4-gke.1 - on Google Kubernetes Engine (GKE).
I need to set sysctl net.core.somaxconn to a higher value inside some containers.
I've found this official k8s page: Using Sysctls in a Kubernetes Cluster - which seemed to solve my problem. The solution was to add an annotation to my pod spec like the following:
annotations:
  security.alpha.kubernetes.io/sysctls: net.core.somaxconn=1024
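For context, a complete pod manifest carrying this annotation would look something like the following sketch (the pod and container names and the image are placeholders, not from the original question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: somaxconn-demo        # hypothetical name
  annotations:
    # Whitelisting request: ask the kubelet to apply this sysctl in the pod.
    security.alpha.kubernetes.io/sysctls: net.core.somaxconn=1024
spec:
  containers:
  - name: app                 # hypothetical container name
    image: nginx:1.13         # any image; placeholder
```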
But when I tried to create my pod:
Status: Failed
Reason: SysctlForbidden
Message: Pod forbidden sysctl: "net.core.somaxconn" not whitelisted
So I've tried to create a PodSecurityPolicy like the following:
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: sites-psp
  annotations:
    security.alpha.kubernetes.io/sysctls: 'net.core.somaxconn'
spec:
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
... but it didn't work either.
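One detail worth checking here: when the PodSecurityPolicy admission controller is enabled, a policy only takes effect for pods whose service account is authorized to `use` it via RBAC. A minimal sketch of such a binding for the PSP above (the role and binding names are placeholders, and the `default` service account / namespace are assumptions) would be:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sites-psp-user            # hypothetical name
rules:
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["sites-psp"]    # the PSP defined above
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sites-psp-user-binding    # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sites-psp-user
subjects:
- kind: ServiceAccount
  name: default                   # assumed service account
  namespace: default              # assumed namespace
```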
I've also found that I can use a kubelet argument on every node to whitelist the specific sysctl: --experimental-allowed-unsafe-sysctls=net.core.somaxconn
I've added this argument to the KUBELET_TEST_ARGS setting on my GCE machine and restarted it. From the output of the ps command, it seems the option was successfully added to the kubelet process on startup:
/home/kubernetes/bin/kubelet --v=2 --kube-reserved=cpu=60m,memory=960Mi --experimental-allowed-unsafe-sysctls=net.core.somaxconn --allow-privileged=true --cgroup-root=/ --cloud-provider=gce --cluster-dns=10.51.240.10 --cluster-domain=cluster.local --pod-manifest-path=/etc/kubernetes/manifests --experimental-mounter-path=/home/kubernetes/containerized_mounter/mounter --experimental-check-node-capabilities-before-mount=true --cert-dir=/var/lib/kubelet/pki/ --enable-debugging-handlers=true --bootstrap-kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --anonymous-auth=false --authorization-mode=Webhook --client-ca-file=/etc/srv/kubernetes/pki/ca-certificates.crt --cni-bin-dir=/home/kubernetes/bin --network-plugin=kubenet --volume-plugin-dir=/home/kubernetes/flexvolume --node-labels=beta.kubernetes.io/fluentd-ds-ready=true,cloud.google.com/gke-nodepool=temp-pool --eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5% --feature-gates=ExperimentalCriticalPodAnnotation=true
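For reference, the node-side change amounts to something like the sketch below. The file path is an assumption (GKE node images differ, and KUBELET_TEST_ARGS is not a documented interface); only the flag itself comes from the question above:

```shell
# Illustrative only: append the flag to the variable the kubelet
# startup script reads (file location is an assumption).
echo 'KUBELET_TEST_ARGS="--experimental-allowed-unsafe-sysctls=net.core.somaxconn"' \
  | sudo tee -a /etc/default/kubelet

# Restart the kubelet and confirm the flag shows up in the process list.
sudo systemctl restart kubelet
ps aux | grep '[k]ubelet'
```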
The problem is that I keep receiving a message telling me that my pod cannot be started because sysctl net.core.somaxconn is not whitelisted.
Is there some limitation on GKE that prevents me from whitelisting a sysctl? Am I doing something wrong?
This is an intentional Kubernetes limitation. There is an open PR to add net.core.somaxconn to the whitelist here: https://github.com/kubernetes/kubernetes/pull/54896
As far as I know, there isn't a way to override this behavior on GKE.
Until sysctl support becomes better integrated, you can put this in your pod spec:
spec:
  initContainers:
  - name: sysctl-buddy
    image: busybox:1.29
    securityContext:
      privileged: true
    command: ["/bin/sh"]
    args:
    - -c
    - sysctl -w net.core.somaxconn=4096 vm.overcommit_memory=1
    resources:
      requests:
        cpu: 1m
        memory: 1Mi
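This works for net.core.somaxconn because it is a per-network-namespace sysctl and all containers in a pod share one network namespace, so the privileged init container's write carries over to the main containers. To verify the value took effect, you can read it back from a running container (the pod name below is a placeholder):

```shell
# Pod name is a placeholder for your own workload.
kubectl exec somaxconn-demo -- cat /proc/sys/net/core/somaxconn
```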