How to set up an audit policy in kube-apiserver?
I've been reading about how to set up auditing in Kubernetes here, which basically says that in order to enable auditing I have to specify a YAML policy file to kube-apiserver when starting it up, using the flag --audit-policy-file.
Now, there are two things I don't understand about how to achieve this:

1. What's the proper way to add/update a startup parameter of the command that runs kube-apiserver? Should I use kops edit cluster, as suggested here: https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#kubeapiserver ? Surprisingly, Kubernetes does not create a deployment for this; should I create it myself?
2. How can I reference the policy file afterwards (e.g. --audit-policy-file=/some/path/my-audit-file.yaml) so that it's available in the filesystem when the kube-apiserver startup command runs? Do I create a configMap with it and/or a volume?

Thanks!
What's the proper way to add/update a startup parameter of the command that runs kube-apiserver?
In 99% of the ways that I have seen Kubernetes clusters deployed, the kubelet binary on the Nodes reads the Kubernetes descriptors in /etc/kubernetes/manifests on the host filesystem and runs the Pods described therein. So the answer to the first question is to edit, or have the configuration management tool you are using update, the file /etc/kubernetes/manifests/kube-apiserver.yaml (or hopefully a very similarly named file). If you have multiple master Nodes, you will need to repeat that process on all of them. In most cases, the kubelet binary will see the change to the manifest file and will restart the apiserver's Pod automatically, but in the worst case restarting kubelet may be required.

Be sure to watch the output of the newly started apiserver's docker container to check for errors, and only roll the change out to the other apiserver manifest files after you have confirmed it works correctly.
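As a runnable sketch of that edit, performed here on a temporary copy of a stripped-down manifest (since changing the real /etc/kubernetes/manifests/kube-apiserver.yaml immediately restarts the apiserver; the flag value mirrors the question's example):

```shell
# Work on a temporary copy; on a real master this would be
# /etc/kubernetes/manifests/kube-apiserver.yaml, edited in place over ssh.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --advertise-address=10.0.0.1
EOF

# Insert the audit flag right after the kube-apiserver command entry;
# on a real master, kubelet notices the changed file and restarts the Pod.
sed -i 's|^\( *\)- kube-apiserver$|&\n\1- --audit-policy-file=/some/path/my-audit-file.yaml|' "$MANIFEST"

# Show the inserted flag line
grep -- '--audit-policy-file' "$MANIFEST"
```

After an edit like this on a real master, watch the apiserver container's logs (e.g. with docker logs) before touching the other masters.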
How can I reference this file afterwards, so it's available in the filesystem when the kube-apiserver startup command runs?
Roughly the same answer: either via ssh or any on-machine configuration management tool. The only asterisk is that, since the apiserver's manifest file is a normal Pod declaration, you will want to be mindful of the volume: and volumeMount: entries, just as you would for any other in-cluster Pod. That is likely to be fine if your audit-policy.yaml lives in or under /etc/kubernetes, since that directory is already volume-mounted into the Pod (again: most of the time). It's writing out the audit log file that will most likely require changes, since unlike the rest of the config the log file path cannot be readOnly: true, and thus will at minimum require a second volumeMount without readOnly: true, and will likely also require a second volume: hostPath: to make the log directory visible inside the Pod.
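Concretely, the two extra pieces described above, a writable volumeMount plus a hostPath volume for the log, might look like this in the apiserver manifest (the log path here is illustrative, not from the original answer):

```yaml
volumeMounts:
- mountPath: /var/log/kubernetes/audit.log
  name: audit-log            # no readOnly: true -- the apiserver appends to this file
volumes:
- hostPath:
    path: /var/log/kubernetes/audit.log
    type: FileOrCreate
  name: audit-log
```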
I actually haven't tried using a ConfigMap for the apiserver itself, as that's very meta. But in a multi-master setup, I don't know that it's impossible, either. Just be cautious, because in such a self-referential setup it would be very easy to bring down all the masters with a bad configuration, since they wouldn't be able to communicate with themselves to read the updated config.
Kubelet continuously monitors /etc/kubernetes/manifests for changes to the static Pod definitions. There is no Deployment associated with the kube-apiserver config, and modifying the Pod object directly through the API does not work either; the manifest file on disk is the source of truth.
Add the --audit-policy-file=/some/path/my-audit-file.yaml argument, and preferably also the audit log via --audit-log-path=/var/log/apiserver-audit.log, to /etc/kubernetes/manifests/kube-apiserver.yaml.

Then add volume mounts for both the audit-log-path and the audit-policy-file. E.g.:

volumeMounts:
- mountPath: /some/path/my-audit-file.yaml
  name: audit
  readOnly: true
- mountPath: /var/log/apiserver-audit.log
  name: audit-log
  readOnly: false
...
volumes:
- hostPath:
    path: /some/path/my-audit-file.yaml
    type: File
  name: audit
- hostPath:
    path: /var/log/apiserver-audit.log
    type: FileOrCreate
  name: audit-log
...
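For completeness, the two flags themselves go in the same manifest, in the container's command list (a fragment matching the paths above):

```yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --audit-policy-file=/some/path/my-audit-file.yaml
    - --audit-log-path=/var/log/apiserver-audit.log
```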
If you don't mount the volumes, the apiserver can fail to start, and any kubectl command will then return an error like:

The connection to the server xxxx:yyyy was refused - did you specify the right host or port?
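Neither the flags nor the mounts define what actually gets audited; that is the job of the policy file itself, which the answers above never show. A minimal policy that logs request metadata for every request (a common starting point, not taken from the original answers) would be:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```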